Issues in Detection of AI-Generated Source Code

Date
2024-09-17
Abstract
AI coding assistants hold the promise of revolutionizing software development, but they also introduce new risks. The underlying large language models (LLMs) can generate faulty or insecure code, either unintentionally or through malicious attacks. AI assistants may also inadvertently reproduce copyrighted code, creating legal exposure. To mitigate the risks associated with AI-generated code, it is crucial to track its origin throughout the software supply chain, which requires distinguishing between human- and AI-authored code. We first conducted a study investigating the feasibility of using lexical and syntactic features for this purpose. The results were promising: our classifier reached 92% accuracy, which encouraged us to enhance our stylometric methods and delve deeper into the problem of detecting AI-generated code. Next, we used a larger dataset with an expanded feature set. Our classifiers achieved up to 93% accuracy on standardized tasks, indicating that it is possible to reliably differentiate between human- and AI-generated code. Subsequently, we assessed the resilience of these methods against adversarial attacks that use the LLM itself as an obfuscation tool. We introduce the concepts of LLM-based obfuscation and alteration attacks and demonstrate their efficacy in evading stylometric detection. Both attacks notably degraded classifier performance: relative to our trained AI-code detection classifiers, recall dipped to 58% under obfuscation and 73% under alteration. This substantial decline shows that the models fail to correctly identify AI-generated code under adversarial conditions; the attacks effectively disguised the AI-generated code, enabling it to bypass the classifiers undetected. These findings underscore the challenges posed by adversarial techniques and highlight the need for more robust detection methods.
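The abstract does not specify the feature set or classifier used in the thesis, so the following is only a minimal sketch of what a lexical/syntactic stylometric detector of this kind might look like. The specific features (token counts, identifier diversity, AST size) and the random-forest classifier are illustrative assumptions, not the author's actual pipeline.

```python
# Hypothetical sketch of stylometric AI-code detection. The feature
# choices and classifier are assumptions for illustration only.
import ast
import io
import tokenize
from sklearn.ensemble import RandomForestClassifier

def lexical_syntactic_features(src: str) -> list[float]:
    """Extract a small vector of lexical and syntactic features."""
    tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))
    nodes = list(ast.walk(ast.parse(src)))
    names = [t.string for t in tokens if t.type == tokenize.NAME]
    return [
        len(tokens),                                               # token count
        len(set(names)) / max(len(names), 1),                      # identifier diversity
        sum(len(t.string) for t in tokens) / max(len(tokens), 1),  # mean token length
        len(nodes),                                                # AST size
        sum(isinstance(n, ast.FunctionDef) for n in nodes),        # function count
        max((len(line) for line in src.splitlines()), default=0),  # longest line
    ]

def train_detector(snippets: list[str], labels: list[int]) -> RandomForestClassifier:
    """snippets: source strings; labels: 1 = AI-generated, 0 = human."""
    X = [lexical_syntactic_features(s) for s in snippets]
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```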
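Likewise, a minimal sketch of an LLM-based obfuscation attack, assuming an OpenAI-style chat API: the prompt wording, model name, and client are hypothetical stand-ins, not the thesis's actual setup. An alteration attack would follow the same pattern, with a prompt requesting small, semantics-preserving edits rather than a wholesale rewrite.

```python
# Hypothetical sketch of an LLM-based obfuscation attack: the LLM is
# asked to rewrite AI-generated code so that its style no longer matches
# the stylometric fingerprint a detector learned. Prompt wording and
# model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

OBFUSCATION_PROMPT = (
    "Rewrite the following code so that it behaves identically but is "
    "styled differently: rename variables, restructure control flow, "
    "and change formatting. Return only the rewritten code.\n\n{code}"
)

def obfuscate(code: str, model: str = "gpt-4o") -> str:
    """Return a semantics-preserving, restyled rewrite of `code`."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": OBFUSCATION_PROMPT.format(code=code)}],
    )
    return response.choices[0].message.content
```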
Keywords
Machine Learning, Large Language Models, LLMs, Code Stylometry, AI Code Assistants, ChatGPT, Software Supply Chain Security
Citation
Bukhari, S. A. (2024). Issues in detection of AI-generated source code (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.