Advisor: De Carli, Lorenzo
Author: Bukhari, Sufiyan Ahmed
Date accessioned: 2024-09-19
Date available: 2024-09-19
Date issued: 2024-09-17
Citation: Bukhari, S. A. (2024). Issues in detection of AI-generated source code (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.
URI: https://hdl.handle.net/1880/119794
Abstract: AI coding assistants hold the promise of revolutionizing software development, but they also introduce new risks. The underlying large language models (LLMs) can generate faulty or insecure code, either unintentionally or through malicious attacks. AI assistants may also inadvertently reproduce copyrighted code, leading to legal issues. To mitigate the risks associated with AI-generated code, it is crucial to track its origin throughout the software supply chain, which requires distinguishing between human- and AI-authored code. We first conducted a study investigating the feasibility of using lexical and syntactic features for this purpose. The results were promising: the approach achieved 92% accuracy, encouraging us to refine our stylometric methods and investigate the problem of detecting AI-generated code more deeply. Next, we used a larger dataset and an expanded feature set to detect AI-generated code. Our classifiers achieved up to 93% accuracy on standardized tasks, indicating that it is possible to reliably differentiate between human- and AI-generated code. Subsequently, we assessed the resilience of these methods against adversarial attacks, using the LLM itself as an obfuscation tool. We introduce the concepts of LLM-based obfuscation and alteration attacks and demonstrate their efficacy in evading stylometric detection. Both attacks notably degraded the classifiers' performance: recall dropped to 58% under obfuscation and 73% under alteration, compared to the scores of our trained AI-code detection classifiers. This substantial decline indicates that the models fail to correctly identify AI-generated code when facing adversarial attacks; the attacks effectively disguised the AI-generated code, enabling it to bypass the classifiers undetected. This underscores the challenges posed by these adversarial techniques and highlights the need for more robust detection methods.
Language: en
Rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
Subjects: Machine Learning; Large Language Models; LLMs; Code Stylometry; AI Code Assistants; ChatGPT; Software Supply Chain Security; Engineering--Electronics and Electrical; Education--Technology
Title: Issues in Detection of AI-Generated Source Code
Type: master thesis
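
Note: To illustrate the stylometric detection approach described in the abstract, the following is a minimal sketch of classifying source code authorship from lexical features, assuming Python and scikit-learn. The specific features (line-length statistics, comment density, token diversity, indentation usage) and the lexical_features and train_detector helpers are illustrative assumptions, not the thesis's actual feature set or classifier.

import re
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def lexical_features(source: str) -> list[float]:
    # Simple lexical/layout measurements over one source file. These are
    # assumed, illustrative features, not the thesis's feature set.
    lines = source.splitlines() or [""]
    tokens = re.findall(r"\w+", source)
    return [
        len(lines),                                       # file length in lines
        sum(len(l) for l in lines) / len(lines),          # mean line length
        sum(l.strip().startswith("#") for l in lines) / len(lines),  # comment density
        len(set(tokens)) / max(len(tokens), 1),           # token diversity
        sum(l.startswith("    ") for l in lines) / len(lines),        # indentation usage
    ]

def train_detector(samples):
    # samples: list of (source_code, label) pairs; label 1 = AI-generated, 0 = human.
    X = [lexical_features(src) for src, _ in samples]
    y = [label for _, label in samples]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf

A random forest is used here only because it is a common baseline for stylometry tasks; the thesis's classifier choice and syntactic features may differ.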