Issues in Detection of AI-Generated Source Code

dc.contributor.advisor: De Carli, Lorenzo
dc.contributor.author: Bukhari, Sufiyan Ahmed
dc.contributor.committeemember: Tan, Peng Seng Benjamin
dc.contributor.committeemember: Abdel Latif, Ahmad Hazzaa Khader
dc.date: 2024-11
dc.date.accessioned: 2024-09-19T14:40:19Z
dc.date.available: 2024-09-19T14:40:19Z
dc.date.issued: 2024-09-17
dc.description.abstract: AI coding assistants hold the promise of revolutionizing software development, but they also introduce new risks. The underlying large language models (LLMs) can generate faulty or insecure code, either unintentionally or through malicious attacks. Additionally, there is a risk that AI assistants will inadvertently reproduce copyrighted code, leading to legal issues. To mitigate the risks associated with AI-generated code, it is crucial to track its origin throughout the software supply chain, which requires distinguishing between human-authored and AI-authored code. We first conducted a study investigating the feasibility of using lexical and syntactic features for this purpose. The results were promising: our approach achieved 92% accuracy, encouraging us to refine our stylometric methods and delve deeper into the problem of detecting AI-generated code. Next, we used a larger dataset and an expanded feature set for detecting AI-generated code. Our classifiers achieved up to 93% accuracy on standardized tasks, indicating that it is possible to reliably differentiate between human-written and AI-generated code. Subsequently, we assessed the resilience of these methods against adversarial attacks by using the LLM itself as an obfuscation tool. We introduced the concepts of LLM-based obfuscation and alteration attacks, demonstrating their efficacy in evading stylometric detection. Both attacks notably degraded classifier performance: recall dropped to 58% under obfuscation and 73% under alteration, compared to the scores of our trained AI-code detection classifiers. This substantial decline in performance indicates that the models cannot correctly identify AI-generated code when facing adversarial attacks. These results suggest that the attacks effectively disguised the AI-generated code, enabling it to bypass the classifiers undetected. This underscores the challenges posed by these adversarial techniques and highlights the need for more robust detection methods.
dc.identifier.citation: Bukhari, S. A. (2024). Issues in detection of AI-generated source code (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.
dc.identifier.uri: https://hdl.handle.net/1880/119794
dc.language.iso: en
dc.publisher.faculty: Graduate Studies
dc.publisher.institution: University of Calgary
dc.rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
dc.subject: Machine Learning
dc.subject: Large Language Models
dc.subject: LLMs
dc.subject: Code Stylometry
dc.subject: AI Code Assistants
dc.subject: ChatGPT
dc.subject: Software Supply Chain Security
dc.subject.classification: Engineering--Electronics and Electrical
dc.subject.classification: Education--Technology
dc.title: Issues in Detection of AI-Generated Source Code
dc.type: master thesis
thesis.degree.discipline: Engineering – Electrical & Computer
thesis.degree.grantor: University of Calgary
thesis.degree.name: Master of Science (MSc)
ucalgary.thesis.accesssetbystudent: I do not require a thesis withhold – my thesis will have open access and can be viewed and downloaded publicly as soon as possible.
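
The abstract describes a pipeline of lexical/syntactic feature extraction followed by supervised classification. As a rough illustration only, the sketch below shows what such a stylometric detector might look like; the feature choices, toy data, and use of a random forest are editorial assumptions and are not drawn from the thesis's actual feature set, corpus, or models.

```python
# Hypothetical sketch of stylometric AI-code detection: simple lexical
# features plus a supervised classifier. Illustrative only; not the
# thesis's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def lexical_features(source: str) -> list[float]:
    """Compute a few simple lexical style features for one code snippet."""
    lines = source.splitlines() or [""]
    tokens = source.split() or [""]
    return [
        float(np.mean([len(ln) for ln in lines])),   # mean line length
        float(np.std([len(ln) for ln in lines])),    # line-length spread
        sum(ln.lstrip().startswith("#") for ln in lines) / len(lines),  # comment density
        float(np.mean([len(t) for t in tokens])),    # mean token length
    ]

# Toy labeled corpus (1 = AI-generated, 0 = human-written); a real
# experiment would use thousands of snippets per class.
snippets = [
    "def add(a, b):\n    # Add two numbers and return the result.\n    return a + b",
    "x=1\ny=2\nprint(x+y)",
    "def mul(a, b):\n    # Multiply two numbers and return the result.\n    return a * b",
    "for i in range(3): print(i)",
    "def sub(a, b):\n    # Subtract b from a and return the result.\n    return a - b",
    "print('hi')",
]
labels = [1, 0, 1, 0, 1, 0]

X = np.array([lexical_features(s) for s in snippets])
y = np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=2, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("predicted:", clf.predict(X_te), "actual:", y_te)
```

In the threat model the abstract describes, an adversary would pass each AI-generated snippet back through an LLM for obfuscation or alteration before it reaches such a detector, which is what drove the recall drops reported above.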
Files
Original bundle
Name: ucalgary_2024_bukhari_sufiyan.pdf
Size: 2.02 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.62 KB
Format: Item-specific license agreed upon to submission