Article (Scientific journals)
A systematic literature review on the impact of AI models on the security of code generation.
NEGRI RIBALTA, Claudia Sofia; Geraud-Stewart, Rémi; SERGEEVA, Anastasia et al.
2024. In Frontiers in Big Data, 7, p. 1386720
Peer Reviewed verified by ORBi
 

Files

Full Text: fdata-07-1386720-2.pdf (Author postprint, 599.59 kB)
Details



Keywords :
artificial intelligence; code generation; programming; security; software engineering; Computer Science (miscellaneous); Information Systems; cyber security
Abstract :
[en] INTRODUCTION: Artificial Intelligence (AI) is increasingly used as a helper to develop computing programs. While it can boost software development and improve coding proficiency, this practice offers no guarantee of security. On the contrary, recent research shows that some AI models produce software with vulnerabilities. This situation leads to the question: how serious and widespread are the security flaws in code generated using AI models? METHODS: Through a systematic literature review, this work surveys the state of the art on how AI models impact software security and systematizes knowledge about the risks of using AI to write security-critical software. RESULTS: It identifies which well-known security weaknesses (e.g., the MITRE CWE Top 25 Most Dangerous Software Weaknesses) commonly appear in AI-generated code, reviews works discussing how vulnerabilities in AI-generated code can be exploited to compromise security, and lists attempts to improve the security of such AI-generated code. DISCUSSION: Overall, this work provides a comprehensive and systematic overview of the impact of AI on secure coding, a topic that has sparked interest and concern within the software security engineering community. It highlights the importance of setting up security measures and processes, such as code verification, and notes that such practices could be tailored to AI-aided code production.
Research center :
NCER-FT - FinTech National Centre of Excellence in Research
Disciplines :
Computer science
Author, co-author :
NEGRI RIBALTA, Claudia Sofia  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
Geraud-Stewart, Rémi;  École Normale Supérieure, Paris, France
SERGEEVA, Anastasia  ;  University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Behavioural and Cognitive Sciences (DBCS) > Lifespan Development, Family and Culture
LENZINI, Gabriele ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
External co-authors :
yes
Language :
English
Title :
A systematic literature review on the impact of AI models on the security of code generation.
Publication date :
2024
Journal title :
Frontiers in Big Data
eISSN :
2624-909X
Publisher :
Frontiers Media SA, Switzerland
Volume :
7
Pages :
1386720
Peer reviewed :
Peer Reviewed verified by ORBi
Focus Area :
Computational Sciences
FnR Project :
NCER22/IS/16570468/NCER-FT
Funding text :
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This research was funded in whole, or in part, by the Luxembourg National Research Fund (FNR), grant: NCER22/IS/16570468/NCER-FT.
Available on ORBilu :
since 10 October 2024
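
As an illustration of the kind of weakness the abstract refers to (this sketch is not taken from the paper itself), the following hypothetical Python example shows CWE-89 (SQL injection), one of the MITRE CWE Top 25 weaknesses frequently reported in studies of AI-generated code, next to the parameterized fix that a code-verification step would require. Table, column, and function names are invented for the example.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern (CWE-89): untrusted input concatenated into SQL.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Minimal in-memory database to demonstrate the difference.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # injection leaks every row: 2
print(len(find_user_safe(conn, payload)))    # parameterized lookup: 0
```

The injected condition `'1'='1'` makes the concatenated WHERE clause always true, so the unsafe query returns all users, while the parameterized version treats the whole payload as a literal name and matches nothing.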

Statistics

Number of views :
203 (18 by Unilu)
Number of downloads :
149 (5 by Unilu)
Scopus citations® :
15
Scopus citations® without self-citations :
14
OpenCitations :
0
OpenAlex citations :
14
WoS citations :
8