Article (Scientific periodicals)
A systematic literature review on the impact of AI models on the security of code generation.
NEGRI RIBALTA, Claudia Sofia; Geraud-Stewart, Rémi; SERGEEVA, Anastasia et al.
2024, in Frontiers in Big Data, 7, p. 1386720
Peer reviewed, verified by ORBi
 

Documents

Full text
fdata-07-1386720-2.pdf
Author postprint (599.59 kB)

All documents in ORBilu are protected by a user license.
Details



Keywords:
artificial intelligence; code generation; programming; security; software engineering; Computer Science (miscellaneous); Information Systems; cyber security
Abstract:
[en] INTRODUCTION: Artificial Intelligence (AI) is increasingly used as an assistant in developing computer programs. While it can boost software development and improve coding proficiency, this practice offers no guarantee of security. On the contrary, recent research shows that some AI models produce software with vulnerabilities. This situation leads to the question: how serious and widespread are the security flaws in code generated using AI models? METHODS: Through a systematic literature review, this work surveys the state of the art on how AI models impact software security. It systematizes the knowledge about the risks of using AI in coding security-critical software. RESULTS: It reviews which well-known security flaws (e.g., from the MITRE CWE Top 25 Most Dangerous Software Weaknesses) are commonly hidden in AI-generated code. It also reviews works that discuss how vulnerabilities in AI-generated code can be exploited to compromise security, and it lists attempts to improve the security of such AI-generated code. DISCUSSION: Overall, this work provides a comprehensive and systematic overview of the impact of AI on secure coding, a topic that has sparked interest and concern within the software security engineering community. It highlights the importance of setting up security measures and processes, such as code verification, and notes that such practices could be tailored to AI-aided code production.
Research center:
NCER-FT - FinTech National Centre of Excellence in Research
Disciplines:
Computer science
Author, co-author:
NEGRI RIBALTA, Claudia Sofia; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
Geraud-Stewart, Rémi; École Normale Supérieure, Paris, France
SERGEEVA, Anastasia; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Behavioural and Cognitive Sciences (DBCS) > Lifespan Development, Family and Culture
LENZINI, Gabriele; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
External co-authors:
yes
Document language:
English
Title:
A systematic literature review on the impact of AI models on the security of code generation.
Publication/release date:
2024
Journal title:
Frontiers in Big Data
eISSN:
2624-909X
Publisher:
Frontiers Media SA, Switzerland
Volume:
7
Pagination:
1386720
Peer reviewed:
Verified by ORBi
Focus Area:
Computational Sciences
FNR project:
NCER22/IS/16570468/NCER-FT
Funding (details):
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This research was funded in whole, or in part, by the Luxembourg National Research Fund (FNR), grant: NCER22/IS/16570468/NCER-FT.
Available on ORBilu:
since 10 October 2024

Statistics

Number of views
209 (including 18 Unilu)
Number of downloads
158 (including 5 Unilu)

Scopus® citations: 15
Scopus® citations (excluding self-citations): 14
OpenCitations: 0
OpenAlex citations: 14
WoS citations: 8
