Article (Scientific journals)
AutoAdapt: On the Application of AutoML for Parameter-Efficient Fine-Tuning of Pre-Trained Code Models
AKLI, Amal; CORDY, Maxime; Papadakis, Mike et al.
2025, in ACM Transactions on Software Engineering and Methodology

Files


Full Text
AutoAdapt.pdf
Publisher postprint (359.93 MB), Creative Commons License: Attribution



Details



Keywords :
PEFT; pre-trained code models; Optimization; Regularized Evolution; AutoML; NAS
Abstract :
[en] Large Language Models (LLMs) have demonstrated their ability to solve tasks across various domains, including software engineering. However, their extensive number of parameters makes full fine-tuning computationally prohibitive. Parameter-efficient fine-tuning (PEFT) methods, such as adapter fine-tuning, have been proposed to address this issue, yet they typically employ default configurations that use the same adapter settings across all layers. Concurrently, Automated Machine Learning (AutoML) has demonstrated success in hyperparameter optimization, while Neural Architecture Search (NAS) has proven effective in optimizing neural network architectures. Building on these successes, we introduce AutoAdapt, a novel approach that leverages NAS to automatically discover task-specific, layer-wise adapter configurations, allowing each layer to adopt distinct adapter parameters. AutoAdapt defines a search space tailored for adapter-based fine-tuning and employs an evolutionary algorithm to explore a diverse range of configurations, thereby evaluating the benefits of customizing each layer individually. We evaluate AutoAdapt on well-established software engineering tasks, including vulnerability detection, code clone detection, and code search. Our empirical results demonstrate that AutoAdapt outperforms manually engineered adapter configurations, achieving up to a 5% improvement in F1-score for clone detection and defect detection, and up to a 25% improvement in MRR for code search. Additionally, it surpasses other PEFT techniques, such as Prefix Tuning and LoRA. Furthermore, AutoAdapt is capable of identifying configurations that outperform even full fine-tuning, while training less than 2.5% of the model parameters. A comprehensive analysis reveals that factors such as selective layer adaptation, module selection (e.g., attention versus feed-forward layers), normalization, and dropout significantly influence performance across different tasks. Moreover, our findings suggest the possibility of transferring adapter configurations to similar datasets and tasks, thus simplifying the search for optimal PEFT settings.
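Illustrative sketch :
The abstract describes a NAS-style search over per-layer adapter configurations driven by regularized (aging) evolution. The following minimal Python sketch shows how such a layer-wise search space and evolutionary loop could look; the attribute names, value ranges, layer count, and the fitness placeholder are assumptions made here for illustration and do not reproduce the authors' actual search space or implementation.

import random

# Hypothetical per-layer adapter choices, loosely following the factors the
# abstract mentions (layer selection, attention vs. feed-forward placement,
# normalization, dropout). Names and value ranges are illustrative only.
BOTTLENECK_SIZES = [0, 16, 32, 64]          # 0 = no adapter in this layer
PLACEMENTS = ["attention", "feed_forward", "both"]
NORMALIZATIONS = [True, False]
DROPOUTS = [0.0, 0.1, 0.3]
NUM_LAYERS = 12                              # e.g. a 12-layer pre-trained code encoder

def random_layer_config():
    return {
        "bottleneck": random.choice(BOTTLENECK_SIZES),
        "placement": random.choice(PLACEMENTS),
        "layer_norm": random.choice(NORMALIZATIONS),
        "dropout": random.choice(DROPOUTS),
    }

def random_candidate():
    # One candidate = an independent adapter configuration per layer.
    return [random_layer_config() for _ in range(NUM_LAYERS)]

def mutate(candidate):
    # Regularized-evolution style mutation: re-sample one attribute of one layer.
    child = [dict(layer) for layer in candidate]
    idx = random.randrange(NUM_LAYERS)
    attr = random.choice(["bottleneck", "placement", "layer_norm", "dropout"])
    child[idx][attr] = random_layer_config()[attr]
    return child

def regularized_evolution(fitness, population_size=20, sample_size=5, cycles=100):
    # Aging evolution: the oldest individual is evicted each cycle.
    # `fitness` is a placeholder assumed to fine-tune the model with the given
    # per-layer adapter configuration and return a validation metric to maximise.
    population = [(c, fitness(c)) for c in (random_candidate() for _ in range(population_size))]
    history = list(population)
    for _ in range(cycles):
        sample = random.sample(population, sample_size)
        parent = max(sample, key=lambda pair: pair[1])
        child = mutate(parent[0])
        entry = (child, fitness(child))
        population.append(entry)
        population.pop(0)                    # age-based removal of the oldest candidate
        history.append(entry)
    return max(history, key=lambda pair: pair[1])

if __name__ == "__main__":
    # Toy fitness, purely illustrative: favours small adapters in feed-forward modules.
    # A real objective would be validation F1 (defect/clone detection) or MRR (code search).
    def toy_fitness(cfg):
        return sum((layer["bottleneck"] > 0) + (layer["placement"] == "feed_forward") for layer in cfg)

    best_cfg, best_score = regularized_evolution(toy_fitness, cycles=50)
    print(best_score)

A real search would replace the toy fitness with an actual fine-tuning-and-evaluation step, which is the expensive part that the evolutionary loop is designed to spend sparingly.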
Disciplines :
Computer science
Author, co-author :
AKLI, Amal  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
CORDY, Maxime  ;  University of Luxembourg
Papadakis, Mike ;  SnT, University of Luxembourg, Luxembourg
Le Traon, Yves ;  SnT, University of Luxembourg, Luxembourg
External co-authors :
no
Language :
English
Title :
AutoAdapt: On the Application of AutoML for Parameter-Efficient Fine-Tuning of Pre-Trained Code Models
Publication date :
10 May 2025
Journal title :
ACM Transactions on Software Engineering and Methodology
ISSN :
1049-331X
Publisher :
Association for Computing Machinery (ACM)
Peer reviewed :
Peer Reviewed verified by ORBi
Development Goals :
9. Industry, innovation and infrastructure
Funding number :
C23_IS_18182513_MiCE
Funding text :
This work is supported by the Luxembourg National Research Fund (FNR) through the CORE project under Grant C23_IS_18182513_MiCE
Available on ORBilu :
since 04 December 2025
