Eprint already available on another site (E-prints, Working papers and Research blog)
Leave it to the Specialist: Repair Sparse LLMs with Sparse Fine-Tuning via Sparsity Evolution
Xiao, Qiao; Ansell, Alan; WU, Boqian et al.
2025
 

Files


Full Text
2505.24037v1.pdf
Author postprint (4.18 MB)

All documents in ORBilu are protected by a user license.


Details



Keywords :
Large language models; Fine-tuning; Model pruning; Dynamic sparse training; Efficient fine-tuning; Parameter efficient
Abstract :
[en] Large language models (LLMs) have achieved remarkable success across various tasks but face deployment challenges due to their massive computational demands. Post-training pruning methods like SparseGPT and Wanda can effectively reduce model size, but they struggle to maintain performance at high sparsity levels, limiting their utility for downstream tasks. Existing fine-tuning methods, such as full fine-tuning and LoRA, fail to preserve sparsity because they require updating whole dense matrices, making them ill-suited for sparse LLMs. In this paper, we propose Sparsity Evolution Fine-Tuning (SEFT), a novel method designed specifically for sparse LLMs. SEFT dynamically evolves the sparse topology of pruned models during fine-tuning while preserving the overall sparsity throughout the process. The strength of SEFT lies in its ability to perform task-specific adaptation through a weight drop-and-grow strategy, enabling the pruned model to self-adapt its sparse connectivity pattern to the target dataset. Furthermore, a sensitivity-driven pruning criterion ensures that the desired sparsity level is consistently maintained throughout fine-tuning. Our experiments on various LLMs, including the LLaMA family, DeepSeek, and Mistral, across a diverse set of benchmarks demonstrate that SEFT achieves stronger performance while offering superior memory and time efficiency compared to existing baselines. Our code is publicly available at: https://github.com/QiaoXiao7282/SEFT.
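To make the drop-and-grow idea concrete, below is a minimal PyTorch-style sketch of a single topology-update step for one sparse layer, assuming a binary mask per weight matrix. It uses weight magnitude as the drop score and gradient magnitude as the grow score, which are common choices in dynamic sparse training; the paper's actual sensitivity-driven criterion and update schedule may differ, and all names and parameters here are illustrative, not the authors' implementation.

    import torch

    def drop_and_grow_step(weight: torch.Tensor,
                           mask: torch.Tensor,
                           grad: torch.Tensor,
                           update_fraction: float = 0.1):
        """Illustrative topology-update step for one sparse layer.

        Drops the lowest-scoring active weights and regrows the same
        number of inactive connections where the gradient magnitude is
        largest, so the overall sparsity level is unchanged. Weight
        magnitude stands in for the paper's sensitivity criterion.
        """
        num_active = int(mask.sum().item())
        k = max(1, int(update_fraction * num_active))

        # Flat views so in-place index assignment updates the originals.
        w, m, g = weight.view(-1), mask.view(-1), grad.view(-1)

        # Drop: deactivate the k active weights with the smallest magnitude.
        drop_scores = torch.where(m.bool(), w.abs(),
                                  torch.full_like(w, float("inf")))
        drop_idx = torch.topk(drop_scores, k, largest=False).indices
        m[drop_idx] = 0.0
        w[drop_idx] = 0.0

        # Grow: activate the k inactive positions with the largest
        # gradient magnitude; newly grown weights start at zero.
        grow_scores = torch.where(m.bool(),
                                  torch.full_like(g, -float("inf")),
                                  g.abs())
        grow_idx = torch.topk(grow_scores, k, largest=True).indices
        m[grow_idx] = 1.0
        w[grow_idx] = 0.0

        return weight, mask

In a fine-tuning loop, a step like this would typically be interleaved with masked gradient updates every fixed number of iterations, often with the update fraction annealed toward zero so the sparse topology stabilizes as training converges.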
Disciplines :
Computer science
Author, co-author :
Xiao, Qiao;  Eindhoven University of Technology, Netherlands
Ansell, Alan;  University of Cambridge
WU, Boqian;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS) ; University of Twente, Netherlands
Yin, Lu;  University of Surrey ; Eindhoven University of Technology, Netherlands
Pechenizkiy, Mykola;  Eindhoven University of Technology, Netherlands
Liu, Shiwei;  University of Oxford ; Eindhoven University of Technology, Netherlands
MOCANU, Decebal Constantin;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS) ; Eindhoven University of Technology, Netherlands
Language :
English
Title :
Leave it to the Specialist: Repair Sparse LLMs with Sparse Fine-Tuning via Sparsity Evolution
Publication date :
29 May 2025
Focus Area :
Computational Sciences
Development Goals :
9. Industry, innovation and infrastructure
Available on ORBilu :
since 02 February 2026
