This paper investigates the limitations of normalization in attention mechanisms. We begin with a theoretical framework that enables the identification of the model’s selective ability and the geometric separation involved in token selection. Our analysis includes explicit bounds on distances and separation criteria for token vectors under softmax scaling. Through experiments with a pre-trained GPT-2 model, we empirically validate our theoretical results and analyze key behaviors of the attention mechanism. Notably, we demonstrate that as the number of selected tokens increases, the model’s ability to distinguish informative tokens declines, often converging toward a uniform selection pattern. We also show that gradient sensitivity under softmax normalization presents challenges during training, especially at low temperature settings. These findings advance the current understanding of softmax-based attention mechanisms and motivate the need for more robust normalization and selection strategies in future attention architectures.
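The two softmax behaviors summarized above can be illustrated with a minimal sketch (not the paper's implementation; the scores and temperature values are illustrative assumptions): adding more near-identical scores drives the attention weights toward a uniform distribution, and at low temperature the winning token saturates toward 1, so the softmax Jacobian entry p(1 - p) for that token vanishes.

```python
import math

def softmax(scores, temperature=1.0):
    """Temperature-scaled softmax over a list of raw scores."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Behavior 1: as the number of similarly-scored tokens grows,
# the top attention weight shrinks toward the uniform value 1/k.
few = softmax([1.0, 0.9, 0.8])
many = softmax([1.0] + [0.9] * 20)
print(max(few), max(many))  # the top weight is smaller in the larger set

# Behavior 2: at low temperature the softmax saturates, and the
# diagonal of its Jacobian, p * (1 - p), collapses toward zero,
# which is the gradient-sensitivity issue the abstract refers to.
cold = softmax([1.0, 0.9], temperature=0.01)
grad_diag = cold[0] * (1 - cold[0])
print(cold[0], grad_diag)  # near 1.0 and near 0.0 respectively
```

This sketch uses only the Python standard library; the gradient expression p(1 - p) is the standard diagonal term of the softmax Jacobian, not a quantity taken from the paper.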
Disciplines :
Computer science
Author, co-author :
MUDARISOV, Timur ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SEDAN
BURTSEV, Mikhail
PETROVA, Tatiana ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SEDAN
STATE, Radu ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SEDAN
External co-authors :
yes
Language :
English
Title :
Limitations of Normalization in Attention Mechanism
Publication date :
20 October 2025
Number of pages :
17
Event name :
NeurIPS (The Thirty-Ninth Annual Conference on Neural Information Processing Systems)