Computer Science - Computer Vision and Pattern Recognition; Sparse Neural Networks; Sparse Training
Abstract :
[en] Transformers have quickly shone in the computer vision world since the
emergence of Vision Transformers (ViTs). The dominant role of convolutional
neural networks (CNNs) appears to be challenged by increasingly effective
transformer-based models. Very recently, a couple of advanced convolutional
models have struck back with large kernels motivated by the local-window attention
mechanism, showing appealing performance and efficiency. While one of them,
i.e. RepLKNet, impressively manages to scale the kernel size to 31x31 with
improved performance, the performance starts to saturate as the kernel size
grows further, in contrast to the scaling trend of advanced ViTs such as Swin
Transformer. In this paper, we explore the possibility of training convolutions
with kernels larger than 31x31 and test whether the performance gap can be
eliminated by strategically enlarging convolutions. This study culminates in a
recipe for applying extremely large kernels from the perspective of sparsity,
which can smoothly scale up kernels to 61x61 with better performance. Built on
this recipe, we propose the Sparse Large Kernel Network (SLaK), a pure CNN
architecture equipped with sparse, factorized 51x51 kernels that performs on
par with or better than state-of-the-art hierarchical Transformers and modern
ConvNet architectures such as ConvNeXt and RepLKNet on ImageNet classification, as
well as on a wide range of downstream tasks including semantic segmentation on
ADE20K, object detection on PASCAL VOC 2007, and object detection/segmentation
on MS COCO.
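As a rough illustration of the factorized large-kernel idea mentioned in the abstract, the minimal PyTorch sketch below (not part of the original record) approximates a 51x51 depthwise convolution with two parallel rectangular depthwise convolutions whose outputs are summed. The module name, the 51x5 / 5x51 split, the band width of 5, and the omission of SLaK's dynamic sparsity are assumptions made purely for illustration.

import torch
import torch.nn as nn

class FactorizedLargeKernelConv(nn.Module):
    # Illustrative sketch only: a large MxM depthwise kernel is replaced by
    # two parallel rectangular depthwise kernels (Mxb and bxM), summed.
    # SLaK's kernel-level sparsity is not modeled here.
    def __init__(self, channels: int, kernel_size: int = 51, band: int = 5):
        super().__init__()
        # Tall kernel: kernel_size x band
        self.conv_kxb = nn.Conv2d(
            channels, channels,
            kernel_size=(kernel_size, band),
            padding=(kernel_size // 2, band // 2),
            groups=channels, bias=False,
        )
        # Wide kernel: band x kernel_size
        self.conv_bxk = nn.Conv2d(
            channels, channels,
            kernel_size=(band, kernel_size),
            padding=(band // 2, kernel_size // 2),
            groups=channels, bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the two rectangular responses to approximate one square kernel.
        return self.conv_kxb(x) + self.conv_bxk(x)

if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    block = FactorizedLargeKernelConv(channels=64)
    print(block(x).shape)  # torch.Size([1, 64, 56, 56])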
Disciplines :
Computer science
Author, co-author :
Liu, Shiwei; UT Austin - University of Texas at Austin [US-TX] ; Eindhoven University of Technology [NL]
Chen, Tianlong; UT Austin - University of Texas at Austin [US-TX]
Chen, Xiaohan; UT Austin - University of Texas at Austin [US-TX]
Chen, Xuxi; UT Austin - University of Texas at Austin [US-TX]
Xiao, Qiao; Eindhoven University of Technology [NL]
Wu, Boqian; University of Twente [NL]
Kärkkäinen, Tommi; University of Jyväskylä [FI]
Pechenizkiy, Mykola; Eindhoven University of Technology [NL]
MOCANU, Decebal Constantin ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS) ; Eindhoven University of Technology [NL] > Mathematics and Computer Science ; University of Twente [NL] > Computer Science
Wang, Zhangyang; UT Austin - University of Texas at Austin [US-TX]
External co-authors :
yes
Language :
English
Title :
More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity
Publication date :
01 February 2023
Event name :
ICLR 2023: The Eleventh International Conference on Learning Representations