S. Anwar, K. Hwang, and W. Sung. Structured pruning of deep convolutional neural networks. ACM Journal on Emerging Technologies in Computing Systems (JETC), 13 (3): 32, 2017.
M. Cogswell, F. Ahmed, R. Girshick, L. Zitnick, and D. Batra. Reducing overfitting in deep networks by decorrelating representations. In International Conference on Learning Representations (ICLR), 2016.
X. Ding, G. Ding, J. Han, and S. Tang. Auto-balanced filter pruning for efficient convolutional neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Annals of Statistics, 32 (2): 407-499, 2004.
Y. Guo, A. Yao, and Y. Chen. Dynamic network surgery for efficient DNNs. In Advances in Neural Information Processing Systems, pages 1379-1387, 2016.
S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135-1143, 2015.
Y. He, G. Kang, X. Dong, Y. Fu, and Y. Yang. Soft filter pruning for accelerating deep convolutional neural networks. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 2234-2240. AAAI Press, 2018.
Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, and S. Han. AMC: AutoML for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), pages 784-800, 2018.
Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1389-1397, 2017.
J. Huang, P. Breheny, and S. Ma. A selective review of group selection in high-dimensional models. Statistical Science, 27 (4), 2012.
Q. Huang, K. Zhou, S. You, and U. Neumann. Learning to prune filters in convolutional neural networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 709-718. IEEE, 2018.
V. Jayasundara, S. Jayasekara, H. Jayasekara, J. Rajasegaran, S. Seneviratne, and R. Rodrigo. TextCaps: Handwritten character recognition with very small datasets. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 254-262. IEEE, 2019.
M. M. Kabir, M. M. Islam, and K. Murase. A new wrapper feature selection approach using neural network. Neurocomputing, 73 (16-18): 3273-3283, 2010.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
D. Li, X. Wang, and D. Kong. DeepRebirth: Accelerating deep neural network execution on mobile devices. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning filters for efficient ConvNets. In International Conference on Learning Representations (ICLR), 2017.
T. Li, B. Wu, Y. Yang, Y. Fan, Y. Zhang, and W. Liu. Compressing convolutional neural networks via factorized convolutional filters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3977-3986, 2019.
B. Liu, M. Wang, H. Foroosh, M. Tappen, and M. Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 806-814, 2015.
J.-H. Luo, J. Wu, and W. Lin. ThiNet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, pages 5058-5066, 2017.
N. Meinshausen. Relaxed lasso. Computational Statistics & Data Analysis, 52 (1): 374-393, 2007.
M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 722-729. IEEE, 2008.
O. K. Oyedotun, G. Demisse, A. El Rahman Shabayek, D. Aouada, and B. Ottersten. Facial expression recognition via joint deep learning of RGB-depth map latent representations. In Proceedings of the IEEE International Conference on Computer Vision, pages 3161-3168, 2017.
O. K. Oyedotun, A. El Rahman Shabayek, D. Aouada, and B. Ottersten. Highway network block with gates constraints for training very deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1658-1667, 2018.
T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247 (4945): 978-982, 1990.
W. Ren, L. Ma, J. Zhang, J. Pan, X. Cao, W. Liu, and M.-H. Yang. Gated fusion network for single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3253-3261, 2018.
S. Rosset, J. Zhu, and T. Hastie. Boosting as a regularized path to a maximum margin classifier. Journal of Machine Learning Research, 5 (Aug): 941-973, 2004.
S. Scardapane, D. Comminiello, A. Hussain, and A. Uncini. Group sparse regularization for deep neural networks. Neurocomputing, 241: 81-89, 2017.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
P. Singh, V. S. R. Kadi, N. Verma, and V. P. Namboodiri. Stability based filter pruning for accelerating deep CNNs. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1166-1174. IEEE, 2019.
A. Singla, L. Yuan, and T. Ebrahimi. Food/non-food image classification and food categorization using pre-trained GoogLeNet model. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, pages 3-11. ACM, 2016.
S. Srinivas and R. V. Babu. Data-free parameter pruning for deep neural networks. In British Machine Vision Conference (BMVC), 2015.
S. Srinivas, A. Subramanya, and R. Venkatesh Babu. Training sparse neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 138-145, 2017.
R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58 (1): 267-288, 1996.
H. Wang, B. J. Lengerich, B. Aragam, and E. P. Xing. Precision lasso: accounting for correlations and linear dependencies in high-dimensional genomic data. Bioinformatics, 2018.
W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074-2082, 2016.
Y. Wu, X. Qin, Y. Pan, and C. Yuan. Convolution neural network based transfer learning for classification of flowers. In 2018 IEEE 3rd International Conference on Signal and Image Processing (ICSIP), pages 562-566. IEEE, 2018.
H. Xu, C. Caramanis, and S. Mannor. Sparse algorithms are not stable: A no-free-lunch theorem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34 (1): 187-193, 2011.
R. Yu, A. Li, C.-F. Chen, J.-H. Lai, V. I. Morariu, X. Han, M. Gao, C.-Y. Lin, and L. S. Davis. NISP: Pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9194-9203, 2018.
M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68 (1): 49-67, 2006.
M. Yunus, A. Saefuddin, and A. M. Soleh. Characteristics of group lasso in handling high correlated data. Applied Mathematical Sciences, 11 (20): 953-961, 2017.
S. Zagoruyko and N. Komodakis. Wide residual networks. In British Machine Vision Conference (BMVC), 2016.
P. Zhao and B. Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7 (Nov): 2541-2563, 2006.
H. Zhou, J. M. Alvarez, and F. Porikli. Less is more: Towards compact CNNs. In European Conference on Computer Vision, pages 662-677. Springer, 2016.
Z. Zhuang, M. Tan, B. Zhuang, J. Liu, Y. Guo, Q. Wu, J. Huang, and J. Zhu. Discrimination-aware channel pruning for deep neural networks. In Advances in Neural Information Processing Systems, pages 875-886, 2018.
H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67 (2): 301-320, 2005.