Deep Learning; Neural Network Compression; Variational Autoencoders
Abstract:
Deploying deep neural networks on edge devices to accomplish
task-specific objectives in the real world requires reducing their
memory footprint, power consumption, and latency. This can be achieved
through efficient model compression. Disentangled latent
representations produced by variational autoencoder (VAE) networks are
a promising approach to model compression because they mainly retain
task-specific information and discard information that is useless for
the task at hand. We use the Beta-VAE framework combined with a
standard pruning criterion to investigate how forcing the network to
learn disentangled representations affects the pruning process for the
task of classification. In particular, we perform experiments on the
MNIST and CIFAR10 datasets, examine the challenges of disentanglement,
and propose a path forward for future work.
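The approach described above combines a Beta-VAE training objective with a standard pruning criterion. A minimal NumPy sketch of these two ingredients follows; the function names are illustrative, and magnitude-based pruning is assumed here as the "standard criterion" since the abstract does not name one:

```python
import numpy as np

def beta_vae_loss(recon_loss, mu, logvar, beta=4.0):
    """Beta-VAE objective: reconstruction term plus beta-weighted KL term.

    Uses the closed-form KL divergence between the approximate posterior
    N(mu, exp(logvar)) and the standard normal prior N(0, I). A beta > 1
    pressures the encoder toward disentangled latent factors.
    """
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon_loss + beta * kl

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask
```

In a full pipeline these would be applied to an actual network (e.g. in PyTorch), with the mask held fixed during any fine-tuning after pruning.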
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI² - Computer Vision Imaging & Machine Intelligence
Disciplines:
Physics, chemistry, mathematics & earth sciences: Multidisciplinary, general & others; Electrical & electronics engineering; Engineering, computing & technology: Multidisciplinary, general & others
Author, co-author:
SHNEIDER, Carl ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
ROSTAMI ABENDANSARI, Peyman ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
KACEM, Anis ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
SINHA, Nilotpal ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
SHABAYEK, Abd El Rahman ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
AOUADA, Djamila ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > CVI2
External co-authors:
no
Document language:
English
Title:
Impact of Disentanglement on Pruning Neural Networks
Publication/release date:
19 July 2023
Event name:
International Symposium on Computational Sensing (ISCS23)
Event organizers:
Thomas Feuillen, Amirafshar Moshtaghpour
Event location:
Luxembourg, Luxembourg
Event dates:
12-06-2023 to 14-06-2023
Event scope:
International
Abstract reference:
Shneider, Carl, Peyman Rostami, Anis Kacem, Nilotpal Sinha, Abd El Rahman Shabayek, and Djamila Aouada. "Impact of Disentanglement on Pruning Neural Networks." arXiv preprint arXiv:2307.09994 (2023).
Focus areas:
Computational Sciences; Security, Reliability and Trust