Neural Network Compression; Deep Learning; Edge Devices
Abstract:
[en] Efficient model compression techniques are required to deploy deep neural networks (DNNs) on edge devices for task-specific objectives. A variational autoencoder (VAE) framework is combined with a pruning criterion to investigate how learning disentangled representations affects the pruning process for the classification task.
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI² - Computer Vision Imaging & Machine Intelligence