Reference : Automated Search for Configurations of Deep Neural Network Architectures
Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
Computational Sciences
http://hdl.handle.net/10993/39320
Automated Search for Configurations of Deep Neural Network Architectures
English
Ghamizi, Salah [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Computer Science and Communications Research Unit (CSC)]
Cordy, Maxime [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
Papadakis, Mike [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Computer Science and Communications Research Unit (CSC)]
Le Traon, Yves [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)]
2019
Automated Search for Configurations of Convolutional Neural Network Architectures
Volume A, pages 119–130
Yes
International
SPLC '19: 23rd International Systems and Software Product Line Conference
September 9–13, 2019
[en] neural networks ; feature model ; configuration search
[en] Deep Neural Networks (DNNs) are intensively used to solve a wide variety of complex problems. Although powerful, such systems require manual configuration and tuning. To this end, we view DNNs as configurable systems and propose an end-to-end framework that allows the configuration, evaluation and automated search of DNN architectures. Our contribution is threefold. First, we model the variability of DNN architectures with a Feature Model (FM) that generalizes over existing architectures; each valid configuration of the FM corresponds to a valid DNN model that can be built and trained. Second, we implement, on top of TensorFlow, an automated procedure to deploy, train and evaluate the performance of a configured model. Third, we propose a method to search for configurations and demonstrate that it leads to good DNN models. We evaluate our method by applying it to image classification tasks (MNIST, CIFAR-10) and show that, with a limited amount of computation and training, it can identify high-performing, high-accuracy architectures. We also demonstrate that it outperforms existing state-of-the-art architectures handcrafted by ML researchers. Our FM and framework have been released to support replication and future research.
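The abstract's core idea — a feature model whose valid configurations each map to a trainable network, combined with a search over that space — can be illustrated with a minimal, self-contained sketch. This is not the paper's actual FM or search procedure; the feature names, the cross-tree constraint, and the proxy scoring function below are all invented for illustration (the real framework trains each configured model with TensorFlow and uses its measured accuracy as the score):

```python
from itertools import product

# Hypothetical, tiny feature model for a CNN search space
# (illustrative only; the paper's FM is far richer).
FEATURES = {
    "n_conv_blocks": [1, 2, 3],
    "filters": [16, 32, 64],
    "use_dropout": [False, True],
    "dense_units": [64, 128],
}

def is_valid(cfg):
    """Example cross-tree constraint: the deepest networks must use dropout."""
    return cfg["use_dropout"] or cfg["n_conv_blocks"] < 3

def all_valid_configurations():
    """Enumerate every feature assignment and keep those satisfying the FM."""
    keys = list(FEATURES)
    for values in product(*(FEATURES[k] for k in keys)):
        cfg = dict(zip(keys, values))
        if is_valid(cfg):
            yield cfg

def search(evaluate):
    """Exhaustive search: return the valid configuration with the best score."""
    return max(all_valid_configurations(), key=evaluate)

def proxy_score(cfg):
    """Stand-in for deploying, training and evaluating the configured model."""
    return cfg["n_conv_blocks"] * cfg["filters"] + cfg["dense_units"]

best = search(proxy_score)
```

In the paper's setting, `proxy_score` would be replaced by building the DNN described by `cfg`, training it briefly, and returning its validation accuracy; the search strategy over valid configurations is likewise more sophisticated than exhaustive enumeration.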
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Security Design and Validation Research Group (SerVal)
Fonds National de la Recherche - FnR
CODEMATES
Researchers ; Professionals
FnR ; FNR11686509 > Michail Papadakis > CODEMATES > COntinuous DEvelopment with Mutation Analysis and TESting > 01/09/2018 > 31/08/2021 > 2017

File(s) associated with this reference

Fulltext file(s):

File: SPLC19___Configuration_of_Deep_Neural_Networks.pdf
Commentary: Main Paper
Version: Author preprint
Size: 1.12 MB
Access: Open access (View/Open)


All documents in ORBilu are protected by a user license.