Reference : Evolving a Deep Neural Network Training Time Estimator
Scientific congresses, symposiums and conference proceedings : Paper published in a journal
Engineering, computing & technology : Computer science
Computational Sciences
http://hdl.handle.net/10993/42856
Evolving a Deep Neural Network Training Time Estimator
English
Pinel, Frédéric [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)]
Yin, Jian-xiong
Hundt, Christian [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
Kieffer, Emmanuel [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)]
Varrette, Sébastien [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Computer Science and Communications Research Unit (CSC)]
Bouvry, Pascal [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)]
See, Simon
Feb-2020
Communications in Computer and Information Science
Springer
Yes
No
International
1865-0929
Berlin
Germany
Third International Conference on Optimization and Learning (OLA)
from 17-02-2020 to 19-02-2020
University of Cadiz
Cadiz
Spain
[en] We present a procedure for designing a Deep Neural Network (DNN) that estimates the per-batch execution time for training a deep neural network on GPU accelerators. The estimator is intended to be embedded in the scheduler of a shared GPU infrastructure, where it provides estimated training times for a wide range of network architectures when a user submits a training job. To this end, a very short and simple representation of a given DNN is chosen. To compensate for the limited descriptive power of this basic representation, a novel co-evolutionary approach is taken to fit the estimator. The training set for the estimator, i.e. a set of DNNs, is evolved by an evolutionary algorithm that optimizes the accuracy of the estimator. In the process, the genetic algorithm evolves DNNs, generates Python-Keras programs, and projects them onto the simple representation. The genetic operators are dynamic: they change with the estimator's accuracy in order to balance accuracy with generalization. Results show that, despite the low degree of information in the representation and the simple initial design of the predictor, co-evolving the training set performs better than a near-randomly generated population of DNNs.
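The co-evolutionary loop described in the abstract can be sketched in Python. This is a minimal illustration under heavy assumptions, not the paper's implementation: the simple DNN representation is reduced to a hypothetical (layers, units) pair, `true_batch_time` is a made-up formula standing in for actually timing a generated Keras job on a GPU, a linear model stands in for the DNN estimator, and the mutation operator is fixed rather than dynamic. Only the overall structure, where a genetic algorithm keeps the configurations the estimator predicts worst so they become the next training samples, follows the abstract.

```python
import random

random.seed(0)

def true_batch_time(layers, units):
    # Stand-in for measuring a generated Python-Keras training job on a GPU
    # (hypothetical formula, linear in the simple representation).
    return 2.0 + 0.5 * layers + 1.2 * (units / 100.0)

def predict(w, layers, units):
    # Linear estimator over the simple (layers, units) representation.
    return w[0] + w[1] * layers + w[2] * (units / 100.0)

def fit(samples, epochs=300, lr=0.005):
    # Least-squares fit of the estimator by stochastic gradient descent.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for layers, units, t in samples:
            err = predict(w, layers, units) - t
            w[0] -= lr * err
            w[1] -= lr * err * layers
            w[2] -= lr * err * (units / 100.0)
    return w

def mutate(ind):
    # Fixed mutation; the paper's operators adapt to the estimator's accuracy.
    layers, units = ind
    return (min(10, max(1, layers + random.choice([-1, 1]))),
            min(256, max(8, units + random.choice([-16, 16]))))

population = [(random.randint(1, 10), random.randrange(8, 264, 8))
              for _ in range(20)]
training_set = []
for generation in range(5):
    # Measure every config in the current population and grow the training set.
    training_set += [(l, u, true_batch_time(l, u)) for l, u in population]
    w = fit(training_set)
    # Keep the configs the estimator predicts worst: they are the most
    # informative new training samples for the next generation.
    scored = sorted(population,
                    key=lambda ind: abs(predict(w, *ind) - true_batch_time(*ind)),
                    reverse=True)
    parents = scored[:10]
    population = parents + [mutate(p) for p in parents]
```

Selecting for high estimator error is what makes this co-evolutionary: the population adapts to the estimator's weaknesses, and the estimator is refitted on the resulting harder samples each generation.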

File(s) associated to this reference

Fulltext file(s):

Pinel2020_Chapter_EvolvingADeepNeuralNetworkTrai.pdf (Publisher postprint, 301.82 kB, Open access)


All documents in ORBilu are protected by a user license.