Reference : Towards Exploring the Limitations of Active Learning: An Empirical Study
Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
Security, Reliability and Trust
http://hdl.handle.net/10993/48351
Towards Exploring the Limitations of Active Learning: An Empirical Study
English
Hu, Qiang [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal >]
Guo, Yuejun [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal >]
Cordy, Maxime [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal >]
Xie, Xiaofei []
Ma, Wei [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > >]
Papadakis, Mike [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS) >]
Le Traon, Yves [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal >]
2021
The 36th IEEE/ACM International Conference on Automated Software Engineering.
Yes
The 36th IEEE/ACM International Conference on Automated Software Engineering (ASE 2021)
from 15-11-2021 to 19-11-2021
[en] deep learning ; data selection ; active learning ; empirical study
[en] Deep neural networks (DNNs) are increasingly deployed as integral parts of software systems. However, due to the complex interconnections among hidden layers and the large number of hyperparameters, DNNs must be trained on large amounts of labeled data, which demands extensive human effort to collect and label. To alleviate this growing demand, a surge of recent studies has proposed different metrics to select a small yet informative subset of data for model training. These works demonstrate that DNN models can achieve competitive performance using a carefully selected small set of data. However, the literature lacks a proper investigation of the limitations of data selection metrics, which is crucial for applying them in practice. In this paper, we fill this gap and conduct an extensive empirical study to explore the limits of selection metrics. Our study involves 15 selection metrics evaluated over 5 datasets (2 image classification tasks and 3 text classification tasks), 10 DNN architectures, and 20 labeling budgets (the ratio of training data that is labeled). Our findings reveal that, while selection metrics are usually effective in producing accurate models, they may induce a loss of model robustness (against adversarial examples) and resilience to compression. Overall, we demonstrate a trade-off between labeling effort and different model qualities. This paves the way for future research on devising selection metrics that consider multiple quality criteria.
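For context on the selection metrics the abstract evaluates, below is a minimal sketch of one common uncertainty-based metric, predictive entropy, applied under a labeling budget. This is an illustration in plain NumPy under assumed names (entropy_select, pool_probs) and an assumed budget value; it is not code from the paper, which evaluates 15 different metrics.

import numpy as np

def entropy_select(probs, budget):
    # probs: (n_samples, n_classes) softmax outputs of the current model
    # on the unlabeled pool; budget: labeling budget as a ratio of the
    # pool size (e.g. 0.4 selects 40% of the pool for labeling).
    eps = 1e-12  # avoid log(0) on fully confident predictions
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    k = max(1, int(budget * len(probs)))
    # argsort is ascending, so take the k largest entropies, highest first
    return np.argsort(entropy)[-k:][::-1]

# Example: a 5-sample unlabeled pool over 3 classes, 40% labeling budget.
pool_probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy, worth labeling
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
    [0.90, 0.05, 0.05],
])
print(entropy_select(pool_probs, budget=0.4))  # -> [1 3]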
FNR
CORE project C18/IS/12669767/STELLAR/LeTraon
FnR ; FNR12669767 > Yves Le Traon > STELLAR > Testing Self-learning Systems > 01/09/2019 > 31/08/2022 > 2018

File(s) associated to this reference

Fulltext file(s):
ASE2021-ALempirical.pdf (Author postprint, 2.01 MB, Open access)
