Article (Scientific journals)
A comprehensive assessment benchmark for rigorously evaluating deep learning image classifiers.
SPRATLING, Michael
2025, in Neural Networks, 192, p. 107801
 

Files :
robustness_evaluation.pdf, author preprint (2.16 MB), Creative Commons License: Attribution, Non-Commercial, ShareAlike

Details



Keywords :
Adversarial training; Classification; Common corruptions; Data augmentation; Deep learning; Generalisation; Neural networks; Open-set; Out-of-distribution; Robustness; Artificial Intelligence
Abstract :
[en] Reliable and robust evaluation methods are a necessary first step towards developing machine learning models that are themselves robust and reliable. Unfortunately, the evaluation protocols typically used to assess classifiers fail to evaluate performance comprehensively, as they tend to rely on limited types of test data and ignore others. For example, using only the standard test data fails to evaluate the predictions the classifier makes for samples from classes it was not trained on. Conversely, testing only with data containing samples from unknown classes fails to evaluate how well the classifier predicts the labels of known classes. This article advocates benchmarking performance using a wide range of different types of data, together with a single metric that can be applied to all such data types to produce a consistent evaluation of performance. Using the proposed benchmark, it is found that current deep neural networks, including those trained with methods believed to produce state-of-the-art robustness, are vulnerable to making mistakes on certain types of data. This means that such models will be unreliable in real-world scenarios, where they may encounter data from many different domains, and that they are insecure, as they can easily be fooled into making wrong decisions. It is hoped that these results will motivate the wider adoption of more comprehensive testing methods that will, in turn, lead to the development of more robust machine learning methods in the future. Code: https://codeberg.org/mwspratling/RobustnessEvaluation.
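Note: the abstract does not describe the benchmark's single metric in detail; the Python sketch below is only an illustration of the general idea, assuming one plausible scheme in which abstention ("none of the known classes") is treated as an extra pseudo-label so that ordinary accuracy applies uniformly to every test-data type. The names unified_accuracy and REJECT and the confidence threshold are hypothetical and not taken from the paper.

import numpy as np

# Assumed scheme (not necessarily the paper's metric): abstention is an
# extra pseudo-label, so plain accuracy applies uniformly to clean,
# corrupted, adversarial, and unknown-class (open-set) test data.
REJECT = -1  # pseudo-label meaning "none of the known classes"

def unified_accuracy(logits, targets, threshold=0.5):
    """Single accuracy rule for every test-data type.

    Known-class samples (target >= 0) are correct only if the true class
    is predicted with confidence >= threshold; unknown-class samples
    (target == REJECT) are correct only if the classifier abstains.
    """
    # Numerically stable softmax confidences.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    preds = np.where(conf >= threshold, probs.argmax(axis=1), REJECT)
    return float((preds == targets).mean())

# Toy demonstration with simulated classifier outputs.
rng = np.random.default_rng(0)
n, k = 1000, 10
known_targets = rng.integers(0, k, size=n)
# Confident, mostly correct logits on in-distribution test data.
clean_logits = rng.normal(size=(n, k)) + 4.0 * np.eye(k)[known_targets]
# Uninformative logits on samples from classes never seen in training.
unknown_logits = rng.normal(size=(n, k))
unknown_targets = np.full(n, REJECT)

print("known classes  :", unified_accuracy(clean_logits, known_targets))
print("unknown classes:", unified_accuracy(unknown_logits, unknown_targets))

Treating rejection as an extra label is one way to make a single number comparable across all test sets, which is the property the abstract argues for; consult the paper and the linked code for the metric actually used.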
Disciplines :
Computer science
Author, co-author :
SPRATLING, Michael ; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Behavioural and Cognitive Sciences (DBCS) > Cognitive Science and Assessment
External co-authors :
no
Language :
English
Title :
A comprehensive assessment benchmark for rigorously evaluating deep learning image classifiers.
Publication date :
18 July 2025
Journal title :
Neural Networks
ISSN :
0893-6080
eISSN :
1879-2782
Publisher :
Elsevier Ltd, United States
Volume :
192
Pages :
107801
Peer reviewed :
Peer Reviewed verified by ORBi
Available on ORBilu :
since 21 August 2025
