Document type: Article (Scientific journals)
Discipline(s): Engineering, computing & technology: Computer science
Empirical assessment of machine learning-based malware detectors for Android: Measuring the Gap between In-the-Lab and In-the-Wild Validation Scenarios
Language: English
Author(s):
Allix, Kevin [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Computer Science and Communications Research Unit (CSC)]
Bissyandé, Tegawendé François D'Assise [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
Jerome, Quentin [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
Klein, Jacques [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)]
State, Radu [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
Le Traon, Yves [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)]
Publication date: 24-Dec-2014
Journal: Empirical Software Engineering
Publisher: Springer US
Pages: 1-29
Peer reviewed: Yes (verified by ORBilu)
Audience: International
ISSN: 1382-3256
Keywords (en): Machine learning; Ten-Fold; Malware; Android
Abstract (en): To address the issue of malware detection through large sets of applications, researchers have recently started to investigate the capabilities of machine-learning techniques for proposing effective approaches. So far, several promising results have been recorded in the literature, with many approaches assessed under what we call "in-the-lab" validation scenarios. This paper revisits the purpose of malware detection to discuss whether such in-the-lab validation scenarios provide reliable indications of the performance of malware detectors in real-world settings, a.k.a. "in the wild".
To this end, we have devised several machine-learning classifiers that rely on a set of features built from applications' CFGs. We use a sizeable dataset of over 50,000 Android applications collected from sources where state-of-the-art approaches have selected their data. We show that, in the lab, our approach outperforms existing machine learning-based approaches. However, this high performance does not translate into high performance in the wild. The performance gap we observed, with F-measures dropping from over 0.9 in the lab to below 0.1 in the wild, raises one important question: how do state-of-the-art approaches perform in the wild?
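To make the contrast concrete, the sketch below sets the two validation protocols side by side. It is a minimal illustration, not the authors' pipeline: the feature matrix is a synthetic stand-in for the CFG-based feature vectors, the random-forest classifier is an arbitrary choice, and the 80/20 positional split merely mimics a split by application discovery date. The F-measure used in both settings is the harmonic mean of precision and recall, F = 2PR / (P + R), so a drop from above 0.9 to below 0.1 reflects a collapse in both precision and recall.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for CFG-based feature vectors and malware/goodware
# labels (1 = malware, 0 = goodware); applications are assumed to be
# ordered by discovery date.
rng = np.random.default_rng(0)
n_apps, n_features = 2000, 50
X = rng.normal(size=(n_apps, n_features))
y = rng.integers(0, 2, size=n_apps)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# "In the lab": 10-fold cross-validation over a single dataset, so
# training and test samples share the same collection and time period.
lab_f1 = cross_val_score(clf, X, y, cv=10, scoring="f1").mean()

# "In the wild": train on the oldest 80% of applications, test on the
# newest 20%, mimicking a detector that must classify applications that
# appeared after its training data was collected.
split = int(0.8 * n_apps)
clf.fit(X[:split], y[:split])
wild_f1 = f1_score(y[split:], clf.predict(X[split:]))

print(f"in-the-lab  F-measure: {lab_f1:.2f}")
print(f"in-the-wild F-measure: {wild_f1:.2f}")

On random synthetic data both printed scores are uninformative; the sketch only shows where the protocols differ. In-the-lab cross-validation lets every application be tested by a model trained on its contemporaries, whereas the in-the-wild split forces the model to classify applications newer than anything it was trained on, which is the situation a deployed detector actually faces.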
Funders: University of Luxembourg - High Performance Computing (ULHPC)
Permalink: http://hdl.handle.net/10993/20068
DOI: 10.1007/s10664-014-9352-6

File(s) associated to this reference

Fulltext file(s):

emse-in_the_lab-vs-in_the_wild.pdf (Author preprint, 1.14 MB, Open access)
