Results 1-20 of 100.

Peer Reviewed
Selecting Fault Revealing Mutants
Titcheu Chekam, Thierry UL; Papadakis, Mike UL; Bissyande, Tegawendé François D Assise UL et al

in Empirical Software Engineering (in press)

Towards Generalizable Machine Learning for Chest X-ray Diagnosis with Multi-task Learning
Ghamizi, Salah UL; Garcia Santa Cruz, Beatriz UL; Temple, Paul et al

E-print/Working paper (2022)

Clinicians use chest radiography (CXR) to diagnose common pathologies. Automated classification of these diseases can expedite the analysis workflow, scale to growing numbers of patients, and reduce healthcare costs. While research has produced classification models that perform well on a given dataset, the same models lack generalization to different datasets. This reduces confidence that these models can be reliably deployed across various clinical settings. We propose an approach based on multi-task learning to improve model generalization. We demonstrate that learning a (main) pathology together with an auxiliary pathology can significantly impact generalization performance (between -10% and +15% AUC-ROC). A careful choice of auxiliary pathology even yields performance competitive with state-of-the-art models that rely on fine-tuning or ensemble learning, while using between 6% and 34% of the training data that these models required. We further provide a method to determine the best auxiliary task to choose without access to the target dataset. Ultimately, our work takes a significant step towards the creation of CXR diagnosis models applicable in the real world, through the evidence that multi-task learning can drastically improve generalization.
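
The auxiliary-task setup described above can be pictured as a shared feature extractor with one classification head per pathology. The sketch below is a hypothetical illustration (the DenseNet-121 backbone, single-label binary heads, and fixed auxiliary weight are our assumptions, not the paper's architecture or training recipe):

```python
# Minimal sketch: a main and an auxiliary pathology head on a shared backbone.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskCXR(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.densenet121(weights=None)
        self.features = backbone.features              # shared representation
        self.main_head = nn.Linear(1024, 1)            # main pathology (binary)
        self.aux_head = nn.Linear(1024, 1)             # auxiliary pathology

    def forward(self, x):
        h = torch.relu(self.features(x))
        h = nn.functional.adaptive_avg_pool2d(h, 1).flatten(1)
        return self.main_head(h), self.aux_head(h)

def multitask_loss(main_logit, aux_logit, y_main, y_aux, aux_weight=0.5):
    """Joint loss; the auxiliary signal only shapes the shared features."""
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(main_logit, y_main) + aux_weight * bce(aux_logit, y_aux)
```

At inference time only the main head is used; the auxiliary head exists solely to shape the shared representation during training.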

Peer Reviewed
µBert: Mutation Testing using Pre-Trained Language Models
Degiovanni, Renzo Gaston UL; Papadakis, Mike UL

in Degiovanni, Renzo Gaston; Papadakis, Mike (Eds.) µBert: Mutation Testing using Pre-Trained Language Models (2022)

We introduce µBert, a mutation testing tool that uses a pre-trained language model (CodeBERT) to generate mutants. It does so by masking a token from the expression given as input and using CodeBERT to predict it; the mutants are generated by replacing the masked tokens with the predicted ones. We evaluate µBert on 40 real faults from Defects4J and show that it can detect 27 of them, while the baseline (PiTest) detects 26. We also show that µBert can be twice as cost-effective as PiTest when the same number of mutants is analysed. Additionally, we evaluate the impact of µBert's mutants when used by program-assertion inference techniques, and show that they can help in producing better specifications. Finally, we discuss the quality and naturalness of some interesting mutants produced by µBert during our experimental evaluation.
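
The masking-and-prediction loop described above maps naturally onto a fill-mask pipeline. The snippet below is a simplified illustration using the publicly available microsoft/codebert-base-mlm checkpoint; it is not the µBert tool itself, and the single-token replacement strategy is only the simplest possible variant:

```python
# Illustrative mutant generation: mask one token, let CodeBERT predict it.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

def mutants_for(code: str, token: str, top_k: int = 5) -> list[str]:
    """Mask the first occurrence of `token` and collect CodeBERT's guesses."""
    masked = code.replace(token, fill_mask.tokenizer.mask_token, 1)
    predictions = fill_mask(masked, top_k=top_k)
    # A prediction only yields a mutant if it differs from the original token.
    return [p["sequence"] for p in predictions if p["token_str"].strip() != token]

print(mutants_for("return a + b;", "+"))  # might yield "return a - b;", etc.
```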

Peer Reviewed
An Empirical Study on Data Distribution-Aware Test Selection for Deep Learning Enhancement
Hu, Qiang UL; Guo, Yuejun UL; Cordy, Maxime UL et al

in ACM Transactions on Software Engineering and Methodology (2022)

Similar to traditional software that is constantly under evolution, deep neural networks (DNNs) need to evolve upon the rapid growth of test data for continuous enhancement, e.g., adapting to distribution shift in a new environment before deployment. However, it is labor-intensive to manually label all the collected test data. Test selection solves this problem by strategically choosing a small set to label; by retraining with the selected set, DNNs can achieve competitive accuracy. Unfortunately, existing selection metrics involve three main limitations: 1) they use different retraining processes; 2) they ignore data distribution shifts; 3) they are insufficiently evaluated. To fill this gap, we first conduct a systematic empirical study to reveal the impact of the retraining process and data distribution on model enhancement. Based on our findings, we then propose a novel distribution-aware test (DAT) selection metric. Experimental results reveal that retraining using both the training and selected data outperforms using only the selected data, and that no single selection metric performs best under all data distributions. By contrast, DAT effectively alleviates the impact of distribution shifts and outperforms the compared metrics by up to 5 times and by up to a 30.09% accuracy improvement for model enhancement on simulated and in-the-wild distribution-shift scenarios, respectively.
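
To make "distribution-aware" concrete, the sketch below shows one simple way a selection metric can combine predictive uncertainty with a crude out-of-distribution filter. It is our own illustrative reconstruction, not the paper's DAT metric; the threshold and scoring choices are placeholders:

```python
# Illustrative distribution-aware selection: prioritize uncertain samples,
# deprioritize those whose low confidence suggests an out-of-distribution input.
import numpy as np

def select_for_labeling(probs: np.ndarray, budget: int, ood_threshold: float = 0.5):
    """probs: (n_samples, n_classes) softmax outputs on the unlabeled test data."""
    max_conf = probs.max(axis=1)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    entropy[max_conf < ood_threshold] = -np.inf   # crude OOD filter
    return np.argsort(-entropy)[:budget]          # most uncertain in-distribution
```

Following the study's first finding, retraining would then use the union of the original training data and the newly labeled selection rather than the selection alone.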

Peer Reviewed
Evasion Attack STeganography: Turning Vulnerability Of Machine Learning To Adversarial Attacks Into A Real-world Application
Ghamizi, Salah UL; Cordy, Maxime UL; Papadakis, Mike UL et al

in Proceedings of the International Conference on Computer Vision 2021 (2021)

Evasion attacks have commonly been seen as a weakness of Deep Neural Networks. In this paper, we flip the paradigm and envision this vulnerability as a useful application. We propose EAST, a new steganography and watermarking technique based on multi-label targeted evasion attacks. Our results confirm that our embedding is elusive: it passes unnoticed by humans, steganalysis methods, and machine-learning detectors. In addition, our embedding is resilient to soft and aggressive image tampering (87% recovery rate under JPEG compression). EAST outperforms existing deep-learning-based steganography approaches with images that are 70% denser and 73% more robust, and supports multiple datasets and architectures.
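
The core mechanism, hiding a message by steering a multi-label classifier's outputs, can be sketched as a targeted projected-gradient loop. Everything below (step size, perturbation budget, bit thresholding) is a hypothetical reconstruction, not the EAST implementation:

```python
# Conceptual sketch: encode message bits as target labels, embed them via a
# targeted evasion attack, and read them back from the model's predictions.
import torch

def embed(model, image, bits, steps=50, eps=8 / 255, lr=1 / 255):
    """Perturb `image` until the model's multi-label outputs encode `bits`."""
    target = torch.tensor(bits, dtype=torch.float32).unsqueeze(0)
    x = image.clone().requires_grad_(True)
    for _ in range(steps):
        loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x), target)
        loss.backward()
        with torch.no_grad():
            x -= lr * x.grad.sign()                        # step toward target labels
            x.copy_(image + (x - image).clamp(-eps, eps))  # keep perturbation small
            x.clamp_(0, 1)
        x.grad.zero_()
    return x.detach()

def extract(model, stego):
    """Recover the message by thresholding the sigmoid outputs."""
    return (torch.sigmoid(model(stego)) > 0.5).int().squeeze(0).tolist()
```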

Peer Reviewed
Requirements And Threat Models of Adversarial Attacks and Robustness of Chest X-ray classification
Ghamizi, Salah UL; Cordy, Maxime UL; Papadakis, Mike UL et al

E-print/Working paper (2021)

Vulnerability to adversarial attacks is a well-known weakness of Deep Neural Networks. While most studies focus on natural images with standardized benchmarks like ImageNet and CIFAR, little research has considered real-world applications, in particular in the medical domain. Our research shows that, contrary to previous claims, the robustness of chest x-ray classification is much harder to evaluate and leads to very different assessments depending on the dataset, the architecture, and the robustness metric. We argue that previous studies did not take into account the peculiarities of medical diagnosis, such as the co-occurrence of diseases, the disagreement of labellers (domain experts), the threat model of the attacks, and the risk implications of each successful attack. In this paper, we discuss the methodological foundations, review the pitfalls and best practices, and suggest new methodological considerations for evaluating the robustness of chest x-ray classification models. Our evaluation on 3 datasets, 7 models, and 18 diseases is the largest evaluation of the robustness of chest x-ray classification models to date. We believe our findings will provide reliable guidelines for realistic evaluation and improvement of the robustness of machine learning models for medical diagnosis.

Peer Reviewed
A Replication Study on the Usability of Code Vocabulary in Predicting Flaky Tests
Haben, Guillaume UL; Habchi, Sarra UL; Papadakis, Mike UL et al

in 18th International Conference on Mining Software Repositories (2021, May)

Industrial reports indicate that flaky tests are one of the primary concerns of software testing, mainly due to the false signals they provide. To deal with this issue, researchers have developed tools and techniques aiming at (automatically) identifying flaky tests, with encouraging results. However, to reach industrial adoption and practice, these techniques need to be replicated and evaluated extensively on multiple datasets, occasions, and settings. In view of this, we perform a replication study of a recently proposed method that predicts flaky tests based on their vocabulary. We replicate the original study along three different dimensions. First, we replicate the approach on the same subjects as in the original study but using a different evaluation methodology, i.e., we adopt a time-sensitive selection of training and test sets to better reflect the envisioned use case. Second, we consolidate the findings of the initial study by building a new dataset of 837 flaky tests from 9 projects in a different programming language (Python, where the original study used Java), which supports the generalisability of the results. Third, we propose an extension to the original approach by experimenting with different features extracted from the Code Under Test. Our results demonstrate that a more robust validation has a consistent negative impact on the reported results of the original study, but, fortunately, it does not invalidate the key conclusions of the study. We also find the reassuring result that vocabulary-based models can also be used to predict test flakiness in Python, and that the information lying in the Code Under Test has a limited impact on the performance of the vocabulary-based models.
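
For context, a vocabulary-based predictor of the kind replicated here can be assembled from token counts and an off-the-shelf classifier; the sketch below also illustrates the time-sensitive split adopted in this replication. The data schema ('source', 'is_flaky', 'commit_date') and the model choice are our assumptions:

```python
# Minimal vocabulary-based flakiness predictor with a time-sensitive split.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

def train_flakiness_model(tests):
    """tests: list of dicts with 'source', 'is_flaky', 'commit_date' (hypothetical)."""
    tests = sorted(tests, key=lambda t: t["commit_date"])   # train on the past,
    split = int(0.8 * len(tests))                           # evaluate on the future
    train, test = tests[:split], tests[split:]

    vectorizer = CountVectorizer(token_pattern=r"[A-Za-z_]\w*")  # identifier tokens
    X_train = vectorizer.fit_transform(t["source"] for t in train)
    X_test = vectorizer.transform(t["source"] for t in test)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, [t["is_flaky"] for t in train])
    return model, model.score(X_test, [t["is_flaky"] for t in test])
```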

Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
Gubri, Martin UL; Cordy, Maxime UL; Papadakis, Mike UL et al

E-print/Working paper (2021)

An established way to improve the transferability of black-box evasion attacks is to craft the adversarial examples on a surrogate ensemble model to increase diversity. We argue that transferability is fundamentally related to epistemic uncertainty. Based on a state-of-the-art Bayesian Deep Learning technique, we propose a new method to efficiently build a surrogate by sampling approximately from the posterior distribution of neural network weights, which represents the belief about the value of each parameter. Our extensive experiments on ImageNet and CIFAR-10 show that our approach improves the transfer rates of four state-of-the-art attacks significantly (by up to 62.1 percentage points), in both intra-architecture and inter-architecture cases. On ImageNet, our approach can reach a 94% transfer rate while reducing training computations from 11.6 to 2.4 exaflops, compared to an ensemble of independently trained DNNs. Our vanilla surrogate achieves higher transferability than three test-time techniques designed for this purpose in 87.5% of cases. Our work demonstrates that the way a surrogate is trained has been overlooked, although it is an important element of transfer-based attacks. We are, therefore, the first to review the effectiveness of several training methods in increasing transferability. We provide new directions to better understand the transferability phenomenon and offer a simple but strong baseline for future work.
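
The key move, attacking an approximate posterior over weights rather than a single fixed surrogate, can be sketched as below. How the weight samples are obtained (the paper's Bayesian Deep Learning technique) is abstracted away; the single FGSM step and the `weight_samples` list of state_dicts are our simplifying assumptions:

```python
# Craft one adversarial example against several posterior weight samples.
import torch
import torch.nn.functional as F

def attack_posterior_surrogate(model, weight_samples, x, y, eps=8 / 255):
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)                  # only gradients w.r.t. x are needed
    x = x.clone().requires_grad_(True)
    for state_dict in weight_samples:
        model.load_state_dict(state_dict)        # swap in one posterior sample
        F.cross_entropy(model(x), y).backward()  # gradients accumulate in x.grad
    # The sign of the accumulated gradient equals the sign of the average gradient.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

A transfer-based attack would then submit the resulting example to the unseen target model.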

Peer Reviewed
Killing Stubborn Mutants with Symbolic Execution
Titcheu Chekam, Thierry UL; Papadakis, Mike UL; Cordy, Maxime UL et al

in ACM Transactions on Software Engineering and Methodology (2021), 30(2), 19:1--19:23

Adversarial Robustness in Multi-Task Learning: Promises and Illusions
Ghamizi, Salah UL; Cordy, Maxime UL; Papadakis, Mike UL et al

E-print/Working paper (2021)

Vulnerability to adversarial attacks is a well-known weakness of Deep Neural Networks. While most studies focus on single-task neural networks with computer vision datasets, very little research has considered complex multi-task models that are common in real applications. In this paper, we evaluate the design choices that impact the robustness of multi-task deep learning networks. We provide evidence that blindly adding auxiliary tasks, or weighing the tasks, provides a false sense of robustness. We thereby tone down the claims made by previous research and study the different factors that may affect robustness. In particular, we show that the choice of tasks to incorporate in the loss function is an important factor that can be leveraged to yield more robust models.

Peer Reviewed
Statistical model checking for variability-intensive systems: applications to bug detection and minimization
Cordy, Maxime UL; Lazreg, Sami UL; Papadakis, Mike UL et al

in Formal Aspects of Computing (2021), 33(6), 1147--1172

Peer Reviewed
MuDelta: Delta-Oriented Mutation Testing at Commit Time
Ma, Wei UL; Titcheu Chekam, Thierry; Papadakis, Mike UL et al

in International Conference on Software Engineering (ICSE) (2021)

Peer Reviewed
CONFUZZION: A Java Virtual Machine Fuzzer for Type Confusion Vulnerabilities
Bonnaventure, William; Khanfir, Ahmed UL; Bartel, Alexandre et al

in IEEE International Conference on Software Quality, Reliability, and Security (QRS), 2021 (2021)

Peer Reviewed
Test Selection for Deep Learning Systems
Ma, Wei UL; Papadakis, Mike UL; Tsakmalis, Anestis et al

in ACM Transactions on Software Engineering and Methodology (2021), 30(2), 13:1--13:22

Cerebro: Static Subsuming Mutant Selection
Garg, Aayush UL; Ojdanic, Milos UL; Degiovanni, Renzo Gaston UL et al

E-print/Working paper (2021)

Peer Reviewed
Towards Exploring the Limitations of Active Learning: An Empirical Study
Hu, Qiang UL; Guo, Yuejun UL; Cordy, Maxime UL et al

in The 36th IEEE/ACM International Conference on Automated Software Engineering (2021)

Deep neural networks (DNNs) are being increasingly deployed as integral parts of software systems. However, due to the complex interconnections among hidden layers and massive hyperparameters, DNNs must be trained with large numbers of labeled inputs, which calls for extensive human effort to collect and label the data. To alleviate this growing demand, a surge of recent studies has proposed different metrics to select a small yet informative dataset for model training. These works have demonstrated that DNN models can achieve competitive performance using a carefully selected small set of data. However, the literature lacks a proper investigation of the limitations of data selection metrics, which is crucial for applying them in practice. In this paper, we fill this gap and conduct an extensive empirical study to explore the limits of selection metrics. Our study involves 15 selection metrics evaluated over 5 datasets (2 image classification tasks and 3 text classification tasks), 10 DNN architectures, and 20 labeling budgets (ratios of training data being labeled). Our findings reveal that, while selection metrics are usually effective in producing accurate models, they may induce a loss of model robustness (against adversarial examples) and resilience to compression. Overall, we demonstrate the existence of a trade-off between labeling effort and different model qualities. This paves the way for future research in devising selection metrics that consider multiple quality criteria.
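
As one concrete example from the family of metrics such studies evaluate, least-margin sampling under a labeling budget looks as follows; the metric choice and budget handling here are generic textbook versions, not this paper's exact setup:

```python
# Least-margin selection: label the samples the model is least decided about.
import numpy as np

def margin_selection(probs: np.ndarray, budget_ratio: float) -> np.ndarray:
    """probs: (n_samples, n_classes) softmax outputs; returns indices to label."""
    budget = int(budget_ratio * len(probs))
    top2 = np.sort(probs, axis=1)[:, -2:]      # the two largest class probabilities
    margin = top2[:, 1] - top2[:, 0]           # small margin = ambiguous prediction
    return np.argsort(margin)[:budget]

# Example: with a 10% labeling budget, pick the 10% most ambiguous predictions.
# selected = margin_selection(model_softmax_outputs, budget_ratio=0.10)
```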

Peer Reviewed
Data-driven simulation and optimization for covid-19 exit strategies
Ghamizi, Salah UL; Rwemalika, Renaud UL; Cordy, Maxime UL et al

in Ghamizi, Salah; Rwemalika, Renaud; Cordy, Maxime (Eds.) et al Data-driven simulation and optimization for covid-19 exit strategies (2020, August)

The rapid spread of the coronavirus SARS-CoV-2 is a major challenge that led almost all governments worldwide to take drastic measures in response to the tragedy. Chief among those measures is the massive lockdown of entire countries and cities, which, beyond its global economic impact, has created deep social and psychological tensions within populations. While the adopted mitigation measures (including the lockdown) have generally proven useful, policymakers now face a critical question: how and when to lift them? A carefully planned exit strategy is indeed necessary to recover from the pandemic without risking a new outbreak. Classically, exit strategies rely on mathematical modeling to predict the effect of public health interventions. Such models are unfortunately known to be sensitive to some key parameters, which are usually set based on rules of thumb. In this paper, we propose to augment epidemiological forecasting with actual data-driven models that learn to fine-tune predictions for different contexts (e.g., per country). We have therefore built a pandemic simulation and forecasting toolkit that combines a deep learning estimation of the epidemiological parameters of the disease, in order to predict cases and deaths, with a genetic algorithm component searching for optimal trade-offs/policies between constraints and objectives set by decision-makers. Replaying pandemic evolution in various countries, we experimentally show that our approach yields predictions with much lower error rates than pure epidemiological models in 75% of the cases, and achieves a 95% R² score when the learning is transferred and tested on unseen countries. When used for forecasting, this approach provides actionable insights into the impact of individual measures and strategies.
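
For intuition, the epidemiological core that such simulators build on can be as small as a discrete-time SEIR update; the compartment model below and its parameter values are textbook placeholders, not the toolkit's actual (deep-learning-calibrated) model:

```python
# Toy discrete-time SEIR step; mitigation measures act by scaling beta, so an
# exit strategy is a schedule of beta values a genetic algorithm can search.
def seir_step(S, E, I, R, beta, sigma, gamma, N):
    """Advance the Susceptible/Exposed/Infectious/Removed compartments by one day."""
    new_exposed = beta * S * I / N       # susceptible -> exposed
    new_infectious = sigma * E           # exposed -> infectious
    new_removed = gamma * I              # infectious -> recovered/removed
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_removed,
            R + new_removed)

# 120-day rollout with placeholder parameters (5.2-day incubation, 10-day recovery).
state = (999_000, 500, 500, 0)
for day in range(120):
    state = seir_step(*state, beta=0.3, sigma=1 / 5.2, gamma=1 / 10, N=1_000_000)
```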

Peer Reviewed
Statistical Model Checking for Variability-Intensive Systems
Cordy, Maxime UL; Papadakis, Mike UL; Legay, Axel

in Fundamental Approaches to Software Engineering, Dublin, 22-25 April 2020 (2020, April)

Peer Reviewed
FeatureNET: Diversity-driven Generation of Deep Learning Models
Ghamizi, Salah UL; Cordy, Maxime UL; Papadakis, Mike UL et al

in International Conference on Software Engineering (ICSE) (2020)
