References of "Degiovanni, Renzo Gaston 50035518"
Full Text
Peer Reviewed
µBert: Mutation Testing using Pre-Trained Language Models
Degiovanni, Renzo Gaston UL; Papadakis, Mike UL

Conference paper (2022)

We introduce µBert, a mutation testing tool that uses a pre-trained language model (CodeBERT) to generate mutants. This is done by masking a token from the expression given as input and using CodeBERT to predict it. Thus, the mutants are generated by replacing the masked tokens with the predicted ones. We evaluate µBert on 40 real faults from Defects4J and show that it can detect 27 out of the 40 faults, while the baseline (PiTest) detects 26 of them. We also show that µBert can be 2 times more cost-effective than PiTest when the same number of mutants is analysed. Additionally, we evaluate the impact of µBert's mutants when used by program assertion inference techniques, and show that they can help in producing better specifications. Finally, we discuss the quality and naturalness of some interesting mutants produced by µBert during our experimental evaluation.
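
The mask-predict-replace loop described in this abstract can be sketched as follows. This is an illustrative sketch, not the tool's code: the hypothetical `predict_mask` stub stands in for CodeBERT's fill-mask head (which would normally rank candidate tokens by likelihood), so the flow is runnable without the model.

```python
# Sketch of µBert-style mutant generation (illustrative stub, not µBert itself).

def predict_mask(masked_expr):
    # Hypothetical stand-in for the pre-trained model's top-k predictions
    # for the <mask> position in masked_expr.
    return ["-", "*", "+"]

def generate_mutants(tokens):
    """Mask each token in turn and substitute the model's predictions."""
    mutants = []
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + ["<mask>"] + tokens[i + 1:]
        for pred in predict_mask(" ".join(masked)):
            if pred != tok:  # predicting the original token yields no mutant
                mutants.append(" ".join(tokens[:i] + [pred] + tokens[i + 1:]))
    return mutants

mutants = generate_mutants(["a", "+", "b"])  # includes "a - b", "a * b", ...
```

With a real fill-mask model, replacements that merely restore the original token are discarded for the same reason as in the stub: they would reproduce the input program rather than mutate it.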

Cerebro: Static Subsuming Mutant Selection
Garg, Aayush UL; Ojdanic, Milos UL; Degiovanni, Renzo Gaston UL et al

E-print/Working paper (2021)

Full Text
Peer Reviewed
An evolutionary approach to translating operational specifications into declarative specifications
Molina, Facundo; Cornejo, César; Degiovanni, Renzo Gaston UL et al

in Science of Computer Programming (2019), 181

Various tools for program analysis, including run-time assertion checkers and static analyzers such as verification and test generation tools, require formal specifications of the programs being analyzed. Moreover, many of these tools and techniques require such specifications to be written in a particular style, or follow certain patterns, in order to obtain an acceptable performance from the corresponding analyses. Thus, having a formal specification sometimes is not enough for using a particular technique, since such specification may not be provided in the right formalism. In this paper, we deal with this problem in the increasingly common case of having an operational specification, while for analysis reasons requiring a declarative specification. We propose an evolutionary approach to translate an operational specification written in a sequential programming language, into a declarative specification, in relational logic. We perform experiments on a benchmark of data structure implementations, for which operational invariants are available, and show that our evolutionary computation based approach to translating specifications achieves very good precision in this context, and produces declarative specifications that are more amenable to analyses that demand specifications in this style. This is assessed in two contexts: bounded verification of data structure invariant preservation, and instance enumeration using symbolic execution aided by tight bounds.
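
The core of the search described above can be illustrated in a heavily simplified form: candidate declarative predicates are scored by their agreement with the operational (imperative) specification on sample structures. Everything here is a toy stand-in: the real approach evolves relational-logic formulas with genetic operators, whereas this sketch only selects a comparison operator over adjacent elements by fitness.

```python
# Toy illustration of fitness-guided selection of a declarative predicate
# that agrees with an operational spec (here: a sorted-list invariant).

def operational_invariant(lst):
    # operational spec: imperative check that the list is sorted
    for i in range(len(lst) - 1):
        if lst[i] > lst[i + 1]:
            return False
    return True

# toy "declarative" candidate space: one comparison over adjacent pairs
OPS = {
    "<":  lambda a, b: a < b,
    "<=": lambda a, b: a <= b,
    ">":  lambda a, b: a > b,
    ">=": lambda a, b: a >= b,
}

def declarative(op, lst):
    # candidate declarative form: "all adjacent pairs satisfy op"
    return all(OPS[op](lst[i], lst[i + 1]) for i in range(len(lst) - 1))

# fitness: agreement with the operational spec on sample structures
samples = [[1, 2, 3], [1, 2, 2, 3], [3, 1, 2], [2, 2, 1], [5], []]

def fitness(op):
    return sum(declarative(op, s) == operational_invariant(s) for s in samples)

best = max(OPS, key=fitness)  # "<=" agrees with the spec on every sample
```

Note the role of the duplicate-containing sample `[1, 2, 2, 3]`: it is what separates `<=` from the strict `<`, mirroring how the fitness function drives the search toward a predicate that matches the operational behaviour exactly.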

Full Text
Peer Reviewed
Training binary classifiers as data structure invariants
Molina, Facundo; Degiovanni, Renzo Gaston UL; Ponzio, Pablo et al

in Proceedings of the 41st International Conference on Software Engineering ICSE 2019, Montreal, QC, Canada, May 25-31, 2019 (2019)

We present a technique that enables us to distinguish valid from invalid data structure objects. The technique is based on building an artificial neural network, more precisely a binary classifier, and training it to identify valid and invalid instances of a data structure. The obtained classifier can then be used in place of the data structure’s invariant, in order to attempt to identify (in)correct behaviors in programs manipulating the structure. In order to produce the valid objects to train the network, an assumed-correct set of object building routines is randomly executed. Invalid instances are produced by generating values for object fields that “break” the collected valid values, i.e., that assign values to object fields that have not been observed as feasible in the assumed-correct program executions that led to the collected valid instances. We experimentally assess this approach, over a benchmark of data structures. We show that this learning technique produces classifiers that achieve significantly better accuracy in classifying valid/invalid objects compared to a technique for dynamic invariant detection, and leads to improved bug finding. [less ▲]
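
The training-set construction described above can be sketched as follows. The builder routine, the fixed-length sorted-list structure, and the final observed-values check are all illustrative stand-ins; in particular, the paper trains a neural binary classifier on this data, not the simple membership rule used here.

```python
import random

# Sketch of the valid/invalid training-data construction. A simple
# observed-values check stands in for the paper's neural classifier.

def build_sorted_list(rng):
    # assumed-correct builder routine: sorted integer lists of length 3
    return tuple(sorted(rng.sample(range(10), 3)))

rng = random.Random(0)
valid = [build_sorted_list(rng) for _ in range(200)]

# record which values were observed as feasible at each field position
feasible = [set(inst[i] for inst in valid) for i in range(3)]

def make_invalid(inst, rng):
    # "break" one field with a value never observed at that position
    broken = list(inst)
    broken[rng.randrange(3)] = 99  # outside every observed value
    return tuple(broken)

invalid = [make_invalid(inst, rng) for inst in valid]

def classify(inst):
    # stand-in for the trained classifier: valid iff every field value
    # was observed as feasible at its position
    return all(inst[i] in feasible[i] for i in range(3))
```

This per-field membership rule deliberately mirrors the paper's notion of "feasible" field values, but a learned classifier generalizes beyond it, which is why the abstract compares accuracy against dynamic invariant detection rather than a lookup table.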
