References of "Di Nardo, Daniel 50001692"
Model-Based Test Automation Strategies for Data Processing Systems
Di Nardo, Daniel UL

Doctoral thesis (2016)

Data processing software is an essential component of systems that aggregate and analyse real-world data, thereby enabling automated interaction between such systems and the real world. In data processing systems, inputs are often large and complex files that have a well-defined structure and dependencies between several of their fields. Testing data processing systems is complex. Software engineers in charge of testing these systems have to handcraft complex data files of nontrivial size, while ensuring compliance with multiple constraints to prevent the generation of trivially invalid inputs. In addition, assessing test results often means analysing complex output and log data. Complex inputs pose a challenge for the adoption of automated test data generation techniques: the adopted techniques should be able to generate a nontrivial number of data items with complex nested structures while preserving the constraints between data fields. An additional challenge concerns the automated validation of execution results.

To address these challenges, this dissertation presents a set of approaches based on data modelling and data mutation to automate testing. We propose a modelling methodology that captures the input and output data, and the dependencies between them, using Unified Modeling Language (UML) class diagrams and constraints expressed in the Object Constraint Language (OCL). The UML class diagram captures the structure of the data, while the OCL constraints formally describe the interactions and associations between the data fields within the different subcomponents. The work of this dissertation was motivated by the testing needs of an industrial satellite Data Acquisition (DAQ) system; this system is the subject of the empirical studies used within this dissertation to demonstrate the application and suitability of the approaches that we propose.

We present four model-driven approaches that address the challenges of automatically testing data processing systems; all of them are supported by the data models generated according to our modelling methodology. An empirical evaluation shows that the modelling methodology itself scales: the size of the model and constraints was manageable for the subject system. The first approach is a technique for the automated validation of test inputs and oracles; an empirical evaluation shows that the approach is scalable, as the input and oracle validation process executed within reasonable times on real input files. The second approach is a model-based technique that automatically generates faulty test inputs for robustness testing by relying upon generic mutation operators that alter data collected in the field; an empirical evaluation shows that our automated approach achieves slightly better instruction coverage than the manual testing taking place in practice. The third approach is an evolutionary algorithm that automates the robustness testing of data processing systems through optimised test suites; the empirical results obtained by applying our search-based testing approach show that it outperforms approaches based on fault coverage and random generation: higher coverage is achieved with smaller test suites. Finally, the fourth approach is an automated, model-based approach that reuses field data to generate test inputs that fit new data requirements for the purpose of testing data processing systems; the empirical evaluation shows that the input generation algorithm, based on model slicing and constraint solving, scales in the presence of complex data structures.
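To make the data-modelling idea concrete, a minimal sketch follows. It is not the dissertation's tooling, and all class and field names are hypothetical; it only illustrates how a structured input record and one inter-field dependency could be expressed, in the spirit of a UML class diagram with an OCL invariant, here written in Python for brevity.

```python
# Hypothetical sketch: a structured data file made of packets, with one
# invariant linking a header field to the payload (analogue of an OCL
# constraint on a UML class diagram). Names are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PacketHeader:
    payload_length: int   # declared length of the payload, in bytes
    packet_type: int      # e.g. 0 = housekeeping, 1 = science data

@dataclass
class Packet:
    header: PacketHeader
    payload: bytes

    def satisfies_invariants(self) -> bool:
        """Analogue of an OCL invariant: the declared length must match
        the actual payload size."""
        return self.header.payload_length == len(self.payload)

@dataclass
class DataFile:
    packets: List[Packet] = field(default_factory=list)

    def valid(self) -> bool:
        # A valid test input must satisfy every per-packet invariant;
        # otherwise it is a trivially invalid input.
        return all(p.satisfies_invariants() for p in self.packets)
```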

Peer Reviewed
Evolutionary Robustness Testing of Data Processing Systems using Models and Data Mutation
Di Nardo, Daniel UL; Pastore, Fabrizio UL; Arcuri, Andrea UL et al

in Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (2015, November)

System-level testing of industrial data processing software poses several challenges. Input data can be very large, even on the order of gigabytes, with complex constraints that define when an input is valid. Generating the right input data to stress the system for robustness properties (e.g. to test how faulty data is handled) is therefore complex, tedious, and error-prone when done manually. Unfortunately, this is the current practice in industry. In previous work, we defined a methodology to model the structure and the constraints of input data by using UML class diagrams and OCL constraints. Tests were automatically derived to cover predefined fault types in a fault model. In this paper, to obtain more effective system-level test cases, we developed a novel search-based test generation tool. Experiments on a real-world, large industrial data processing system show that our automated approach not only achieves better code coverage, but also does so with significantly smaller test suites.
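The search-based idea can be illustrated with a small, hedged sketch, not the authors' tool: a toy evolutionary loop that trades off code coverage against test-suite size. The functions `run_and_measure_coverage` and `mutate_input` are hypothetical placeholders for executing the system under test and for the data-mutation step.

```python
# Illustrative sketch only: evolve test suites (lists of input files)
# towards higher coverage with fewer tests.
import random

def fitness(suite, run_and_measure_coverage):
    covered = set()
    for test_input in suite:
        covered |= run_and_measure_coverage(test_input)
    # Reward covered instructions, lightly penalise suite size.
    return len(covered) - 0.1 * len(suite)

def evolve(seed_inputs, run_and_measure_coverage, mutate_input,
           generations=100, population_size=20):
    # Start from single-test suites drawn from field data.
    population = [[random.choice(seed_inputs)] for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda s: fitness(s, run_and_measure_coverage),
                        reverse=True)
        parents = scored[: population_size // 2]
        children = []
        for parent in parents:
            child = [mutate_input(t) for t in parent]
            if random.random() < 0.3:
                # Occasionally grow the suite with a fresh mutated seed.
                child.append(mutate_input(random.choice(seed_inputs)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda s: fitness(s, run_and_measure_coverage))
```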

Peer Reviewed
Coverage-based regression test case selection, minimization and prioritization: a case study on an industrial system
Di Nardo, Daniel UL; Alshahwan, Nadia; Briand, Lionel UL et al

in Software Testing, Verification and Reliability (2015), 25(4), 371-396

Peer Reviewed
Generating Complex and Faulty Test Data Through Model-Based Mutation Analysis
Di Nardo, Daniel UL; Pastore, Fabrizio UL; Briand, Lionel UL

in 2015 IEEE Eighth International Conference on Software Testing, Verification and Validation (ICST) (2015, April)

Testing the correct behaviour of data processing systems in the presence of faulty data is extremely expensive. The data structures processed by these systems are often complex, with many data fields and multiple constraints among them. Software engineers in charge of testing these systems have to handcraft complex data files or databases, while ensuring compliance with the multiple constraints to prevent the generation of trivially invalid inputs. In addition, assessing test results often means analysing complex output and log data. Though many techniques have been proposed to automatically test systems based on models, little exists in the literature to support the testing of systems whose complexity lies in the data consumed as input or produced as output, with complex constraints between them. In particular, such systems often need to be tested in the presence of faults in the input data, in order to assess the robustness and behaviour of the system in response to such faults. This paper presents an automated test technique that relies upon six generic mutation operators to automatically generate faulty data. The technique receives two inputs: field data and a data model, i.e. a UML class diagram annotated with stereotypes and OCL constraints. The annotated class diagram is used to tailor the behaviour of the generic mutation operators to the fault model that is assumed for the system under test and the environment in which it is deployed. Empirical results obtained with a large data acquisition system in the satellite domain show that our approach can successfully automate the generation of test suites that achieve slightly better instruction coverage than manual testing based on domain expertise.
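As a hedged illustration of what a generic, model-guided mutation operator might look like (not the paper's six operators), the sketch below violates a single modelled dependency in an otherwise valid input. It reuses the hypothetical `Packet`/`DataFile` classes from the earlier sketch, and the operator name is invented.

```python
# Hypothetical mutation operator: take a valid, field-collected data file and
# make exactly one packet's declared payload length disagree with its actual
# payload, leaving every other constraint intact.
import copy
import random

def corrupt_declared_length(data_file):
    mutant = copy.deepcopy(data_file)
    victim = random.choice(mutant.packets)
    # Force the declared length to differ from the real payload size.
    victim.header.payload_length = len(victim.payload) + random.choice([1, 8, 64])
    return mutant
```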

Peer Reviewed
Model Based Test Validation and Oracles for Data Acquisition Systems
Di Nardo, Daniel UL; Alshahwan, Nadia UL; Briand, Lionel UL et al

in IEEE/ACM International Conference on Automated Software Engineering (2013, November)

Peer Reviewed
Coverage-Based Test Case Prioritisation: An Industrial Case Study
Di Nardo, Daniel UL; Alshahwan, Nadia UL; Briand, Lionel UL et al

in IEEE International Conference on Software Testing, Verification and Validation (ICST) (2013, March)
