References of "Liu, Kui"
Reliable Fix Patterns Inferred from Static Checkers for Automated Program Repair
Liu, Kui; Zhang, Jingtang; Li, Li et al

in ACM Transactions on Software Engineering and Methodology (2023)

Crex: Predicting patch correctness in automated repair of C programs through transfer learning of execution semantics
Yan, Dapeng; Liu, Kui; Niu, Yuqing et al

in Information and Software Technology (2022), 152

PEELER: Learning to Effectively Predict Flakiness without Running Tests
Qin, Yihao; Wang, Shangwen; Liu, Kui et al

in Proceedings of the 38th IEEE International Conference on Software Maintenance and Evolution (2022, October)

Regression testing is a widely adopted approach to expose change-induced bugs and to verify the correctness and robustness of code in modern software development. Unfortunately, flaky tests significantly increase the cost of regression testing and ultimately reduce developer productivity (i.e., their ability to find and fix real problems). State-of-the-art approaches leverage dynamic test information, obtained through expensive re-execution of test cases, to effectively identify flaky tests. To account for scalability constraints, some recent approaches build on static test case features instead, but fall short on effectiveness. In this paper, we introduce PEELER, a fully static approach for predicting flaky tests that represents test cases by their data dependency relations. The predictor is trained as a neural-network-based model that simultaneously achieves scalability (it requires no test execution), effectiveness (it exploits relevant test dependency features), and practicality (it can be applied in the wild to find new flaky tests). Experimental validation on 17,532 test cases from 21 Java projects shows that PEELER outperforms the state-of-the-art FlakeFlagger by around 20 percentage points: it catches 22% more flaky tests while yielding 51% fewer false positives. Finally, in a live study with in-the-wild projects, we reported 21 flakiness cases to developers, 12 of which have already been confirmed as indeed flaky.

DigBug—Pre/post-processing operator selection for accurate bug localization
Kim, Kisub; Ghatpande, Sankalp; Liu, Kui et al

in Journal of Systems and Software (2022), 189

Bug localization is a recurrent maintenance task in software development. It aims to identify the relevant code locations (e.g., code files) that must be inspected to fix a bug. When bugs are reported by users, the localization process often becomes overwhelming, as it is mostly a manual task relying on the incomplete and informal information (written in natural language) available in bug reports. The research community has therefore invested in automated approaches, notably using Information Retrieval techniques. Unfortunately, the performance reported in the literature is still too limited for practical usage. Our key observation, after empirically investigating a large dataset of bug reports as well as the workflows and results of state-of-the-art approaches, is that most approaches attempt localization for every bug report without considering the differing characteristics of those reports. We propose DigBug, a straightforward approach to specialized bug localization. It selects pre/post-processing operators based on the attributes of bug reports and parameterizes the bug localization model accordingly. Our experiments confirm that, by departing from "one-size-fits-all" approaches, DigBug outperforms state-of-the-art techniques by 6 and 14 percentage points on average in terms of MAP and MRR, respectively.

Predicting Patch Correctness Based on the Similarity of Failing Test Cases
Tian, Haoye; Li, Yinghua; Pian, Weiguo et al

in ACM Transactions on Software Engineering and Methodology (2022)

Is this Change the Answer to that Problem? Correlating Descriptions of Bug and Code Changes for Evaluating Patch Correctness
Tian, Haoye; Tang, Xunzhu; Habib, Andrew et al

(2022)

Revisiting Test Cases to Boost Generate-and-Validate Program Repair
Zhang, Jingtang; Liu, Kui; Kim, Dongsun et al

in IEEE International Conference on Software Maintenance and Evolution (ICSME) (2021, September)

SmartGift: Learning to Generate Practical Inputs for Testing Smart Contracts
Zhou, Teng; Liu, Kui; Li, Li et al

in IEEE International Conference on Software Maintenance and Evolution (ICSME) (2021, September)

Where were the repair ingredients for Defects4j bugs?
Yang, Deheng; Liu, Kui; Kim, Dongsun et al

in Empirical Software Engineering (2021), 26(6), 1–33
