Article (Conference proceedings)
Rethinking Cognitive Complexity for Unit Tests: Toward a Readability-Aware Metric Grounded in Developer Perception
OUEDRAOGO, Wendkûuni Arzouma Marc Christian; LI, Yinghua; DANG, Xueqi et al.
2025. In 2025 IEEE International Conference on Software Maintenance and Evolution, pp. 797-802
Peer reviewed · Dataset
 

Files


Full Text: Rethinking_Cognitive_Complexity_for_Unit_Tests__Toward_a_Readability_Aware_Metric_Grounded_in_Developer_Perception (5).pdf (author postprint, 170.93 kB)

Details



Keywords :
Measurement; Software maintenance; Codes; Large language models; Semantics; Software quality; Complexity theory; Software reliability; Test pattern generators; Software engineering
Abstract :
[en] Automatically generated unit tests, whether from search-based tools like EvoSuite or from LLMs, vary significantly in structure and readability. Yet most evaluations rely on metrics like Cyclomatic Complexity and Cognitive Complexity, designed for functional code rather than test code. Recent studies have shown that SonarSource's Cognitive Complexity metric assigns near-zero scores to LLM-generated tests, yet its behavior on EvoSuite-generated tests and its applicability to test-specific code structures remain unexplored. We introduce CCTR, a Test-Aware Cognitive Complexity metric tailored for unit tests. CCTR integrates structural and semantic features such as assertion density, annotation roles, and test composition patterns, dimensions ignored by traditional complexity models but critical for understanding test code. We evaluate 15,750 test suites generated by EvoSuite, GPT-4o, and Mistral Large-1024 across 350 classes from Defects4J and SF110. Results show CCTR effectively discriminates between structured and fragmented test suites, producing interpretable scores that better reflect developer-perceived effort. By bridging structural analysis and test readability, CCTR provides a foundation for more reliable evaluation and improvement of generated tests. We publicly release all data, prompts, and evaluation scripts to support replication.
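
To make the metric's intent concrete, the following is a minimal, hypothetical sketch (in Python) of a test-aware complexity score in the spirit of CCTR. The feature set (assertion density, annotation roles, control-flow branching) follows the abstract, but the regular expressions, the weights, and the name toy_test_complexity are illustrative assumptions, not the published CCTR definition.

    # Hypothetical sketch of a test-aware complexity score; the weights and
    # feature extraction below are illustrative assumptions, not the CCTR metric.
    import re

    ASSERTION_RE = re.compile(r"\bassert\w*\s*\(")            # assertEquals(, assertTrue(, ...
    ANNOTATION_RE = re.compile(r"@\w+")                       # @Test, @Before, ...
    BRANCH_RE = re.compile(r"\b(?:if|for|while|switch|catch)\b")

    def toy_test_complexity(java_test_source: str) -> float:
        """Combine structural and semantic test features into one score."""
        statements = max(java_test_source.count(";"), 1)      # crude statement count
        assertions = len(ASSERTION_RE.findall(java_test_source))
        annotations = len(ANNOTATION_RE.findall(java_test_source))
        branches = len(BRANCH_RE.findall(java_test_source))
        assertion_density = assertions / statements
        # Illustrative weighting: branching adds reading effort, annotations add
        # setup context, and assertion density reflects how much checking the
        # reader must follow.
        return branches + 0.5 * annotations + 2.0 * assertion_density

    if __name__ == "__main__":
        snippet = """
        @Test
        public void testPush() {
            Stack<Integer> s = new Stack<>();
            s.push(42);
            assertEquals(1, s.size());
            assertFalse(s.isEmpty());
        }
        """
        print(f"toy score: {toy_test_complexity(snippet):.2f}")  # -> 1.50

Applied to the small JUnit-style snippet above, the script prints a single score (1.50); the metric described in the paper instead operates on whole generated test suites, with features and weights grounded in developer perception.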
Disciplines :
Computer science
Author, co-author :
OUEDRAOGO, Wendkûuni Arzouma Marc Christian  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
LI, Yinghua  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
DANG, Xueqi  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
Zhou, Xin;  Singapore Management University
KOYUNCU, Anil  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX (Team Tegawendé François D'Assise BISSYANDE) ; Bilkent University
KLEIN, Jacques  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
Lo, David;  Singapore Management University
BISSYANDE, Tegawendé  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
External co-authors :
yes
Language :
English
Title :
Rethinking Cognitive Complexity for Unit Tests: Toward a Readability-Aware Metric Grounded in Developer Perception
Publication date :
07 September 2025
Proceedings title :
2025 IEEE International Conference on Software Maintenance and Evolution
eISSN :
2576-3148
Publisher :
IEEE
Conference location :
Auckland, New Zealand
Pages :
797-802
Peer reviewed :
Peer reviewed
Available on ORBilu :
since 11 November 2025
