Keywords :
Configurable systems; Sampling; SAT; Software product lines; Variability model; Benchmarking platforms; Boolean formulae; Random sampling; Sampling space; Sampling technique; Software Product Line; State of the art; Variability modeling; Software; Information Systems; Modeling and Simulation; Computational Theory and Mathematics
Abstract :
[en] BURST is a benchmarking platform for uniform random sampling (URS) techniques. Given i) the description of a sampling space as a Boolean formula (in DIMACS format) and ii) a sampling budget (time and strength of uniformity), BURST evaluates ten samplers for scalability and uniformity. Scalability is measured as the time required to produce a sample; uniformity is assessed with Barbarik, a state-of-the-art statistical test with proven guarantees. BURST is easily extendable to new samplers and ships with i) 128 feature models of highly configurable systems and ii) many other models mined from artificial intelligence and satisfiability-solving benchmarks. We envision BURST supporting the assessment and design of URS techniques across multiple research communities.
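To make the evaluation protocol concrete, the sketch below (Python) illustrates the scalability side of such a benchmark: running one sampler on a DIMACS formula under a time budget and recording the wall-clock time needed to produce a sample. It is an illustrative sketch only; the sampler command, file path, and sample count are hypothetical placeholders and do not reflect BURST's actual interface. Uniformity would be assessed in a separate step, for example by handing the sampler's output to the Barbarik tester.

# Minimal sketch of the scalability measurement, for illustration only.
# The sampler command line, DIMACS path, and sample count below are
# hypothetical placeholders, not BURST's actual wrappers or options.
import subprocess
import time

def time_sample(sampler_cmd, dimacs_path, n_samples, timeout_s):
    """Run one sampler on a DIMACS formula and report wall-clock time.

    Returns (elapsed_seconds, completed), where completed is False if the
    sampler exceeded the time budget or exited with an error.
    """
    cmd = list(sampler_cmd) + [dimacs_path, str(n_samples)]
    start = time.monotonic()
    try:
        subprocess.run(cmd, capture_output=True, timeout=timeout_s, check=True)
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return time.monotonic() - start, False
    return time.monotonic() - start, True

if __name__ == "__main__":
    # Hypothetical invocation: sample 1000 solutions of a feature model within 600 s.
    elapsed, ok = time_sample(["./sampler"], "models/example.dimacs", 1000, 600)
    print(f"elapsed={elapsed:.1f}s completed={ok}")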
Disciplines :
Computer science
Author, co-author :
Acher, Mathieu ; Univ Rennes, CNRS, Inria, IRISA, Institut Universitaire de France (IUF), France
Perrouin, Gilles ; PReCISE, NaDI, Faculty of Computer Science, University of Namur, Belgium
Cordy, Maxime ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
External co-authors :
yes
Language :
English
Title :
BURST: Benchmarking uniform random sampling techniques
Acknowledgements :
The authors would particularly like to thank Kuldeep S. Meel from the National University of Singapore, Mate Soos from Zalando Germany, and their colleagues for their help setting up and fixing Barbarik as well as the CMS samplers. This research was partly funded by the ANR-17-CE25-0010-01 VaryVary project. Gilles Perrouin is a Research Associate at the FNRS. Maxime Cordy was supported by FNR Luxembourg (grant C19/IS/13566661/BEEHIVE/Cordy).
Bibliography
Kang, K., Cohen, S., Hess, J., Novak, W., Peterson, S., Feature-Oriented Domain Analysis (FODA). Tech. Rep. CMU/SEI-90-TR-21, Nov. 1990, SEI.
Schobbens, P.-Y., Heymans, P., Trigaux, J.-C., Feature diagrams: a survey and a formal semantics. RE'06: Proceedings of the 14th IEEE International Requirements Engineering Conference (RE'06), 2006, IEEE Computer Society, Washington, DC, USA, 136–145, 10.1109/RE.2006.23.
Medeiros, F., Kästner, C., Ribeiro, M., Gheyi, R., Apel, S., A comparison of 10 sampling algorithms for configurable systems. ICSE'16: Proceedings of the 38th International Conference on Software Engineering, 2016.
Halin, A., Nuttinck, A., Acher, M., Devroey, X., Perrouin, G., Baudry, B., Test them all, is it worth it? Assessing configuration sampling on the JHipster web development stack. Empir. Softw. Eng. 24:2 (2019), 674–717.
Cordy, M., Papadakis, M., Legay, A., Statistical model checking for variability-intensive systems. International Conference on Fundamental Approaches to Software Engineering, 2020, Springer, 294–314.
de Perthuis de Laillevault, A., Doerr, B., Doerr, C., Money for nothing: speeding up evolutionary algorithms through better initialization. Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, GECCO'15, 2015, ACM, New York, NY, USA, 815–822, 10.1145/2739480.2754760 http://doi.acm.org/10.1145/2739480.2754760.
Oh, J., Batory, D.S., Myers, M., Siegmund, N., Finding near-optimal configurations in product lines by random sampling. Bodden, E., Schäfer, W., van Deursen, A., Zisman, A., (eds.) Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2017, Paderborn, Germany, September 4-8, 2017, 2017, ACM, 61–71, 10.1145/3106237.3106273.
Heradio, R., Fernandez-Amoros, D., Galindo, J.A., Benavides, D., Uniform and scalable SAT-sampling for configurable systems. Proceedings of the 24th ACM Conference on Systems and Software Product Line: Volume A, 2020, 1–11.
Soos, M., Meel, K.S., BIRD: engineering an efficient CNF-XOR SAT solver and its applications to approximate model counting. Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2019.
Soos, M., Gocht, S., Meel, K.S., Tinted, detached, and lazy CNF-XOR solving and its applications to counting and sampling. Proceedings of International Conference on Computer-Aided Verification (CAV), 2020.
Dutra, R., Laeufer, K., Bachrach, J., Sen, K., Efficient sampling of SAT solutions for testing. Proceedings of the 40th International Conference on Software Engineering, ICSE 2018, Gothenburg, Sweden, May 27 - June 03, 2018, 2018, 549–559, 10.1145/3180155.3180248 http://doi.acm.org/10.1145/3180155.3180248.
Plazar, Q., Acher, M., Perrouin, G., Devroey, X., Cordy, M., Uniform sampling of SAT solutions for configurable systems: are we there yet?. ICST'19, 2019.
Knüppel, A., Thüm, T., Mennicke, S., Meinicke, J., Schaefer, I., Is there a mismatch between real-world feature models and product-line research?. Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2017, Paderborn, Germany, September 4-8, 2017, 2017, 291–302, 10.1145/3106237.3106252 http://doi.acm.org/10.1145/3106237.3106252.
Krieter, S., Thüm, T., Schulze, S., Schröter, R., Saake, G., Propagating configuration decisions with modal implication graphs. Proceedings of the 40th International Conference on Software Engineering, ICSE 2018, Gothenburg, Sweden, May 27 - June 03, 2018, 2018, 898–909, 10.1145/3180155.3180159 http://doi.acm.org/10.1145/3180155.3180159.
Liang, J.H., Ganesh, V., Czarnecki, K., Raman, V., SAT-based analysis of large real-world feature models is easy. Proceedings of the 19th International Conference on Software Product Line, SPLC'15, 2015, ACM, New York, NY, USA, 91–100, 10.1145/2791060.2791070 http://doi.acm.org/10.1145/2791060.2791070.
Raible, M., The JHipster mini-book. 2015, C4Media.
Chakraborty, S., Meel, K.S., On testing of uniform samplers. The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, 2019, AAAI Press, 7777–7784, 10.1609/aaai.v33i01.33017777.
Achlioptas, D., Hammoudeh, Z., Theodoropoulos, P., Fast sampling of perfectly uniform satisfying assignments. SAT, 2018.
Golia, P., Soos, M., Chakraborty, S., Meel, K.S., Designing samplers is easy: the boon of testers. 2021 Formal Methods in Computer Aided Design (FMCAD), 2021, IEEE, 222–230.
Acher, M., Perrouin, G., Cordy, M., Burst: a benchmarking platform for uniform random sampling techniques. Proceedings of the 25th ACM International Systems and Software Product Line Conference-Volume B, 2021, 36–40.
Meel, K.S., Pote, Y., Chakraborty, S., On testing of samplers. Advances in Neural Information Processing Systems (NeurIPS), 2020.
Heradio, R., Fernandez-Amoros, D., Galindo, J.A., Benavides, D., Batory, D., Uniform and scalable sampling of highly configurable systems. Empir. Softw. Eng. 27:2 (2022), 1–34.
Sharma, S., Gupta, R., Roy, S., Meel, K.S., Knowledge compilation meets uniform sampling. Proceedings of International Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR), 2018.