Abstract:
Background: Choosing the right software test automation tool is not trivial, and recent industrial surveys indicate that the lack of suitable tools is the main obstacle to test automation. Aim: In this paper, we study how practitioners tackle the problem of choosing the right test automation tool. Method: We synthesize the “voice” of practitioners through a grey literature review of sources originating from 53 different companies. The industry experts behind the sources held roles such as “Software Test Automation Architect” and “Principal Software Engineer”. Results: A common consensus about the important criteria exists, but those criteria are not applied systematically. We summarize the scattered steps from individual sources into a comprehensive tool evaluation process comprising 12 steps and a total of 14 different criteria for choosing the right tool. Conclusions: Practitioners tend to have a general interest in, and be influenced by, related grey literature: about 78% of our sources had at least 20 backlinks (a reference comparable to a citation), with counts ranging from 3 to 759. There is a plethora of software testing tools available, yet practitioners seem to prefer and adopt the widely known and used ones. The study helps to identify potential pitfalls of existing processes and opportunities for comprehensive tool evaluation.
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Software Verification and Validation Lab (SVV Lab)
Disciplines:
Computer science
Author, co-author:
Raulamo-Jurvanen, Päivi
Mäntylä, Mika
GAROUSI, Vahid ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT)
External co-authors:
yes
Document language:
English
Title:
Choosing the Right Test Automation Tool: a Grey Literature Review of Practitioner Sources
Publication date:
15 June 2017
Event name:
International Conference on Evaluation and Assessment in Software Engineering (EASE 2017)
Event place:
Karlskrona, Sweden
Event date:
15-16 June 2017
Event scope:
International
Main work title:
Proceedings of International Conference on Evaluation and Assessment in Software Engineering (EASE 2017)