Application of Transformers in Software Test Case Prioritization

Date
2022-09
Abstract

Most automated software testing tasks can benefit from an abstract representation of test cases. Traditionally, this is done by encoding test cases based on their code coverage. Specification-level criteria can replace code coverage to better represent test cases’ behavior, but they are often not cost-effective. In this paper, we hypothesize that execution traces of the test cases can be a good alternative for abstracting their behavior for automated testing tasks. We propose a transformer-based embedding approach, Transformer Test2Vec, that maps test execution traces to a latent space. We evaluate this representation in the test case prioritization (TP) task. Our default TP method ranks test cases by the similarity of their embedded vectors to historical failing test vectors. We also study an alternative based on the diversity of test vectors. Finally, we propose a method to decide which TP to choose for a given test suite. The experiment is based on several real and seeded faults with over a million execution traces. Results show that our proposed TP improves on the best embedding alternative by 40.62% in terms of the median normalized rank of the first failing test case (FFR). It outperforms traditional code coverage-based approaches by 20.72% and 72.59% in terms of median APFD (Average Percentage of Faults Detected) and median normalized FFR, respectively.
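
The similarity-based prioritization summarized above can be illustrated with a minimal sketch; this is not the thesis's implementation, and all names and dimensions here (prioritize_by_similarity, the 8-dimensional vectors) are hypothetical. Assuming each test's execution trace has already been embedded into a fixed-length vector, tests are ranked by their maximum cosine similarity to historical failing test vectors, most similar first:

    import numpy as np

    def prioritize_by_similarity(test_vectors, failing_vectors):
        # test_vectors: dict mapping test id -> embedding vector (np.ndarray)
        # failing_vectors: list of embedding vectors of historically failing tests
        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        scores = {}
        for test_id, vec in test_vectors.items():
            # Score each test by its maximum similarity to any known failing vector.
            scores[test_id] = max(cosine(vec, f) for f in failing_vectors)

        # Tests most similar to past failures are executed earlier.
        return sorted(scores, key=scores.get, reverse=True)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Hypothetical 8-dimensional embeddings standing in for the learned trace vectors.
        tests = {f"t{i}": rng.normal(size=8) for i in range(5)}
        failing = [rng.normal(size=8) for _ in range(2)]
        print(prioritize_by_similarity(tests, failing))

A diversity-based alternative, as studied in the thesis, would instead order tests so that each newly selected test is maximally dissimilar to those already chosen.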

Keywords
Software Engineering, Machine Learning, Test Case Prioritization, Transformers
Citation
Jabbar, E. (2022). Application of transformers in software test case prioritization (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.