Automatically generating test cases with evolutionary algorithms to satisfy the path coverage criterion has attracted much research attention in software testing. When generating test cases to cover many target paths, the efficiency of existing methods degrades if infeasible or difficult paths exist in the program under test, because a significant portion of the search budget (i.e., the time allocated for the search to run) is consumed computing fitness evaluations of individuals on those paths. In this work, we present a feedback-directed mechanism that temporarily removes groups of paths from the target paths when no improvement is observed for these paths over successive generations. To fulfill this task, our strategy first organizes paths into groups. Then, in each generation, each individual's objective scores over all paths in a group are summed, and the lowest summed score among all individuals is assigned as the group's best aggregated score. A group is removed when its best aggregated score shows no improvement over the last two generations. The experimental results show that the proposed approach can significantly improve path coverage rates for programs under test with infeasible or difficult paths under a limited search budget. In particular, the feedback-directed mechanism reduces the search budget wasted on infeasible paths, or on difficult target paths that require many fitness evaluations before any improvement is achieved.
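The group-removal step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names (`best_aggregated_scores`, `update_active_groups`, the dictionary-based representation of individuals and groups) and the stagnation window of two generations are assumptions for the sketch.

```python
def best_aggregated_scores(objective_scores, groups):
    """For each group, sum each individual's objective scores over the
    paths in that group, then take the minimum over all individuals.

    objective_scores: list of dicts, one per individual, mapping
                      path id -> objective score (lower is better).
    groups: dict mapping group id -> list of path ids.
    """
    return {
        gid: min(
            sum(ind_scores[p] for p in paths)
            for ind_scores in objective_scores
        )
        for gid, paths in groups.items()
    }


def update_active_groups(history, current_best, active, patience=2):
    """Temporarily remove a group whose best aggregated score has not
    improved (decreased) over the last `patience` generations."""
    still_active = set()
    for gid in active:
        history.setdefault(gid, []).append(current_best[gid])
        recent = history[gid][-(patience + 1):]
        # Keep the group while there is not yet enough history, or while
        # some score in the window improves on the window's oldest score.
        if len(recent) <= patience or min(recent[1:]) < recent[0]:
            still_active.add(gid)
    return still_active
```

In an evolutionary loop, the search would compute `best_aggregated_scores` for the active groups each generation and call `update_active_groups` to drop stagnant groups, so that subsequent fitness evaluations are spent only on paths that still show progress.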
This work was supported by the National Natural Science Foundation of China (No. 61876207), the Natural Science Foundation of Guangdong Province (No. 2022A1515011491), and the Fundamental Research Funds for the Central Universities (No. 2020ZYGXZR014).
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).