Volume 3, Issue 1


Search-Based Software Test Data Generation for Path Coverage Based on a Feedback-Directed Mechanism

Stuart Dereck Semujju¹, Han Huang¹, Fangqing Liu¹ (corresponding author), Yi Xiang¹, Zhifeng Hao²
¹ School of Software Engineering, South China University of Technology, Guangzhou 510006, China
² College of Science, Shantou University, Shantou 515063, China

Abstract

Automatically generating test cases with evolutionary algorithms to satisfy the path coverage criterion has attracted much research attention in software testing. When many target paths must be covered, the efficiency of existing methods degrades if the program under test contains infeasible or difficult paths, because a significant share of the search budget (i.e., the time allocated for the search to run) is consumed by fitness evaluations of individuals on those paths. In this work, we present a feedback-directed mechanism that temporarily removes groups of paths from the set of target paths when no improvement is observed for them across generations. Our strategy first organizes the target paths into groups. In each generation, the objective scores of each individual over all paths in a group are summed, and the lowest of these sums across all individuals is taken as the group's best aggregated score. A group is removed when its best aggregated score shows no improvement over the last two generations. Experimental results show that, under a limited search budget, the proposed approach significantly improves path coverage rates for programs that contain infeasible or difficult paths. In particular, the feedback-directed mechanism reduces the search budget wasted on infeasible paths and on difficult target paths that require many fitness evaluations before any improvement occurs.
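The group-removal loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and variable names are invented, and the test for "no improvement over the last two generations" is interpreted here as comparing the current best aggregated score against the one recorded two generations earlier (assuming the best score is non-increasing under elitism).

```python
# Sketch of the feedback-directed group-removal mechanism.
# All names (update_active_groups, history, etc.) are illustrative assumptions.

def update_active_groups(groups, population, objective, history):
    """Return the groups that remain active in this generation.

    groups:      list of path groups (each group is a list of target paths)
    population:  current individuals (candidate test cases)
    objective:   objective(individual, path) -> float, lower is better
    history:     dict mapping group index -> list of best aggregated scores,
                 updated in place across generations
    """
    active = []
    for g_idx, group in enumerate(groups):
        # Sum each individual's objective scores over all paths in the group,
        # then take the lowest sum across the population as the group's
        # best aggregated score for this generation.
        best = min(sum(objective(ind, p) for p in group) for ind in population)
        history.setdefault(g_idx, []).append(best)
        scores = history[g_idx]
        # Temporarily remove the group when its best aggregated score shows
        # no improvement over the last two generations.
        if len(scores) >= 3 and scores[-1] >= scores[-3]:
            continue
        active.append(group)
    return active
```

In use, the search loop would call this once per generation and restrict fitness evaluations to the returned groups, so that infeasible or stagnating groups stop consuming the search budget.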

Keywords: many-objective optimization, automated test case generation, software testing, path coverage



Publication history

Received: 31 August 2022
Revised: 02 December 2022
Accepted: 08 December 2022
Published: 09 March 2023
Issue date: March 2023

Copyright

© The author(s) 2023

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61876207), the Natural Science Foundation of Guangdong Province (No. 2022A1515011491), and the Fundamental Research Funds for the Central Universities (No. 2020ZYGXZR014).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
