This tool runs an empirical evaluation of EvoSuite model seeding. It can execute EvoSuite in three different modes:
- no_seeding: run EvoSuite without any call-sequence seeding.
- test_seeding: run EvoSuite with seeded call sequences carved from the execution of the existing tests.
- model_seeding: run EvoSuite with call sequences seeded from models, which are inferred by static analysis of the source code of the software under test and dynamic analysis of its existing test cases.
This tool needs the following inputs:
- Binary files: the compiled packages of the software under test, located in `bins/[Project_name]`.
- Models (for model_seeding): the models generated by Botsing model generation, all located in the `analysis-result/[Project_name]/models` directory.
- Related test lists (for test_seeding): the generated XML files containing the related tests. These XML files are also produced by Botsing model generation and should all be located in the `analysis-result/[Project_name]/carvedTests` directory.
- `classes.csv`: a CSV file listing the classes of the target projects for which unit tests should be generated. The first column is the name of the target class; the second is the project name.
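For illustration, a `classes.csv` might look like this (the class and project names below are hypothetical; your real entries must match the projects under `bins/`):

```shell
# Create a sample classes.csv (hypothetical entries for illustration only):
# column 1 = fully qualified target class, column 2 = project name.
cat > classes.csv <<'EOF'
org.apache.commons.lang3.StringUtils,commons-lang
org.apache.commons.lang3.math.NumberUtils,commons-lang
EOF

head -n 1 classes.csv
```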
For no_seeding:

With docker:

```
docker-run.sh <Rounds> <Maximum_java_processes>
```

Without docker:

```
main.sh <Rounds> <List_of_classes> <Maximum_java_processes> <search_budget> <population> <seeding_probabilities>
```

For test_seeding (`-t`):

With docker:

```
docker-run.sh -t <Rounds> <Maximum_java_processes>
```

Without docker:

```
main.sh -t <Rounds> <List_of_classes> <Maximum_java_processes> <search_budget> <population> <seeding_probabilities>
```

For model_seeding (`-m`, with optional `-r`):

With docker:

```
docker-run.sh -m [-r] <Rounds> <Maximum_java_processes> <object_pool_usage_probabilities> <seeding_probabilities>
```

Without docker:

```
main.sh -m [-r] <Rounds> <List_of_classes> <Maximum_java_processes> <search_budget> <population> <object_pool_usage_probabilities> <seeding_probabilities>
```
<Rounds> indicates the number of times each experiment is repeated; the minimum value for the empirical evaluation should be 10. <List_of_classes> points to the classes.csv file. <Maximum_java_processes> limits the number of parallel Java processes. <search_budget> is the number of seconds EvoSuite has for each test-generation execution. <population> is the number of test individuals in each generation of the search process. <object_pool_usage_probabilities> (model_seeding only) is a comma-separated list of probabilities for using the object pool. Finally, <seeding_probabilities> is a comma-separated list of probabilities for seed_clone.
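As a back-of-envelope check of experiment size (assuming each round runs every class in classes.csv under every seeding probability; all numbers below are hypothetical):

```shell
# Hypothetical experiment-size estimate: rounds x classes x probabilities.
rounds=10                        # <Rounds>, the minimum recommended value
classes=3                        # number of rows in classes.csv
probs="0.2,0.5,0.8"              # <seeding_probabilities>
nprobs=$(echo "$probs" | tr ',' '\n' | wc -l)
total=$((rounds * classes * nprobs))
echo "total EvoSuite runs: $total"
```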
Note #1: Each EvoSuite execution launches two Java processes (master and client), so the number of parallel EvoSuite instances will be half of <Maximum_java_processes>.
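A quick way to see how <Maximum_java_processes> translates into parallel EvoSuite instances (the value 16 is only an example):

```shell
# Each EvoSuite run = master + client, i.e. 2 Java processes.
max_java_processes=16            # example value for <Maximum_java_processes>
parallel_instances=$((max_java_processes / 2))
echo "parallel EvoSuite instances: $parallel_instances"
```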
The data collected during execution is saved as CSV files in the `results/` directory. The generated tests are stored in `generated_tests/<no_seeding | test_seeding | model_seeding>/<project_name>-<target_class>-<clone_seed_probability>-<id_of_round>`.
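For example, expanding that directory template with hypothetical values, a model_seeding run might write its tests to:

```shell
# Hypothetical values plugged into the generated_tests directory template.
mode=model_seeding
project=commons-lang             # <project_name>
class=StringUtils                # <target_class>
prob=0.8                         # <clone_seed_probability>
round=3                          # <id_of_round>
dir="generated_tests/$mode/$project-$class-$prob-$round"
echo "$dir"
```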
After generating tests, you just need to run the `run_pitest.sh` script in the root directory. This script first collects all of the existing generated test cases; it then removes the flaky ones (each test case is executed 5 times, and any test that fails even once is discarded); finally, it applies PIT to the remaining test cases.
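The flaky-test filter can be sketched as follows. This is a minimal illustration, not the actual run_pitest.sh code: `run_test` stands in for the real JUnit invocation, and its mock behaviour (one always-failing test) is invented for the example.

```shell
# Sketch of the flaky-test filter: run each test 5 times, drop it on any failure.
run_test() {
  # Mock runner for illustration: "FlakyTest" fails, everything else passes.
  [ "$1" != "FlakyTest" ]
}

kept=""
for t in StableTest FlakyTest; do
  stable=yes
  for i in 1 2 3 4 5; do                     # 5 repetitions, as run_pitest.sh does
    run_test "$t" || { stable=no; break; }   # a single failure marks it as flaky
  done
  [ "$stable" = yes ] && kept="$kept $t"
done
echo "kept:$kept"
```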
The PIT report is saved in the following directory: `pitest/out/<no_seeding | test_seeding | model_seeding>/<project_name>-<target_class>-<clone_seed_probability>-<id_of_round>`.