
We report on the advances in the sixth edition of the JUnit tool competition. This year the contest introduces new benchmarks to assess the performance of JUnit testing tools on different types of real-world software projects. Building on the statistical analyses of previous editions, we extend them with an analysis of the tools' combined performance, aiming to beat the human-written tests. Overall, the sixth competition evaluates four automated JUnit testing tools, taking human-written test cases for the selected benchmark projects as the baseline. The paper details the modifications made to the methodology and provides the full results of the competition.

Original language: English
Title of host publication: SBST'18
Subtitle of host publication: Proceedings of the 11th International Workshop on Search-Based Software Testing
Editors: Alessandra Gorla, Juan Pablo Galeotti
Place of publication: Piscataway, NY, USA
Publisher: Association for Computing Machinery (ACM)
Pages: 22-29
Number of pages: 8
ISBN (electronic): 978-1-4503-5741-8
Publication status: Published - 2018
Event: 11th ACM/IEEE International Workshop on Search-Based Software Testing, SBST 2018, co-located with the 40th International Conference on Software Engineering, ICSE 2018 - Gothenburg, Sweden
Duration: 28 May 2018 - 29 May 2018

Conference

Conference: 11th ACM/IEEE International Workshop on Search-Based Software Testing, SBST 2018, co-located with the 40th International Conference on Software Engineering, ICSE 2018
Country: Sweden
City: Gothenburg
Period: 28/05/18 - 29/05/18

Research areas

• automation, benchmark, combined performance, Java, mutation testing, statistical analysis, tool competition, unit testing
