Documents

  • PID4080971

    Accepted author manuscript, 937 KB, PDF document

Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shown not to help developers find more bugs, even though they reach higher structural coverage than manually written tests. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestDescriber, which automatically generates summaries of the portion of code exercised by each individual test case, thereby improving understandability. We argue that this approach can complement current techniques for automated unit test generation, such as search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which developers consider particularly useful.
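To illustrate the idea, the sketch below shows what a generated unit test enriched with a TestDescriber-style natural-language summary might look like. The class, method names, and the summary text are invented for illustration; they are not taken from the paper's artifacts or from any real generated test suite.

```java
// Hypothetical example of a generated test annotated with a
// TestDescriber-style summary. All identifiers are invented.
import java.util.ArrayDeque;
import java.util.Deque;

public class StackTestExample {

    /**
     * Summary (TestDescriber-style, hand-written for illustration):
     * "The test creates an empty stack, pushes the integer 5 onto it,
     * and verifies that pop returns 5 and leaves the stack empty.
     * It covers the push and pop paths for a non-empty stack."
     */
    static void testPushThenPop() {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(5);            // exercised: push onto an empty stack
        int popped = stack.pop(); // exercised: pop from a singleton stack
        if (popped != 5) {
            throw new AssertionError("pop should return the last pushed value");
        }
        if (!stack.isEmpty()) {
            throw new AssertionError("stack should be empty after popping");
        }
    }

    public static void main(String[] args) {
        testPushThenPop();
        System.out.println("testPushThenPop passed");
    }
}
```

The summary comment restates, in natural language, which behavior the test exercises, which is the kind of information the evaluation found helps developers comprehend generated tests.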

Original language: English
Title of host publication: Proceedings - 2016 IEEE/ACM 38th IEEE International Conference on Software Engineering Companion, ICSE 2016
Publisher: IEEE Computer Society
Pages: 547-558
Number of pages: 12
ISBN (Electronic): 978-1-4503-3900-1, 978-1-4503-4205-6
State: Published - 14 May 2016
Event: 2016 IEEE/ACM 38th IEEE International Conference on Software Engineering, ICSE 2016 - Austin, United States

Conference

Conference: 2016 IEEE/ACM 38th IEEE International Conference on Software Engineering, ICSE 2016
Country: United States
City: Austin
Period: 14/05/16 - 22/05/16

    Research areas

  • Empirical Study, Software testing, Test Case Summarization
