Software testing is one of the key activities for achieving software quality in practice. Despite its importance, however, remarkably little is known about how developers test in real-world projects. In this paper, we report on the surprising results of a large-scale field study with 2,443 software engineers whose development activities we closely monitored over the course of 2.5 years in four Integrated Development Environments (IDEs). Our findings question several commonly shared assumptions and beliefs about developer testing: half of the developers in our study do not test; developers rarely run their tests in the IDE; once developers do start testing, they do so extensively; most programming sessions end without any test execution; only a quarter of test cases are responsible for three quarters of all test failures; 12% of tests show flaky behavior; Test-Driven Development (TDD) is not widely practiced; and software developers spend only a quarter of their time engineering tests, whereas they believe they spend half of their time testing. We observed only minor differences in testing practices among developers across the different IDEs, Java, and C#. We summarize this practice of loosely guiding one's development efforts with the help of testing as Test-Guided Development (TGD).
Original language: English
Journal: IEEE Transactions on Software Engineering
State: Accepted/In press - 2018

    Research areas

  • Developer testing, Developer tests, Test-Driven Development, Test-Guided Development
