Bug prediction aims to support developers in identifying code artifacts that are more likely to be defective. Researchers have proposed prediction models to identify bug-prone methods and provided promising evidence that it is possible to operate at this level of granularity. In particular, models based on a mixture of product and process metrics, used as independent variables, led to the best results.
In this study, we first replicate previous research on method-level bug prediction on different systems/timespans. Afterwards, we reflect on the evaluation strategy and propose a more realistic one. Key results of our study show that, when evaluated with the same strategy, the performance of the method-level bug prediction models is similar to what was previously reported, also for different systems/timespans. However, when evaluated with a more realistic strategy, all the models show a dramatic drop in performance, exhibiting results close to those of a random classifier. Our replication and negative results indicate that method-level bug prediction is still an open challenge.
Original language: English
Title of host publication: 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering, SANER 2018
State: Accepted/In press - 2018
Event: IEEE International Conference on Software Analysis, Evolution and Reengineering - Campobasso, Italy
Abbreviated title: SANER
Research areas: empirical software engineering; bug prediction; replication; negative results