Standard

Re-evaluating Method-Level Bug Prediction. / Pascarella, Luca; Palomba, Fabio; Bacchelli, Alberto.

25th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2018. Piscataway, NJ : IEEE, 2018. p. 1-10.

Research output: Scientific - peer-review › Conference contribution

Harvard

Pascarella, L, Palomba, F & Bacchelli, A 2018, Re-evaluating Method-Level Bug Prediction. in 25th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2018. IEEE, Piscataway, NJ, pp. 1-10, SANER 2018, Campobasso, Italy, 20/02/18. DOI: 10.1109/SANER.2018.8330264

APA

Pascarella, L., Palomba, F., & Bacchelli, A. (2018). Re-evaluating Method-Level Bug Prediction. In 25th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2018 (pp. 1-10). Piscataway, NJ: IEEE. DOI: 10.1109/SANER.2018.8330264

Vancouver

Pascarella L, Palomba F, Bacchelli A. Re-evaluating Method-Level Bug Prediction. In: 25th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2018. Piscataway, NJ: IEEE; 2018. p. 1-10. Available from: https://doi.org/10.1109/SANER.2018.8330264

Author

Pascarella, Luca ; Palomba, Fabio ; Bacchelli, Alberto. / Re-evaluating Method-Level Bug Prediction. 25th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2018. Piscataway, NJ : IEEE, 2018. pp. 1-10

BibTeX

@inproceedings{eaf8f53d47f6445b9548d434939c8d4a,
title = "Re-evaluating Method-Level Bug Prediction",
abstract = "Bug prediction is aimed at supporting developers in the identification of code artifacts more likely to be defective. Researchers have proposed prediction models to identify bug-prone methods and provided promising evidence that it is possible to operate at this level of granularity. Particularly, models based on a mixture of product and process metrics, used as independent variables, led to the best results. In this study, we first replicate previous research on method-level bug prediction on different systems/timespans. Afterwards, we reflect on the evaluation strategy and propose a more realistic one. Key results of our study show that the performance of the method-level bug prediction model is similar to what was previously reported, also for different systems/timespans, when evaluated with the same strategy. However, when evaluated with a more realistic strategy, all the models show a dramatic drop in performance, exhibiting results close to those of a random classifier. Our replication and negative results indicate that method-level bug prediction is still an open challenge.",
keywords = "empirical software engineering, bug prediction, replication, negative results",
author = "Luca Pascarella and Fabio Palomba and Alberto Bacchelli",
year = "2018",
doi = "10.1109/SANER.2018.8330264",
pages = "1--10",
booktitle = "25th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2018",
publisher = "IEEE",
address = "Piscataway, NJ",
}

RIS

TY - CONF

T1 - Re-evaluating Method-Level Bug Prediction

AU - Pascarella, Luca

AU - Palomba, Fabio

AU - Bacchelli, Alberto

PY - 2018

Y1 - 2018

N2 - Bug prediction is aimed at supporting developers in the identification of code artifacts more likely to be defective. Researchers have proposed prediction models to identify bug-prone methods and provided promising evidence that it is possible to operate at this level of granularity. Particularly, models based on a mixture of product and process metrics, used as independent variables, led to the best results. In this study, we first replicate previous research on method-level bug prediction on different systems/timespans. Afterwards, we reflect on the evaluation strategy and propose a more realistic one. Key results of our study show that the performance of the method-level bug prediction model is similar to what was previously reported, also for different systems/timespans, when evaluated with the same strategy. However, when evaluated with a more realistic strategy, all the models show a dramatic drop in performance, exhibiting results close to those of a random classifier. Our replication and negative results indicate that method-level bug prediction is still an open challenge.

AB - Bug prediction is aimed at supporting developers in the identification of code artifacts more likely to be defective. Researchers have proposed prediction models to identify bug-prone methods and provided promising evidence that it is possible to operate at this level of granularity. Particularly, models based on a mixture of product and process metrics, used as independent variables, led to the best results. In this study, we first replicate previous research on method-level bug prediction on different systems/timespans. Afterwards, we reflect on the evaluation strategy and propose a more realistic one. Key results of our study show that the performance of the method-level bug prediction model is similar to what was previously reported, also for different systems/timespans, when evaluated with the same strategy. However, when evaluated with a more realistic strategy, all the models show a dramatic drop in performance, exhibiting results close to those of a random classifier. Our replication and negative results indicate that method-level bug prediction is still an open challenge.

KW - empirical software engineering

KW - bug prediction

KW - replication

KW - negative results

UR - http://resolver.tudelft.nl/uuid:eaf8f53d-47f6-445b-9548-d434939c8d4a

U2 - 10.1109/SANER.2018.8330264

DO - 10.1109/SANER.2018.8330264

M3 - Conference contribution

SP - 1

EP - 10

BT - 25th IEEE International Conference on Software Analysis, Evolution and Reengineering, SANER 2018

PB - IEEE

ER -

ID: 39465054