
Peer code review is a practice widely adopted in software projects to improve the quality of code. In current code review practice, code changes are manually inspected by developers other than the author before these changes are integrated into a project or put into production. We conducted a study to obtain an empirical understanding of what makes a code change easier to review. To this end, we surveyed published academic literature and sources from gray literature (e.g., blogs and white papers), interviewed ten professional developers, and designed and deployed a reviewability evaluation tool that professional developers used to rate the reviewability of 98 changes. We find that reviewability is determined by several factors, such as the change description, size, and coherent commit history. We provide recommendations for practitioners and researchers. Preprint: [https://pure.tudelft.nl/portal/files/45941832/reviewability.pdf]. Data and materials: [https://doi.org/10.5281/zenodo.1323659].
Original language: English
Title of host publication: ESEC/FSE 2018
Subtitle of host publication: Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering
Place of publication: New York, NY
Publisher: Association for Computing Machinery (ACM)
Pages: 201–212
Number of pages: 12
ISBN (Print): 978-1-4503-5573-5
DOIs
Publication status: Published - 2018
Event: ESEC/FSE 2018: The 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Lake Buena Vista, United States
Duration: 4 Nov 2018 – 9 Nov 2018
Conference number: 26th

Conference

Conference: ESEC/FSE 2018
Country: United States
City: Lake Buena Vista
Period: 4/11/18 – 9/11/18

Research areas

  • Code quality, code review, pull request

ID: 45941662