Standard

How Good Is Your Puppet? An Empirically Defined and Validated Quality Model for Puppet. / van der Bent, Eduard; Hage, Jurriaan; Visser, Joost; Gousios, Georgios.

Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering. 2018. (SANER 2018).

Research output: Scientific - peer-review › Conference contribution

Harvard

van der Bent, E, Hage, J, Visser, J & Gousios, G 2018, How Good Is Your Puppet? An Empirically Defined and Validated Quality Model for Puppet. in Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering. SANER 2018.

APA

van der Bent, E., Hage, J., Visser, J., & Gousios, G. (2018). How Good Is Your Puppet? An Empirically Defined and Validated Quality Model for Puppet. In Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2018).

Vancouver

van der Bent E, Hage J, Visser J, Gousios G. How Good Is Your Puppet? An Empirically Defined and Validated Quality Model for Puppet. In Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering. 2018. (SANER 2018).

Author

van der Bent, Eduard ; Hage, Jurriaan ; Visser, Joost ; Gousios, Georgios. / How Good Is Your Puppet? An Empirically Defined and Validated Quality Model for Puppet. Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering. 2018. (SANER 2018).

BibTeX

@inproceedings{0f664a13a4af4ea4b230e69d7bf8f93d,
title = "How Good Is Your Puppet? An Empirically Defined and Validated Quality Model for Puppet",
abstract = "Puppet is a declarative language for configuration management that has rapidly gained popularity in recent years. Numerous organizations now rely on Puppet code for deploying their software systems onto cloud infrastructures. In this paper we provide a definition of code quality for Puppet code and an automated technique for measuring and rating Puppet code quality. To this end, we first explore the notion of code quality as it applies to Puppet code by performing a survey among Puppet developers. Second, we develop a measurement model for the maintainability aspect of Puppet code quality. To arrive at this measurement model, we derive appropriate quality metrics from our survey results and from existing software quality models. We implemented the Puppet code quality model in a software analysis tool. We validate our definition of Puppet code quality and the measurement model by a structured interview with Puppet experts and by comparing the tool results with quality judgments of those experts. The validation shows that the measurement model and tool provide quality judgments of Puppet code that closely match the judgments of experts. Also, the experts deem the model appropriate and usable in practice. The Software Improvement Group (SIG) has started using the model in its consultancy practice.",
author = "{van der Bent}, Eduard and Jurriaan Hage and Joost Visser and Georgios Gousios",
year = "2018",
month = mar,
series = "SANER 2018",
booktitle = "Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering",

}
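The abstract describes deriving quality metrics for Puppet code and aggregating them in a measurement model. As a purely illustrative sketch (not the paper's validated model), the following Python computes a few toy size and complexity measurements over a Puppet manifest; the function name and the choice of metrics are hypothetical:

```python
import re

def puppet_metrics(manifest: str) -> dict:
    """Compute a few illustrative static metrics for a Puppet manifest.

    These are NOT the metrics from the paper's quality model; they only
    show the kind of measurements such a model might aggregate.
    """
    all_lines = manifest.splitlines()
    # Non-blank, non-comment source lines.
    code_lines = [l for l in all_lines if l.strip() and not l.strip().startswith("#")]
    return {
        "sloc": len(code_lines),
        # Count resource declarations such as `package { ... }`.
        "resources": len(re.findall(r"^\s*\w+\s*\{", manifest, re.M)),
        # `exec` resources shell out and are often flagged as a maintainability risk.
        "exec_resources": len(re.findall(r"^\s*exec\s*\{", manifest, re.M)),
        "max_line_length": max((len(l) for l in all_lines), default=0),
    }

example = """\
# install and run nginx
package { 'nginx':
  ensure => installed,
}
service { 'nginx':
  ensure  => running,
  require => Package['nginx'],
}
"""
print(puppet_metrics(example))
```

A real model would, as the abstract notes, calibrate and weight such measurements against expert judgment rather than report raw counts.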

RIS

TY - CONF

T1 - How Good Is Your Puppet? An Empirically Defined and Validated Quality Model for Puppet

AU - van der Bent, Eduard

AU - Hage, Jurriaan

AU - Visser, Joost

AU - Gousios, Georgios

PY - 2018/3

Y1 - 2018/3

N2 - Puppet is a declarative language for configuration management that has rapidly gained popularity in recent years. Numerous organizations now rely on Puppet code for deploying their software systems onto cloud infrastructures. In this paper we provide a definition of code quality for Puppet code and an automated technique for measuring and rating Puppet code quality. To this end, we first explore the notion of code quality as it applies to Puppet code by performing a survey among Puppet developers. Second, we develop a measurement model for the maintainability aspect of Puppet code quality. To arrive at this measurement model, we derive appropriate quality metrics from our survey results and from existing software quality models. We implemented the Puppet code quality model in a software analysis tool. We validate our definition of Puppet code quality and the measurement model by a structured interview with Puppet experts and by comparing the tool results with quality judgments of those experts. The validation shows that the measurement model and tool provide quality judgments of Puppet code that closely match the judgments of experts. Also, the experts deem the model appropriate and usable in practice. The Software Improvement Group (SIG) has started using the model in its consultancy practice.

AB - Puppet is a declarative language for configuration management that has rapidly gained popularity in recent years. Numerous organizations now rely on Puppet code for deploying their software systems onto cloud infrastructures. In this paper we provide a definition of code quality for Puppet code and an automated technique for measuring and rating Puppet code quality. To this end, we first explore the notion of code quality as it applies to Puppet code by performing a survey among Puppet developers. Second, we develop a measurement model for the maintainability aspect of Puppet code quality. To arrive at this measurement model, we derive appropriate quality metrics from our survey results and from existing software quality models. We implemented the Puppet code quality model in a software analysis tool. We validate our definition of Puppet code quality and the measurement model by a structured interview with Puppet experts and by comparing the tool results with quality judgments of those experts. The validation shows that the measurement model and tool provide quality judgments of Puppet code that closely match the judgments of experts. Also, the experts deem the model appropriate and usable in practice. The Software Improvement Group (SIG) has started using the model in its consultancy practice.

M3 - Conference contribution

T3 - SANER 2018

BT - Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering

ER -