Evaluating and comparing language workbenches: Existing results and benchmarks for the future

Sebastian Erdweg*, Tijs van der Storm, Markus Völter, Laurence Tratt, Remi Bosman, William R. Cook, Albert Gerritsen, Angelo Hulshout, Steven Kelly, Alex Loh, Gabriël Konat, Pedro J. Molina, Martin Palatnik, Risto Pohjonen, Eugen Schindler, Klemens Schindler, Riccardo Solmi, Vlad Vergu, Eelco Visser, Kevin van der Vlist, Guido Wachsmuth, Jimi van der Woning

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

137 Citations (Scopus)

Abstract

Language workbenches are environments for simplifying the creation and use of computer languages. The annual Language Workbench Challenge (LWC) was launched in 2011 to give the many academic and industrial researchers in this area an opportunity to quantitatively and qualitatively compare their approaches. We first describe all four LWCs to date, before focussing on the approaches used, and results generated, during the third LWC. We give various empirical data for ten approaches from the third LWC. We present a generic feature model within which the approaches can be understood and contrasted. Finally, based on our experiences of the existing LWCs, we propose a number of benchmark problems for future LWCs.

Original language: English
Pages (from-to): 24-47
Number of pages: 24
Journal: Computer Languages, Systems and Structures
Volume: 44
DOIs
Publication status: Published - 1 Dec 2015

Keywords

  • Benchmarks
  • Domain-specific languages
  • Language workbenches
  • Questionnaire language
  • Survey

