An Intermediate Representation for Optimizing Machine Learning Pipelines

Andreas Kunft, Asterios Katsifodimos, Sebastian Schelter, Sebastian Breß, Tilmann Rabl, Volker Markl

Research output: Contribution to journal › Conference article › Scientific › peer-review


Abstract

Machine learning (ML) pipelines for model training and validation typically include preprocessing, such as data cleaning and feature engineering, prior to training an ML model. Preprocessing combines relational algebra and user-defined functions (UDFs), while model training uses iterations and linear algebra. Current systems are tailored to either of the two. As a consequence, preprocessing and ML steps are optimized in isolation. To enable holistic optimization of ML training pipelines, we present Lara, a declarative domain-specific language for collections and matrices. Lara's intermediate representation (IR) reflects on the complete program, i.e., UDFs, control flow, and both data types. Two views on the IR enable diverse optimizations. Monads enable operator pushdown and fusion across type and loop boundaries. Combinators provide the semantics of domain-specific operators and optimize data access and cross-validation of ML algorithms. Our experiments on preprocessing pipelines and selected ML algorithms show the effects of our proposed optimizations on dense and sparse data, achieving speedups of up to an order of magnitude.
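To make the shape of such pipelines concrete, the following is a minimal sketch in plain Scala of a program that mixes collection-style preprocessing (filter, map, a UDF) with matrix-style training (a gradient-descent loop). It does not use Lara's actual API; all names and types here (Record, the scaling UDF, the hand-rolled matrix arrays) are illustrative assumptions meant only to show the type and loop boundaries that Lara's IR optimizes across.

object PipelineSketch {
  // Illustrative input schema, e.g. one parsed CSV record (not from the paper).
  case class Record(id: Int, feature1: Double, feature2: Double, label: Double)

  def main(args: Array[String]): Unit = {
    // Preprocessing: relational-style operators plus a user-defined scaling function.
    val raw = Seq(
      Record(1, 0.5, 1.2, 1.0),
      Record(2, -0.3, 0.7, 0.0),
      Record(3, 2.1, -0.4, 1.0)
    )
    val scale: Double => Double = x => (x - 0.5) / 1.5
    val cleaned = raw
      .filter(_.feature1 > -1.0)
      .map(r => (Array(scale(r.feature1), scale(r.feature2)), r.label))

    // Type boundary: collections to matrices. Lara's IR reasons across this
    // boundary (e.g., pushing projections into preprocessing); here it is an
    // explicit copy into plain arrays standing in for a feature matrix.
    val x: Array[Array[Double]] = cleaned.map(_._1).toArray
    val y: Array[Double]        = cleaned.map(_._2).toArray

    // Training: a few iterations of batch gradient descent over the feature matrix.
    var w  = Array.fill(2)(0.0)
    val lr = 0.1
    for (_ <- 0 until 10) {
      val grad = Array.fill(2)(0.0)
      for (i <- x.indices) {
        val err = x(i).zip(w).map { case (a, b) => a * b }.sum - y(i)
        for (j <- w.indices) grad(j) += err * x(i)(j) / x.length
      }
      w = w.zip(grad).map { case (wj, gj) => wj - lr * gj }
    }
    println(s"learned weights: ${w.mkString(", ")}")
  }
}

In a system like Lara, the preprocessing operators, the collection-to-matrix conversion, and the training loop would all be visible to one IR, so optimizations such as operator pushdown or fusion could cross the boundaries that this sketch keeps explicit.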

Original language: English
Pages (from-to): 1553-1567
Number of pages: 15
Journal: Proceedings of the VLDB Endowment
Volume: 12
Issue number: 11
DOIs
Publication status: Published - Jul 2019
