Standard

Benchmarking Distributed Stream Data Processing Systems. / Karimov, Jeyhun; Rabl, Tilmann; Katsifodimos, Asterios; Samarev, Roman; Heiskanen, Henri; Markl, Volker.

Proceedings - IEEE 34th International Conference on Data Engineering, ICDE 2018. Piscataway, NJ : IEEE, 2018. p. 1519-1530, 8509390.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Harvard

Karimov, J, Rabl, T, Katsifodimos, A, Samarev, R, Heiskanen, H & Markl, V 2018, Benchmarking Distributed Stream Data Processing Systems. in Proceedings - IEEE 34th International Conference on Data Engineering, ICDE 2018., 8509390, IEEE, Piscataway, NJ, pp. 1519-1530, 34th IEEE International Conference on Data Engineering, ICDE 2018, Paris, France, 16/04/18. https://doi.org/10.1109/ICDE.2018.00169

APA

Karimov, J., Rabl, T., Katsifodimos, A., Samarev, R., Heiskanen, H., & Markl, V. (2018). Benchmarking Distributed Stream Data Processing Systems. In Proceedings - IEEE 34th International Conference on Data Engineering, ICDE 2018 (pp. 1519-1530). [8509390] Piscataway, NJ: IEEE. https://doi.org/10.1109/ICDE.2018.00169

Vancouver

Karimov J, Rabl T, Katsifodimos A, Samarev R, Heiskanen H, Markl V. Benchmarking Distributed Stream Data Processing Systems. In Proceedings - IEEE 34th International Conference on Data Engineering, ICDE 2018. Piscataway, NJ: IEEE. 2018. p. 1519-1530. 8509390 https://doi.org/10.1109/ICDE.2018.00169

Author

Karimov, Jeyhun ; Rabl, Tilmann ; Katsifodimos, Asterios ; Samarev, Roman ; Heiskanen, Henri ; Markl, Volker. / Benchmarking Distributed Stream Data Processing Systems. Proceedings - IEEE 34th International Conference on Data Engineering, ICDE 2018. Piscataway, NJ : IEEE, 2018. pp. 1519-1530

BibTeX

@inproceedings{427b434d6e9b4fd981d849a265cd90ac,
title = "Benchmarking Distributed Stream Data Processing Systems",
abstract = "The need for scalable and efficient stream analysis has led to the development of many open-source streaming data processing systems (SDPSs) with highly diverging capabilities and performance characteristics. While first initiatives try to compare the systems for simple workloads, there is a clear gap of detailed analyses of the systems' performance characteristics. In this paper, we propose a framework for benchmarking distributed stream processing engines. We use our suite to evaluate the performance of three widely used SDPSs in detail, namely Apache Storm, Apache Spark, and Apache Flink. Our evaluation focuses in particular on measuring the throughput and latency of windowed operations, which are the basic type of operations in stream analytics. For this benchmark, we design workloads based on real-life, industrial use-cases inspired by the online gaming industry. The contribution of our work is threefold. First, we give a definition of latency and throughput for stateful operators. Second, we carefully separate the system under test and driver, in order to correctly represent the open world model of typical stream processing deployments and can, therefore, measure system performance under realistic conditions. Third, we build the first benchmarking framework to define and test the sustainable performance of streaming systems. Our detailed evaluation highlights the individual characteristics and use-cases of each system.",
keywords = "Apache Flink, Apache Spark, Apache Storm, Stream benchmark, Stream data processing",
author = "Jeyhun Karimov and Tilmann Rabl and Asterios Katsifodimos and Roman Samarev and Henri Heiskanen and Volker Markl",
year = "2018",
month = "10",
day = "24",
doi = "10.1109/ICDE.2018.00169",
language = "English",
pages = "1519--1530",
booktitle = "Proceedings - IEEE 34th International Conference on Data Engineering, ICDE 2018",
publisher = "IEEE",
address = "United States",

}

RIS

TY - GEN
T1 - Benchmarking Distributed Stream Data Processing Systems
AU - Karimov, Jeyhun
AU - Rabl, Tilmann
AU - Katsifodimos, Asterios
AU - Samarev, Roman
AU - Heiskanen, Henri
AU - Markl, Volker
PY - 2018/10/24
Y1 - 2018/10/24
N2 - The need for scalable and efficient stream analysis has led to the development of many open-source streaming data processing systems (SDPSs) with highly diverging capabilities and performance characteristics. While first initiatives try to compare the systems for simple workloads, there is a clear gap of detailed analyses of the systems' performance characteristics. In this paper, we propose a framework for benchmarking distributed stream processing engines. We use our suite to evaluate the performance of three widely used SDPSs in detail, namely Apache Storm, Apache Spark, and Apache Flink. Our evaluation focuses in particular on measuring the throughput and latency of windowed operations, which are the basic type of operations in stream analytics. For this benchmark, we design workloads based on real-life, industrial use-cases inspired by the online gaming industry. The contribution of our work is threefold. First, we give a definition of latency and throughput for stateful operators. Second, we carefully separate the system under test and driver, in order to correctly represent the open world model of typical stream processing deployments and can, therefore, measure system performance under realistic conditions. Third, we build the first benchmarking framework to define and test the sustainable performance of streaming systems. Our detailed evaluation highlights the individual characteristics and use-cases of each system.
AB - The need for scalable and efficient stream analysis has led to the development of many open-source streaming data processing systems (SDPSs) with highly diverging capabilities and performance characteristics. While first initiatives try to compare the systems for simple workloads, there is a clear gap of detailed analyses of the systems' performance characteristics. In this paper, we propose a framework for benchmarking distributed stream processing engines. We use our suite to evaluate the performance of three widely used SDPSs in detail, namely Apache Storm, Apache Spark, and Apache Flink. Our evaluation focuses in particular on measuring the throughput and latency of windowed operations, which are the basic type of operations in stream analytics. For this benchmark, we design workloads based on real-life, industrial use-cases inspired by the online gaming industry. The contribution of our work is threefold. First, we give a definition of latency and throughput for stateful operators. Second, we carefully separate the system under test and driver, in order to correctly represent the open world model of typical stream processing deployments and can, therefore, measure system performance under realistic conditions. Third, we build the first benchmarking framework to define and test the sustainable performance of streaming systems. Our detailed evaluation highlights the individual characteristics and use-cases of each system.
KW - Apache Flink
KW - Apache Spark
KW - Apache Storm
KW - Stream benchmark
KW - Stream data processing
UR - http://www.scopus.com/inward/record.url?scp=85057125951&partnerID=8YFLogxK
U2 - 10.1109/ICDE.2018.00169
DO - 10.1109/ICDE.2018.00169
M3 - Conference contribution
SP - 1519
EP - 1530
BT - Proceedings - IEEE 34th International Conference on Data Engineering, ICDE 2018
PB - IEEE
CY - Piscataway, NJ
ER -
