Abstract

The demand for additional performance driven by the rapid increase in the size and importance of data-intensive applications has considerably elevated the complexity of computer architecture. In response, systems typically offer pre-determined behaviors based on heuristics and then expose a large number of configuration parameters that operators can adjust to fit their particular infrastructure. Unfortunately, in practice this leads to a substantial manual tuning effort. In this work, we focus on one of the most impactful tuning decisions in big data systems: the number of executor threads. We first show the impact of I/O contention on workload runtime and present a simple static solution that reduces the number of threads during I/O-bound phases. We then present a more elaborate solution in the form of self-adaptive executors, which continuously monitor the underlying system resources and detect when and where contention occurs. This enables the executors to tune their thread pool sizes dynamically at runtime to achieve the best performance. Our experimental results show that being adaptive can significantly reduce execution time, especially for I/O-intensive applications such as TeraSort and PageRank, which see 34% and 54% reductions in runtime, respectively.
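The abstract describes executors that watch resource usage and shrink or grow their thread pools when I/O contention appears. The sketch below is not the paper's implementation; it is a minimal Java illustration of that general idea, assuming a Linux host (it reads the iowait counter from /proc/stat as the contention signal) and using hard-coded thresholds and step sizes chosen purely for illustration.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/**
 * Illustrative sketch only: a thread pool whose size is periodically adjusted
 * from observed CPU iowait, backing off when I/O contention is high and
 * growing again when contention drops.
 */
public class AdaptiveExecutorSketch {

    // Linux-specific: parse the aggregate "cpu" line of /proc/stat and
    // return { total jiffies, iowait jiffies }.
    private static long[] readCpuCounters() throws Exception {
        String line = Files.readAllLines(Paths.get("/proc/stat")).get(0);
        String[] f = line.trim().split("\\s+");
        long total = 0, iowait = 0;
        for (int i = 1; i < f.length; i++) {
            long v = Long.parseLong(f[i]);
            total += v;
            if (i == 5) iowait = v;   // 5th counter after "cpu" is iowait
        }
        return new long[]{total, iowait};
    }

    public static void main(String[] args) throws Exception {
        int maxThreads = Runtime.getRuntime().availableProcessors();
        int minThreads = Math.max(1, maxThreads / 4);
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(maxThreads);

        ScheduledExecutorService monitor =
                Executors.newSingleThreadScheduledExecutor();
        final long[] prev = readCpuCounters();

        monitor.scheduleAtFixedRate(() -> {
            try {
                long[] cur = readCpuCounters();
                double ioWaitFrac =
                        (double) (cur[1] - prev[1]) / Math.max(1, cur[0] - prev[0]);
                prev[0] = cur[0];
                prev[1] = cur[1];

                int size = pool.getCorePoolSize();
                // Thresholds (0.20 / 0.05) are illustrative, not from the paper.
                if (ioWaitFrac > 0.20 && size > minThreads) {
                    size--;          // I/O-bound phase: back off concurrency
                } else if (ioWaitFrac < 0.05 && size < maxThreads) {
                    size++;          // CPU has headroom: add a thread back
                }
                pool.setCorePoolSize(size);
            } catch (Exception e) {
                // Ignore a failed sample and keep monitoring.
            }
        }, 1, 1, TimeUnit.SECONDS);

        // Submit work to `pool` here; shutdown handling omitted for brevity.
    }
}
```

Excess threads terminate only when they next become idle, so the pool shrinks gracefully rather than interrupting running tasks; how the real system detects contention and chooses thresholds is described in the paper itself.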
Original language: English
Title of host publication: Proceedings of the 20th ACM/IFIP International Middleware Conference
Number of pages: 13
Publication status: Accepted/In press - 13 Sep 2019
Event: ACM/IFIP 20th International Middleware Conference - UC Davis, Davis, CA, United States
Duration: 9 Dec 2019 – 13 Dec 2019
Conference number: 2019
http://2019.middleware-conference.org/

Conference

Conference: ACM/IFIP 20th International Middleware Conference
Abbreviated title: Middleware
Country: United States
City: Davis, CA
Period: 9/12/19 – 13/12/19
Internet address: http://2019.middleware-conference.org/
