
  • Robert Birke
  • Juan F. Perez
  • Sonia Ben Mokhtar
  • Navaneeth Rameshan
  • Lydia Y. Chen

Key-value data stores struggle to trim user-perceived (tail) latency because workloads exhibit a skewed number of key-value pairs per request, commonly retrieved via a multiget operation, i.e., all keys fetched at the same time. In this paper we present Chisel, a novel client-side solution that efficiently reshapes query sizes at the data store by adaptively splitting big requests into chunks, to reap the benefits of parallelism, and merging small requests into a single query, to amortize per-request latency overheads. We derive a novel layered queueing model that can quickly and approximately steer Chisel's decisions. We extensively evaluate Chisel on memcached clusters hosted on a testbed, across a large number of scenarios with different workloads and system configurations. Our evaluation results show that Chisel turns the inherent high variability of requests into a judicious operational region, yielding significant gains in the mean and 95th percentile of user-perceived latency compared to the state-of-the-art query processing policy.
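The split-and-merge reshaping described above can be sketched in a few lines. This is a minimal illustration, not Chisel's actual implementation: the function names, the fixed chunk size, and the batch-size cap are assumptions, and in the paper these decisions are steered adaptively by the layered queueing model.

```python
def split_multiget(keys, chunk_size):
    """Split a large multiget key list into chunks of at most chunk_size,
    so the chunks can be issued in parallel to the data store."""
    return [keys[i:i + chunk_size] for i in range(0, len(keys), chunk_size)]

def merge_requests(pending_requests, max_batch):
    """Merge the keys of several small pending requests into batched
    queries of up to max_batch keys, amortizing per-request overheads."""
    merged, batch = [], []
    for request_keys in pending_requests:
        for key in request_keys:
            batch.append(key)
            if len(batch) == max_batch:
                merged.append(batch)
                batch = []
    if batch:
        merged.append(batch)
    return merged
```

A big 10-key multiget with `chunk_size=4` would be split into sub-queries of 4, 4, and 2 keys, while five single-key requests with `max_batch=3` would be merged into two batched queries.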

Original language: English
Title of host publication: Proceedings - 2019 IEEE International Conference on Autonomic Computing, ICAC 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 42-51
Number of pages: 10
ISBN (Electronic): 9781728124117
Publication status: Published - 1 Jun 2019
Event: 16th IEEE International Conference on Autonomic Computing, ICAC 2019 - Umea, Sweden
Duration: 16 Jun 2019 - 20 Jun 2019

Conference

Conference: 16th IEEE International Conference on Autonomic Computing, ICAC 2019
Country: Sweden
City: Umea
Period: 16/06/19 - 20/06/19

    Research areas

  • key-value stores, split-merge latency model
