
In real-world NoSQL deployments, users must trade off CPU, memory, I/O bandwidth, and storage space to meet their performance and efficiency goals. Data compression is a vital technique for improving storage efficiency, but reading compressed data increases response time. Compressed data stores therefore rely heavily on memory caching to speed up read operations. However, because large DRAM capacity is expensive, NoSQL databases have become costly to deploy and hard to scale. In our work, we present a persistent caching mechanism for Apache Cassandra on a high-throughput, low-latency FPGA-based NVMe Flash accelerator (CAPI-Flash), replacing Cassandra's in-memory cache. Because flash is dramatically less expensive per byte than DRAM, our caching mechanism gives Apache Cassandra access to a large caching layer at lower cost. The experimental results show that for read-intensive workloads, our caching layer improves throughput by up to 85% and reduces CPU usage by 25% compared to default Cassandra.
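The idea of backing a small DRAM cache with a much larger, cheaper flash tier can be pictured as a two-tier read path. The sketch below is a simplified illustration under assumed semantics (the `TwoTierCache` class and its names are hypothetical, not the paper's CAPI-Flash implementation): DRAM evictions spill to flash rather than being dropped, and flash hits are promoted back to DRAM.

```python
from collections import OrderedDict

class TwoTierCache:
    """Illustrative two-tier read cache: a small, fast DRAM tier backed
    by a large flash tier. Hypothetical sketch, not the paper's code."""

    def __init__(self, dram_capacity, flash_capacity):
        self.dram = OrderedDict()            # fast, small tier (LRU order)
        self.flash = OrderedDict()           # slower, much larger tier
        self.dram_capacity = dram_capacity
        self.flash_capacity = flash_capacity

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        # Entries overflowing DRAM spill to flash instead of being dropped.
        while len(self.dram) > self.dram_capacity:
            old_key, old_val = self.dram.popitem(last=False)
            self.flash[old_key] = old_val
            self.flash.move_to_end(old_key)
        while len(self.flash) > self.flash_capacity:
            self.flash.popitem(last=False)   # drop least-recently-used entry

    def get(self, key):
        if key in self.dram:                 # DRAM hit: cheapest path
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.flash:                # flash hit: promote to DRAM
            value = self.flash.pop(key)
            self.put(key, value)
            return value
        return None                          # miss: caller reads from disk
```

Because the flash tier is far larger per dollar than DRAM, most reads that would miss a DRAM-only cache can still be served from flash, which is the cost argument the abstract makes.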
Original language: English
Title of host publication: 2018 IEEE 11th International Conference on Cloud Computing (CLOUD)
Editors: R. Bilof
Place of publication: Piscataway
Publisher: IEEE
Pages: 220-228
Number of pages: 9
ISBN (Electronic): 978-1-5386-7235-8
ISBN (Print): 978-1-5386-7236-5
Publication status: Published - 2018
Event: CLOUD 2018 IEEE: 11th International Conference on Cloud Computing (CLOUD) - San Francisco, United States
Duration: 10 Sep 2018 - 10 Sep 2018

Conference

Conference: CLOUD 2018 IEEE
Country: United States
City: San Francisco
Period: 10/09/18 - 10/09/18

Research areas

  • Apache Cassandra, CAPI, CAPI-Flash, Cloud Computing, Distributed Databases, Flash Storage, High-Performance Caching, High-Throughput Caching, NoSQL Databases, NVMe Flash, Persistent Caching, Power Systems, Read Cache, Solid State Drives, Storage Accelerators

ID: 67045818