While in-memory databases have largely removed I/O as a bottleneck for database operations, loading data from storage into memory remains a significant limiter to end-to-end performance. Snappy is a compression algorithm widely used in the Hadoop ecosystem and in database systems, and is an option in common file formats such as Parquet and ORC. Compression reduces the amount of data that must be transferred to and from storage, saving both storage space and storage bandwidth. While a CPU Snappy decompressor can easily keep up with the bandwidth of a hard disk drive, with NVMe devices attached via high-bandwidth connections such as PCIe Gen4 or OpenCAPI, the decompression speed of a CPU is insufficient. We propose an FPGA-based Snappy decompressor that can process multiple tokens in parallel and operates on each FPGA block RAM independently. Read commands are recycled until the read data is valid, dramatically reducing control complexity. One instance of our decompression engine takes 9% of the LUTs in the XCKU15P FPGA and achieves up to 3 GB/s (5 GB/s) decompression rate on the input (output) side, about an order of magnitude faster than a single CPU thread. Parquet allows multiple pages to be decompressed independently, and instantiating eight of these units on an XCKU15P FPGA can keep up with the highest-bandwidth interfaces.
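To illustrate what the decompressor has to parse, the sketch below is a minimal pure-Python decoder for Snappy's raw block format (varint length preamble, then literal and copy tokens). It is not the paper's FPGA design; it only shows the token structure, and in particular why decoding is inherently serial on a CPU: the position of each tag byte depends on fully decoding the previous token, which is the dependency the proposed engine breaks by processing multiple tokens in parallel.

```python
def snappy_decompress(data: bytes) -> bytes:
    """Decode a raw Snappy block (illustrative sketch, not the FPGA engine)."""
    # Preamble: uncompressed length as a little-endian base-128 varint.
    ulen, shift, i = 0, 0, 0
    while True:
        b = data[i]; i += 1
        ulen |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            break

    out = bytearray()
    while i < len(data):
        tag = data[i]; i += 1
        kind = tag & 0x03            # low 2 bits select the token type
        if kind == 0:                # literal: copy bytes from the input
            length = (tag >> 2) + 1
            if length > 60:          # stored lengths 60..63 mean 1..4 extra bytes
                extra = length - 60
                length = int.from_bytes(data[i:i + extra], 'little') + 1
                i += extra
            out += data[i:i + length]; i += length
        else:                        # copy: back-reference into the output
            if kind == 1:            # 1-byte offset, 3-bit length
                length = ((tag >> 2) & 0x07) + 4
                offset = ((tag >> 5) << 8) | data[i]; i += 1
            elif kind == 2:          # 2-byte little-endian offset
                length = (tag >> 2) + 1
                offset = int.from_bytes(data[i:i + 2], 'little'); i += 2
            else:                    # 4-byte little-endian offset
                length = (tag >> 2) + 1
                offset = int.from_bytes(data[i:i + 4], 'little'); i += 4
            # Byte-by-byte copy so overlapping (RLE-style) matches work.
            for _ in range(length):
                out.append(out[-offset])

    assert len(out) == ulen
    return bytes(out)
```

For example, the 6-byte stream `b'\x08\x04ab\x09\x02'` (length 8, literal "ab", then a copy of 6 bytes at offset 2) decodes to `b'abababab'`. Note that copy tokens read from the already-produced output, which is why a hardware implementation benefits from fast, independent access to block RAMs holding the decompressed history.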
Original language: English
Title of host publication: 2018 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS)
Publisher: IEEE
Pages: 1-2
Number of pages: 2
ISBN (Electronic): 978-1-5386-5562-7
ISBN (Print): 978-1-5386-5563-4
Publication status: Published - 30 Sep 2018
Event: CODES+ISSS: 2018 International Conference on Hardware/Software Codesign and System Synthesis - Torino, Italy
Duration: 30 Sep 2018 - 5 Oct 2018

Conference

Conference: CODES+ISSS: 2018 International Conference on Hardware/Software Codesign and System Synthesis
Abbreviated title: CODES+ISSS
Country: Italy
City: Torino
Period: 30/09/18 - 5/10/18
