
Snappy is a (de)compression algorithm widely used in big data applications. Such data compression techniques have proven successful at saving storage space and reducing the amount of data transferred to and from storage devices. In this paper, we present a fine-grained parallel Snappy decompressor on FPGAs running under a relaxed execution model that addresses the following main challenges of existing solutions. First, existing designs either process only one token per cycle, or process multiple tokens per cycle at low area efficiency and/or low clock frequency. Second, the strong read-after-write data dependencies during decompression introduce stalls that pull down throughput.
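To make the token-level challenges concrete, below is a minimal sequential sketch of Snappy decompression following the public Snappy format (literal tokens and back-reference copy tokens). This is an illustrative software model, not the paper's FPGA design; all function names are our own. The byte-by-byte copy loop at the end exhibits the read-after-write dependency the abstract refers to: a copy may read bytes that the same copy has only just produced.

```python
def uvarint(buf, i):
    # Decode the little-endian base-128 varint that prefixes a Snappy
    # stream with the uncompressed length.
    shift = result = 0
    while True:
        b = buf[i]; i += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i
        shift += 7

def snappy_decompress(buf):
    """Minimal sequential Snappy decoder (illustrative sketch)."""
    n, i = uvarint(buf, 0)
    out = bytearray()
    while i < len(buf):
        tag = buf[i]; i += 1
        kind = tag & 0x03
        if kind == 0:                       # literal token
            length = (tag >> 2) + 1
            if length > 60:                 # long literal: length in next 1-4 bytes
                nbytes = length - 60
                length = int.from_bytes(buf[i:i + nbytes], "little") + 1
                i += nbytes
            out += buf[i:i + length]; i += length
        else:                               # copy token (back-reference)
            if kind == 1:                   # 1-byte offset, length 4-11
                length = ((tag >> 2) & 0x07) + 4
                offset = ((tag >> 5) << 8) | buf[i]; i += 1
            else:                           # 2- or 4-byte little-endian offset
                length = (tag >> 2) + 1
                nbytes = 2 if kind == 2 else 4
                offset = int.from_bytes(buf[i:i + nbytes], "little"); i += nbytes
            # Read-after-write dependency: when offset < length, the copy
            # reads bytes it has just written, forcing byte-serial execution
            # here (and stalls in a naive parallel hardware pipeline).
            for _ in range(length):
                out.append(out[-offset])
    assert len(out) == n
    return bytes(out)
```

For example, `b"\x0b\x14hello \x05\x06"` (length preamble, a 6-byte literal, then a copy of length 5 at offset 6) decodes to `b"hello hello"`; an overlapping copy such as offset 1 with length 5 replicates the previous byte and is the dependency-heavy case.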
Original language: English
Title of host publication: 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)
Subtitle of host publication: Proceedings
Publisher: IEEE
Pages: 335-335
Number of pages: 1
ISBN (Electronic): 978-1-7281-1131-5
ISBN (Print): 978-1-7281-1132-2
DOIs
Publication status: Published - 2019
Event: 27th Annual IEEE International Symposium on Field-Programmable Custom Computing Machines, FCCM 2019 - San Diego, United States
Duration: 28 Apr 2019 - 1 May 2019

Conference

Conference: 27th Annual IEEE International Symposium on Field-Programmable Custom Computing Machines, FCCM 2019
Country: United States
City: San Diego
Period: 28/04/19 - 1/05/19

Research areas

  • FPGA, Fine-grained Parallelism, Snappy Decompressor

ID: 55478564