{"id":13450017,"url":"https://github.com/fast-pack/dictionary","last_synced_at":"2025-07-11T10:07:10.711Z","repository":{"id":45705736,"uuid":"65499078","full_name":"fast-pack/dictionary","owner":"fast-pack","description":"High-performance dictionary coding","archived":false,"fork":false,"pushed_at":"2017-04-05T14:06:12.000Z","size":173,"stargazers_count":104,"open_issues_count":0,"forks_count":10,"subscribers_count":8,"default_branch":"master","last_synced_at":"2025-07-10T15:48:54.938Z","etag":null,"topics":["integer-compression","simd","simd-instructions"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/fast-pack.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2016-08-11T20:20:58.000Z","updated_at":"2024-12-18T14:39:31.000Z","dependencies_parsed_at":"2022-09-18T20:21:47.156Z","dependency_job_id":null,"html_url":"https://github.com/fast-pack/dictionary","commit_stats":null,"previous_names":["fast-pack/dictionary"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/fast-pack/dictionary","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fast-pack%2Fdictionary","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fast-pack%2Fdictionary/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fast-pack%2Fdictionary/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fast-pack%2Fdictionary/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/fast-pack","download_url":"https://codeload.github.com/fast-pack/dictionar
y/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fast-pack%2Fdictionary/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":264781071,"owners_count":23662786,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["integer-compression","simd","simd-instructions"],"created_at":"2024-07-31T07:00:27.159Z","updated_at":"2025-07-11T10:07:10.694Z","avatar_url":"https://github.com/fast-pack.png","language":"C++","readme":"# dictionary\nHigh-performance dictionary coding\n\nSuppose you want to compress a large array of values with\n(relatively) few distinct values. For example, maybe you have 16 distinct 64-bit\nvalues. Only four bits are needed to store a value in the range [0,16) using\nbinary packing, so if you have long arrays, it is possible to save 60 bits per value (compressing\nthe data by a factor of 16).\n\nWe consider the following (simple) form of dictionary coding. We\nhave a dictionary of 64-bit values (could be pointers) stored\nin an array. In the compression phase, we convert the values to indexes\nand binary pack them. In the decompression phase, we\ntry to recover the dictionary-coded values as fast as possible.\n\nDictionary coding is in common use within database systems (e.g., Oracle, Parquet, and so forth).\n\nWe are going to assume that one has a recent Intel processor\nfor the sake of this experiment.\n\n## Core Idea\n\nIt is tempting, in dictionary coding, to first unpack the indexes to a temporary buffer\nand then run through it and look up the values in the dictionary. 
What if it were possible\nto decode the indexes and look up the values in the dictionary in one step?\nIt is possible with vector instructions, as long as you have access to a ``gather``\ninstruction. Thankfully, recent commodity x64 processors have such an instruction.\n\n## A word on RAM access\n\nThere is no slower processor than an idle processor waiting for the memory\nsubsystem.\n\nWhen working with large data sets, it is tempting to decompress them from RAM\nto RAM, converting gigabytes of compressed data into (many more) gigabytes of\nuncompressed data.\n\nIf the purpose of compression is to keep more of the data close to the CPU,\nthen this is wasteful.\n\nOne should engineer applications so as to work on cache-friendly blocks. For\nexample, if you have an array made of billions of values, instead of decoding them\nall to RAM and then reading them, it is much better to decode them one small block\nat a time. In fact, one would prefer not to decode the data at all:\nworking directly over the compressed data would be ideal.\n\nIf you must decode gigabytes of data to RAM or to disk, then you should expect\nto be wasting enormous quantities of CPU cycles.\n\n## Usage\n\n```bash\nmake \u0026\u0026 make test\n./decodebenchmark\n```\n\n## Experimental results (Skylake, August 24th 2016)\n\nWe find that an AVX2 dictionary decoder can be more than twice as fast as a good scalar decoder\non a recent Intel processor (Skylake) for modest dictionary sizes. Even with large\ndictionaries, the AVX2 gather approach is still remarkably faster. See the results below. We expect results on older\nIntel architectures to be less impressive, because the ``vpgather`` instruction that we use was\nquite slow in its early incarnations.\n\nThe large-dictionary case, as implemented here, is somewhat pessimistic, as it assumes\nthat all values are equally likely. In most instances, a dictionary will have frequent\nvalues that are more likely to be repeated. 
This will reduce the number of cache misses.\n\nAlso, in practice one might limit the size of the dictionary by horizontal partitions.\n\n```bash\n$ ./decodebenchmark\nFor this benchmark, use a recent (Skylake) Intel processor for best results.\nIntel processor:  Skylake     compiler version: 5.3.0 20151204        AVX2 is available.\nUsing array sizes of 8388608 values or 65536 kiB.\ntesting with dictionary of size 2\nActual dict size: 2\n        scalarcodec.uncompress(t,newbuf):  4.00 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.06 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.45 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  1.91 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.15 cycles per decoded value\n\ntesting with dictionary of size 4\nActual dict size: 4\n        scalarcodec.uncompress(t,newbuf):  3.99 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.06 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.46 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  1.91 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.19 cycles per decoded value\n\ntesting with dictionary of size 8\nActual dict size: 8\n        scalarcodec.uncompress(t,newbuf):  3.52 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  2.38 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.49 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  1.93 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.17 cycles per decoded value\n\ntesting with dictionary of size 16\nActual dict size: 16\n        scalarcodec.uncompress(t,newbuf):  4.01 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.08 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.50 cycles per decoded 
value\n  AVXDictCODEC::fastuncompress(t,newbuf):  1.95 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.19 cycles per decoded value\n\ntesting with dictionary of size 32\nActual dict size: 32\n        scalarcodec.uncompress(t,newbuf):  4.02 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.06 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.51 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  1.96 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.18 cycles per decoded value\n\ntesting with dictionary of size 64\nActual dict size: 64\n        scalarcodec.uncompress(t,newbuf):  4.02 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.08 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.54 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  1.98 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.17 cycles per decoded value\n\ntesting with dictionary of size 128\nActual dict size: 128\n        scalarcodec.uncompress(t,newbuf):  3.59 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  2.35 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.55 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  1.99 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.14 cycles per decoded value\n\ntesting with dictionary of size 256\nActual dict size: 256\n        scalarcodec.uncompress(t,newbuf):  4.03 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.10 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.55 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  2.00 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.22 cycles per decoded value\n\ntesting with dictionary of size 512\nActual dict 
size: 512\n        scalarcodec.uncompress(t,newbuf):  4.04 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.11 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.55 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  2.01 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.20 cycles per decoded value\n\ntesting with dictionary of size 1024\nActual dict size: 1024\n        scalarcodec.uncompress(t,newbuf):  4.04 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.11 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.57 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  2.04 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.18 cycles per decoded value\n\ntesting with dictionary of size 2048\nActual dict size: 2048\n        scalarcodec.uncompress(t,newbuf):  4.08 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.15 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.67 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  2.05 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.22 cycles per decoded value\n\ntesting with dictionary of size 4096\nActual dict size: 4096\n        scalarcodec.uncompress(t,newbuf):  4.14 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.33 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  3.69 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  2.12 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.32 cycles per decoded value\n\ntesting with dictionary of size 8192\nActual dict size: 8192\n        scalarcodec.uncompress(t,newbuf):  4.35 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.65 cycles per decoded value\n           
avxcodec.uncompress(t,newbuf):  3.85 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  2.28 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  1.67 cycles per decoded value\n\ntesting with dictionary of size 16384\nActual dict size: 16384\n        scalarcodec.uncompress(t,newbuf):  4.51 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.95 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  4.07 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  2.55 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  2.12 cycles per decoded value\n\ntesting with dictionary of size 32768\nActual dict size: 32768\n        scalarcodec.uncompress(t,newbuf):  4.88 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  3.84 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  4.89 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.52 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.02 cycles per decoded value\n\ntesting with dictionary of size 65536\nActual dict size: 65536\n        scalarcodec.uncompress(t,newbuf):  7.14 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  5.47 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.68 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  5.18 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  4.53 cycles per decoded value\n\ntesting with dictionary of size 131072\nActual dict size: 131072\n        scalarcodec.uncompress(t,newbuf):  7.96 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  6.05 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  7.53 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  6.01 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  5.43 
cycles per decoded value\n\ntesting with dictionary of size 262144\nActual dict size: 262144\n        scalarcodec.uncompress(t,newbuf):  8.30 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  6.35 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  8.08 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  6.46 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  5.66 cycles per decoded value\n\ntesting with dictionary of size 524288\nActual dict size: 524288\n        scalarcodec.uncompress(t,newbuf):  8.48 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  6.39 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  8.09 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  6.44 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  5.83 cycles per decoded value\n\ntesting with dictionary of size 1048576\nActual dict size: 1048235\n        scalarcodec.uncompress(t,newbuf):  11.85 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  10.53 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  11.65 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  8.47 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  8.07 cycles per decoded value\n```\n\n## Experimental results (Knights Landing, August 24th 2016)\n\nWe find that an AVX-512 dictionary decoder can be more than twice as fast as an AVX dictionary\ndecoder, which is in turn twice as fast as a scalar decoder\non a recent Intel processor (Knights Landing) for modest dictionary sizes. 
\nThe case with large dictionary as implemented here is somewhat pessimistic as it assumes\nthat all values are equally likely.\n\n\n```bash\n$ ./decodebenchmark\nFor this benchmark, use a recent (Skylake) Intel processor for best results.\nIntel processor:  UNKNOWN     compiler version: 5.3.0        AVX2 is available.\nUsing array sizes of 8388608 values or 65536 kiB.\ntesting with dictionary of size 2\nActual dict size: 2\n        scalarcodec.uncompress(t,newbuf):  7.75 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.39 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.26 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.22 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.06 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  1.48 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.14 cycles per decoded value\n\ntesting with dictionary of size 4\nActual dict size: 4\n        scalarcodec.uncompress(t,newbuf):  7.83 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.49 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.35 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.23 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.10 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  1.49 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.21 cycles per decoded value\n\ntesting with dictionary of size 8\nActual dict size: 8\n        scalarcodec.uncompress(t,newbuf):  7.27 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  6.99 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.17 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.23 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.10 cycles per 
decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  1.59 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.25 cycles per decoded value\n\ntesting with dictionary of size 16\nActual dict size: 16\n        scalarcodec.uncompress(t,newbuf):  7.98 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.65 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.32 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.23 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.16 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  1.68 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.34 cycles per decoded value\n\ntesting with dictionary of size 32\nActual dict size: 32\n        scalarcodec.uncompress(t,newbuf):  7.92 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.63 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.27 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.23 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.19 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  1.65 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.43 cycles per decoded value\n\ntesting with dictionary of size 64\nActual dict size: 64\n        scalarcodec.uncompress(t,newbuf):  8.05 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.76 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.32 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.31 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.25 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  1.85 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.66 cycles per decoded value\n\ntesting with dictionary of size 
128\nActual dict size: 128\n        scalarcodec.uncompress(t,newbuf):  6.64 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  6.36 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.19 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.34 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.28 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  1.83 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.57 cycles per decoded value\n\ntesting with dictionary of size 256\nActual dict size: 256\n        scalarcodec.uncompress(t,newbuf):  8.07 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.87 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.39 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.39 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.35 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  1.95 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.69 cycles per decoded value\n\ntesting with dictionary of size 512\nActual dict size: 512\n        scalarcodec.uncompress(t,newbuf):  8.07 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.87 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.32 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.52 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.48 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  2.04 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.76 cycles per decoded value\n\ntesting with dictionary of size 1024\nActual dict size: 1024\n        scalarcodec.uncompress(t,newbuf):  8.22 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.97 cycles per decoded value\n         
  avxcodec.uncompress(t,newbuf):  6.43 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.63 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.57 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  2.05 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.83 cycles per decoded value\n\ntesting with dictionary of size 2048\nActual dict size: 2048\n        scalarcodec.uncompress(t,newbuf):  7.97 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  7.69 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.37 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.76 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.64 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  2.11 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  1.91 cycles per decoded value\n\ntesting with dictionary of size 4096\nActual dict size: 4096\n        scalarcodec.uncompress(t,newbuf):  8.53 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  8.20 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.67 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.58 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.56 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  2.55 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  2.35 cycles per decoded value\n\ntesting with dictionary of size 8192\nActual dict size: 8192\n        scalarcodec.uncompress(t,newbuf):  8.66 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  8.27 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.79 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.92 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  
3.86 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  2.80 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  2.54 cycles per decoded value\n\ntesting with dictionary of size 16384\nActual dict size: 16384\n        scalarcodec.uncompress(t,newbuf):  8.85 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  8.55 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.95 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  4.05 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.87 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  3.14 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  2.96 cycles per decoded value\n\ntesting with dictionary of size 32768\nActual dict size: 32768\n        scalarcodec.uncompress(t,newbuf):  6.75 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  6.81 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  6.94 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  3.68 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  3.58 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  3.41 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  3.24 cycles per decoded value\n\ntesting with dictionary of size 65536\nActual dict size: 65536\n        scalarcodec.uncompress(t,newbuf):  11.75 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  13.76 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  9.64 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  5.29 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  5.50 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  4.54 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  4.66 cycles per decoded 
value\n\ntesting with dictionary of size 131072\nActual dict size: 131072\n        scalarcodec.uncompress(t,newbuf):  19.07 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  19.53 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  17.02 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  11.02 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  11.01 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  8.03 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  8.01 cycles per decoded value\n\ntesting with dictionary of size 262144\nActual dict size: 262144\n        scalarcodec.uncompress(t,newbuf):  22.84 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  23.12 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  20.63 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  16.57 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  16.45 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  13.68 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  13.69 cycles per decoded value\n\ntesting with dictionary of size 524288\nActual dict size: 524288\n        scalarcodec.uncompress(t,newbuf):  22.34 cycles per decoded value\n   decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  22.54 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  20.36 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  16.30 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  16.34 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  14.91 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  14.94 cycles per decoded value\n\ntesting with dictionary of size 1048576\nActual dict size: 1048235\n        scalarcodec.uncompress(t,newbuf):  21.93 cycles per decoded value\n   
decodetocache(\u0026sc, \u0026t,newbuf,bufsize):  22.11 cycles per decoded value\n           avxcodec.uncompress(t,newbuf):  19.91 cycles per decoded value\n  AVXDictCODEC::fastuncompress(t,newbuf):  16.33 cycles per decoded value\n     AVXdecodetocache(\u0026t,newbuf,bufsize):  16.30 cycles per decoded value\nAVX512DictCODEC::fastuncompress(t,newbuf):  15.32 cycles per decoded value\n  AVX512decodetocache(\u0026t,newbuf,bufsize):  15.31 cycles per decoded value\n\n```\n\n## Limitations\n- We support just one dictionary. In practice, one might want to use horizontal partitions.\n- We do not model realistic usage of the dictionary values (we use a uniform distribution).\n- For simplicity, we assume that the dictionary is made of 64-bit words. This is hard-coded, but it is not a fundamental limitation: the code would be faster with smaller words.\n- This code is not meant to be used in production. It is a demo.\n- This code makes up its own convenient format. It is not meant to plug as-is into an existing framework.\n- We assume that the arrays are large. If you have tiny arrays... well...\n- We effectively measure steady-state throughput. 
So we ignore costs such as loading the dictionary into CPU cache.\n\n## Authors\nDaniel Lemire and Eric Daniel (motivated by ``parquet-cpp``)\n\n## Other relevant libraries\n\n* SIMDCompressionAndIntersection: A C++ library to compress and intersect sorted lists of integers using SIMD instructions https://github.com/lemire/SIMDCompressionAndIntersect\n* The FastPFOR C++ library: Fast integer compression https://github.com/lemire/FastPFor\n* LittleIntPacker: C library to pack and unpack short arrays of integers as fast as possible https://github.com/lemire/LittleIntPacker\n* The SIMDComp library: A simple C library for compressing lists of integers using binary packing https://github.com/lemire/simdcomp\n* StreamVByte: Fast integer compression in C using the StreamVByte codec https://github.com/lemire/streamvbyte\n* MaskedVByte: Fast decoder for VByte-compressed integers https://github.com/lemire/MaskedVByte\n* CSharpFastPFOR: A C# integer compression library https://github.com/Genbox/CSharpFastPFOR\n* JavaFastPFOR: A Java integer compression library https://github.com/lemire/JavaFastPFOR\n* Encoding: Integer Compression Libraries for Go https://github.com/zhenjl/encoding\n* FrameOfReference is a C++ library dedicated to frame-of-reference (FOR) compression: https://github.com/lemire/FrameOfReference\n* libvbyte: A fast implementation for varbyte 32bit/64bit integer compression https://github.com/cruppstahl/libvbyte\n* TurboPFor is a C library that offers lots of interesting optimizations. Well worth checking! 
(GPL license) https://github.com/powturbo/TurboPFor\n* Oroch is a C++ library that offers a usable API (MIT license) https://github.com/ademakov/Oroch\n\n","funding_links":[],"categories":["Parsing"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffast-pack%2Fdictionary","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffast-pack%2Fdictionary","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffast-pack%2Fdictionary/lists"}