{"id":19361031,"url":"https://github.com/fminference/h2o","last_synced_at":"2025-04-05T01:10:01.160Z","repository":{"id":186198876,"uuid":"652462927","full_name":"FMInference/H2O","owner":"FMInference","description":"[NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.","archived":false,"fork":false,"pushed_at":"2024-08-01T19:41:25.000Z","size":41030,"stargazers_count":433,"open_issues_count":35,"forks_count":54,"subscribers_count":5,"default_branch":"main","last_synced_at":"2025-03-29T00:12:38.674Z","etag":null,"topics":["gpt-3","heavy-hitters","high-throughput","kv-cache","large-language-models","sparsity"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/FMInference.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-06-12T06:03:19.000Z","updated_at":"2025-03-24T11:54:47.000Z","dependencies_parsed_at":"2024-11-10T07:31:33.208Z","dependency_job_id":null,"html_url":"https://github.com/FMInference/H2O","commit_stats":null,"previous_names":["fminference/h2o"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FMInference%2FH2O","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FMInference%2FH2O/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FMInference%2FH2O/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FMInference%2FH2O/manifests","owner_url":"https://repos.e
cosyste.ms/api/v1/hosts/GitHub/owners/FMInference","download_url":"https://codeload.github.com/FMInference/H2O/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247271532,"owners_count":20911587,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["gpt-3","heavy-hitters","high-throughput","kv-cache","large-language-models","sparsity"],"created_at":"2024-11-10T07:20:16.997Z","updated_at":"2025-04-05T01:09:56.146Z","avatar_url":"https://github.com/FMInference.png","language":"Python","readme":"# H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models\n\n[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)\n\nCode for the paper \"**H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models**\"\n\nZhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark Barrett, Zhangyang Wang, Beidi Chen\n\n## Overview\n\nLarge Language Models (LLMs), despite their recent impressive accomplishments, are notably cost-prohibitive to deploy, particularly for applications involving long-content generation, such as dialogue\nsystems and story writing. Often, a large amount of transient state information, referred to as the KV\ncache, is stored in GPU memory in addition to model parameters, scaling linearly with the sequence\nlength and batch size. 
In this paper, we introduce a novel approach for implementing the KV cache which\nsignificantly reduces its memory footprint. Our approach is based on the noteworthy observation that a\nsmall portion of tokens contributes most of the value when computing attention scores. We call these\ntokens Heavy Hitters (H2). Through a comprehensive investigation, we find that (i) the emergence of H2\nis natural and strongly correlates with the frequent co-occurrence of tokens in the text, and (ii) removing\nthem results in significant performance degradation. Based on these insights, we propose Heavy Hitter\nOracle (H2O), a KV cache eviction policy that dynamically retains a balance of recent and H2 tokens. We\nformulate the KV cache eviction as a dynamic submodular problem and prove (under mild assumptions)\na theoretical guarantee for our novel eviction algorithm which could help guide future work. We validate\nthe accuracy of our algorithm with OPT, LLaMA, and GPT-NeoX across a wide range of tasks. Our\nimplementation of H2O with 20% heavy hitters improves the throughput over three leading inference\nsystems, DeepSpeed Zero-Inference, Hugging Face Accelerate, and FlexGen, by up to 29×, 29×, and 3×\non OPT-6.7B and OPT-30B. With the same batch size, H2O can reduce the latency by up to 1.9×.\n\n\u003cimg src = \"Figs/h2o.jpg\" align = \"center\" width=\"100%\" height=\"100%\"\u003e\n\n## Content\n\nWe provide two codebases that implement the heavy-hitter oracle for efficient generative inference of large language models:\n\n- [h2o_flexgen](h2o_flexgen/README.md): Achieves higher throughput for LLM generation; the code is based on [FlexGen](https://github.com/FMInference/FlexGen). \n- [h2o_hf](h2o_hf): Tests the performance on different benchmarks; the code is based on [Hugging Face](https://github.com/huggingface/transformers). 
Both simulation code (masking the attention matrix) and a real KV-dropping implementation are provided (please refer to h2o_hf/utils_real_drop).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffminference%2Fh2o","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffminference%2Fh2o","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffminference%2Fh2o/lists"}
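The README above describes the core H2O idea: a KV cache eviction policy that keeps a budget of recent tokens plus the "heavy hitter" tokens that have received the most accumulated attention. A minimal Python sketch of that selection rule follows; it is an illustration under stated assumptions, not the repository's implementation, and the function name and budget parameters (`num_heavy`, `num_recent`) are hypothetical.

```python
def h2o_keep_indices(attn_scores, num_heavy, num_recent):
    """Return sorted indices of KV-cache entries to retain under an
    H2O-style policy.

    attn_scores: per-token accumulated attention received so far
                 (a proxy for heavy-hitter importance).
    num_heavy:   budget for heavy-hitter (H2) tokens.
    num_recent:  budget for the most recent tokens (local window).

    Illustrative sketch only; the real H2O code operates on attention
    matrices inside the model's forward pass.
    """
    n = len(attn_scores)
    # Always retain the most recent tokens.
    recent = set(range(max(0, n - num_recent), n))
    # Among older tokens, retain those with the largest accumulated
    # attention scores: the heavy hitters (H2).
    older = [i for i in range(n) if i not in recent]
    older.sort(key=lambda i: attn_scores[i], reverse=True)
    heavy = set(older[:num_heavy])
    # Everything outside this set would be evicted from the KV cache.
    return sorted(recent | heavy)

# Example: 10 cached tokens, budget of 2 heavy hitters + 3 recent.
scores = [5.0, 0.1, 9.0, 0.2, 0.3, 7.0, 0.1, 0.2, 0.1, 0.4]
print(h2o_keep_indices(scores, num_heavy=2, num_recent=3))
# → [2, 5, 7, 8, 9]
```

Tokens 7-9 survive as the recent window, while tokens 2 and 5 survive as heavy hitters because of their large accumulated attention; the rest are eviction candidates, which is how H2O holds the cache to a fixed fraction (e.g. the 20% heavy-hitter budget quoted in the README) of the full sequence length.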