# quant.cpp

![quant.cpp Hero](docs/assets/hero.png)

A minimal C inference engine for local LLMs. 33K LOC. No external libraries.

[![License](https://img.shields.io/badge/license-Apache%202.0-blue)]()
[![CI](https://img.shields.io/github/actions/workflow/status/quantumaikr/quant.cpp/ci.yml?label=CI)]()
[![Tests](https://img.shields.io/badge/tests-34%20pass-brightgreen)]()

---

## 4x Longer Context on the Same Hardware

Delta KV compression handles 4x more context with no quality loss.

| Hardware | Model | Before | After | Ratio |
|----------|-------|--------|-------|-------|
| 8GB laptop | Llama 8B (Q4) | 16K tokens | 61K tokens | 3.8x |
| 16GB Mac Air | SmolLM2 1.7B | 78K tokens | 298K tokens | 3.8x |
| 24GB RTX 3090 | Llama 8B (Q4) | 147K tokens | 559K tokens | 3.8x |

```bash
./quant model.gguf -p "hello" -k uniform_4b -v q4
```

---

## Why quant.cpp

|  | quant.cpp | llama.cpp |
|--|-----------|-----------|
| Codebase | 33K LOC, pure C | 250K+ LOC, C++ |
| KV compression quality | PPL -3.2% (better than FP32) | PPL +10.6% |
| Dependencies | zero (libc/libm only) | - |
| Design goal | readable, understandable, hackable | feature completeness |

Same model (SmolLM2 1.7B), same benchmark. llama.cpp's Q4_0 KV degrades quality; quant.cpp actually improves it.

---

## Quick Start

```bash
git clone https://github.com/quantumaikr/quant.cpp && cd quant.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j$(nproc)

# Run inference
./build/quant model.gguf -p "hello"

# KV compression (4-bit K + Q4 V, 3.8x)
./build/quant model.gguf -p "hello" -k uniform_4b -v q4

# Delta compression (3-bit K + Q4 V, 4.3x)
./build/quant model.gguf -p "hello" -k uniform_3b -v q4 --delta

# Measure PPL
./build/quant model.gguf --ppl input.txt -k uniform_4b -v q4
```

---

## KV Cache Compression

### Compression Modes

| Configuration | Ratio | PPL vs FP32 | Use case |
|---------------|-------|-------------|----------|
| delta + 3b K + Q4 V | ~4.3x | -3.2% | maximum compression |
| delta + 4b K + Q4 V | ~3.8x | -12.2% | best quality |
| uniform 4b K + Q4 V | 3.8x | -7.8% | simple, no delta overhead |
| uniform 4b K + FP16 V | 1.6x | +0.0% | lossless |

### How Delta Compression Works

A standard KV cache stores each key as-is. Delta compression stores the *difference* between adjacent keys, like P-frames and I-frames in video.

Adjacent keys in a transformer differ by only ~30% of the absolute value range.
This small range is why 3-bit quantization is enough. Without delta, 3-bit gives PPL +62%; with delta, PPL -3.2%.

An FP32 I-frame is stored every 64 tokens to prevent accumulated drift.

### Full PPL Results (SmolLM2 1.7B, 999 tokens)

| Configuration | PPL | vs FP32 | Notes |
|---------------|-----|---------|-------|
| FP32 baseline | 14.58 | -- | reference |
| delta + 4b K + Q4 V | 12.80 | -12.2% | best quality |
| delta + 3b K + Q4 V | 14.11 | -3.2% | best compression |
| uniform 4b K + Q4 V | 13.44 | -7.8% | verified |
| uniform 3b K + Q4 V (no delta) | 23.62 | +62% | delta required |

### Per-Model Validation (4b K + Q4 V)

| Model | PPL change |
|-------|------------|
| SmolLM2 1.7B | -1.6% |
| Qwen3.5 0.8B | +0.9% |
| Qwen3.5 4B | +0.6% |

---

## Supported Models

| Model | Architecture | Parameters | KV validated |
|-------|--------------|------------|--------------|
| SmolLM2-1.7B | Llama | 1.7B | PPL -1.6% |
| Qwen3.5-0.8B | Qwen3.5 (DeltaNet) | 752M | PPL +0.9% |
| Qwen3.5-4B | Qwen3.5 (DeltaNet) | 4B | PPL +0.6% |
| Qwen3.5-35B-A3B | Qwen2-MoE | 35B (3B active) | 4-bit K verified |
| Gemma 3 270M | Gemma 3 | 270M | 4-bit K verified |
| Gemma 4 E2B | Gemma 4 | 2B | WIP |

Architectures: Llama/Qwen3.5 (shared path), Gemma 3/4 (sliding + full attention), Qwen2-MoE.

---

## FAQ

**How does delta compression work?**

Instead of storing each key directly, it stores `key[t] - reconstruct(key[t-1])`. Adjacent keys in a transformer are highly correlated, so the delta's range is ~30% of the absolute values. A full-precision I-frame every 64 tokens prevents drift.

**How is this different from llama.cpp?**

quant.cpp is a standalone inference engine (33K LOC, pure C), not a fork or a wrapper. The key difference is KV compression: llama.cpp's Q4_0 gives PPL +10.6% on SmolLM2 1.7B, while quant.cpp's 4-bit K gives PPL +0.0% on the same model.

**What about going below 3 bits?**

We tested extensively: 2-bit delta, sub-block scaling, multi-hash, error feedback, NF2, online SVD, and more. No approach reached acceptable quality. The fundamental barrier: a per-step cosine similarity of 0.997 compounds to 0.885 after 200 steps.
3-bit + delta is the practical minimum.

---

**[QuantumAI](https://quantumai.kr)** | [GitHub](https://github.com/quantumaikr/quant.cpp)