{"id":17760765,"url":"https://github.com/microsoft/bitnet","last_synced_at":"2025-05-14T13:00:18.793Z","repository":{"id":258612628,"uuid":"838253246","full_name":"microsoft/BitNet","owner":"microsoft","description":"Official inference framework for 1-bit LLMs","archived":false,"fork":false,"pushed_at":"2025-04-29T05:11:50.000Z","size":2027,"stargazers_count":18219,"open_issues_count":109,"forks_count":1320,"subscribers_count":169,"default_branch":"main","last_synced_at":"2025-05-07T12:45:10.860Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/microsoft.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-08-05T09:07:38.000Z","updated_at":"2025-05-07T12:00:59.000Z","dependencies_parsed_at":"2024-11-16T18:00:26.943Z","dependency_job_id":"7fc6e6ce-c6c9-4e5a-932a-7cfd88d1ba6f","html_url":"https://github.com/microsoft/BitNet","commit_stats":null,"previous_names":["microsoft/bitnet"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FBitNet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FBitNet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FBitNet/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/microsoft%2FBitNet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/microsoft","download_url":"https://codeload.github.com/microsoft/BitNet/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254149597,"owners_count":22022846,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-10-26T19:12:39.554Z","updated_at":"2025-05-14T13:00:18.711Z","avatar_url":"https://github.com/microsoft.png","language":"C++","readme":"# bitnet.cpp\n[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)\n![version](https://img.shields.io/badge/version-1.0-blue)\n\n[\u003cimg src=\"./assets/header_model_release.png\" alt=\"BitNet Model on Hugging Face\" width=\"800\"/\u003e](https://huggingface.co/microsoft/BitNet-b1.58-2B-4T)\n\nTry it out via this [demo](https://bitnet-demo.azurewebsites.net/), or [build and run](https://github.com/microsoft/BitNet?tab=readme-ov-file#build-from-source) it on your own CPU.\n\nbitnet.cpp is the official inference framework for 1-bit LLMs (e.g., BitNet b1.58). 
The first release of bitnet.cpp supports inference on CPUs. bitnet.cpp achieves speedups of **1.37x** to **5.07x** on ARM CPUs, with larger models experiencing greater performance gains. Additionally, it reduces energy consumption by **55.4%** to **70.0%**, further boosting overall efficiency. On x86 CPUs, speedups range from **2.37x** to **6.17x** with energy reductions between **71.9%** and **82.2%**. Furthermore, bitnet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving speeds comparable to human reading (5-7 tokens per second), significantly enhancing the potential for running LLMs on local devices. Please refer to the [technical report](https://arxiv.org/abs/2410.16144) for more details.

<img src="./assets/m2_performance.jpg" alt="m2_performance" width="800"/>
<img src="./assets/intel_performance.jpg" alt="intel_performance" width="800"/>

>The tested models are dummy setups used in a research context to demonstrate the inference performance of bitnet.cpp.

## Demo

A demo of bitnet.cpp running a BitNet b1.58 3B model on Apple M2:

https://github.com/user-attachments/assets/7f46b736-edec-4828-b809-4be780a3e5b1

## What's New:
- 04/14/2025 [BitNet Official 2B Parameter Model on Hugging Face](https://huggingface.co/microsoft/BitNet-b1.58-2B-4T) ![NEW](https://img.shields.io/badge/NEW-red)
- 02/18/2025 [Bitnet.cpp: Efficient Edge Inference for Ternary LLMs](https://arxiv.org/abs/2502.11880)
- 11/08/2024 [BitNet a4.8: 4-bit Activations for 1-bit LLMs](https://arxiv.org/abs/2411.04965)
- 10/21/2024 [1-bit AI Infra: Part 1.1, Fast and Lossless BitNet b1.58 Inference on CPUs](https://arxiv.org/abs/2410.16144)
- 10/17/2024 bitnet.cpp 1.0 released.
- 03/21/2024 [The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ](https://github.com/microsoft/unilm/blob/master/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf)
- 02/27/2024 [The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/abs/2402.17764)
- 10/17/2023 [BitNet: Scaling 1-bit Transformers for Large Language Models](https://arxiv.org/abs/2310.11453)

## Acknowledgements

This project is based on the [llama.cpp](https://github.com/ggerganov/llama.cpp) framework. We would like to thank all the authors for their contributions to the open-source community. bitnet.cpp's kernels are built on top of the Lookup Table methodologies pioneered in [T-MAC](https://github.com/microsoft/T-MAC/). For inference of general low-bit LLMs beyond ternary models, we recommend using T-MAC.

## Official Models

Kernel support (I2_S / TL1 / TL2) per CPU architecture:

| Model | Parameters | CPU | I2_S | TL1 | TL2 |
|-------|------------|-----|------|-----|-----|
| [BitNet-b1.58-2B-4T](https://huggingface.co/microsoft/BitNet-b1.58-2B-4T) | 2.4B | x86 | ✅ | ❌ | ✅ |
|  |  | ARM | ✅ | ✅ | ❌ |

## Supported Models
❗️**We use existing 1-bit LLMs available on [Hugging Face](https://huggingface.co/) to demonstrate the inference capabilities of bitnet.cpp. We hope the release of bitnet.cpp will inspire the development of 1-bit LLMs in large-scale settings in terms of model size and training tokens.**

Kernel support (I2_S / TL1 / TL2) per CPU architecture:

| Model | Parameters | CPU | I2_S | TL1 | TL2 |
|-------|------------|-----|------|-----|-----|
| [bitnet_b1_58-large](https://huggingface.co/1bitLLM/bitnet_b1_58-large) | 0.7B | x86 | ✅ | ❌ | ✅ |
|  |  | ARM | ✅ | ✅ | ❌ |
| [bitnet_b1_58-3B](https://huggingface.co/1bitLLM/bitnet_b1_58-3B) | 3.3B | x86 | ❌ | ❌ | ✅ |
|  |  | ARM | ❌ | ✅ | ❌ |
| [Llama3-8B-1.58-100B-tokens](https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-100B-tokens) | 8.0B | x86 | ✅ | ❌ | ✅ |
|  |  | ARM | ✅ | ✅ | ❌ |
| [Falcon3 Family](https://huggingface.co/collections/tiiuae/falcon3-67605ae03578be86e4e87026) | 1B-10B | x86 | ✅ | ❌ | ✅ |
|  |  | ARM | ✅ | ✅ | ❌ |
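As the tables show, kernel availability depends on the host architecture: TL1 is ARM-only, TL2 is x86-only, and I2_S covers both for most models. A minimal sketch (not part of this repo) that maps the host architecture to a quantization type accepted by the `setup_env.py` script described under Installation below:

```bash
# A minimal sketch: choose a quant type accepted by setup_env.py
# (-q {i2_s,tl1}) from the host architecture, per the tables above.
case "$(uname -m)" in
    arm64|aarch64) quant=tl1 ;;   # TL1 kernels are ARM-only
    *)             quant=i2_s ;;  # I2_S is supported on both x86 and ARM
esac
echo "Suggested quant type: $quant"
```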
## Installation

### Requirements
- python>=3.9
- cmake>=3.22
- clang>=18
    - For Windows users, install [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/). In the installer, toggle on at least the following options (this also automatically installs the required additional tools like CMake):
        - Desktop development with C++
        - C++ CMake Tools for Windows
        - Git for Windows
        - C++ Clang Compiler for Windows
        - MS-Build Support for LLVM Toolset (clang)
    - For Debian/Ubuntu users, you can install clang with the [automatic installation script](https://apt.llvm.org/):

        `bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"`
- conda (highly recommended)
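Before building, it can help to confirm the toolchain meets these minimums; a quick sanity check (the version numbers are the ones listed above):

```bash
# Quick check against the minimum versions listed above.
python3 --version   # expect >= 3.9
cmake --version     # expect >= 3.22
clang --version     # expect >= 18
```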
href=\"https://huggingface.co/HF1BitLLM/Llama3-8B-1.58-100B-tokens\"\u003eLlama3-8B-1.58-100B-tokens\u003c/a\u003e\u003c/td\u003e\n        \u003ctd rowspan=\"2\"\u003e8.0B\u003c/td\u003e\n        \u003ctd\u003ex86\u003c/td\u003e\n        \u003ctd\u003e\u0026#9989;\u003c/td\u003e\n        \u003ctd\u003e\u0026#10060;\u003c/td\u003e\n        \u003ctd\u003e\u0026#9989;\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eARM\u003c/td\u003e\n        \u003ctd\u003e\u0026#9989;\u003c/td\u003e\n        \u003ctd\u003e\u0026#9989;\u003c/td\u003e\n        \u003ctd\u003e\u0026#10060;\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd rowspan=\"2\"\u003e\u003ca href=\"https://huggingface.co/collections/tiiuae/falcon3-67605ae03578be86e4e87026\"\u003eFalcon3 Family\u003c/a\u003e\u003c/td\u003e\n        \u003ctd rowspan=\"2\"\u003e1B-10B\u003c/td\u003e\n        \u003ctd\u003ex86\u003c/td\u003e\n        \u003ctd\u003e\u0026#9989;\u003c/td\u003e\n        \u003ctd\u003e\u0026#10060;\u003c/td\u003e\n        \u003ctd\u003e\u0026#9989;\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eARM\u003c/td\u003e\n        \u003ctd\u003e\u0026#9989;\u003c/td\u003e\n        \u003ctd\u003e\u0026#9989;\u003c/td\u003e\n        \u003ctd\u003e\u0026#10060;\u003c/td\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e\n\n\n\n## Installation\n\n### Requirements\n- python\u003e=3.9\n- cmake\u003e=3.22\n- clang\u003e=18\n    - For Windows users, install [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/). In the installer, toggle on at least the following options(this also automatically installs the required additional tools like CMake):\n        -  Desktop-development with C++\n        -  C++-CMake Tools for Windows\n        -  Git for Windows\n        -  C++-Clang Compiler for Windows\n        -  MS-Build Support for LLVM-Toolset (clang)\n    - For Debian/Ubuntu users, you can download with [Automatic installation script](https://apt.llvm.org/)\n\n        `bash -c \"$(wget -O - https://apt.llvm.org/llvm.sh)\"`\n- conda (highly recommend)\n\n### Build from source\n\n\u003e [!IMPORTANT]\n\u003e If you are using Windows, please remember to always use a Developer Command Prompt / PowerShell for VS2022 for the following commands. Please refer to the FAQs below if you see any issues.\n\n1. Clone the repo\n```bash\ngit clone --recursive https://github.com/microsoft/BitNet.git\ncd BitNet\n```\n2. Install the dependencies\n```bash\n# (Recommended) Create a new conda environment\nconda create -n bitnet-cpp python=3.9\nconda activate bitnet-cpp\n\npip install -r requirements.txt\n```\n3. 
## Usage
### Basic usage
```bash
# Run inference with the quantized model
python run_inference.py -m models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf -p "You are a helpful assistant" -cnv
```
<pre>
usage: run_inference.py [-h] [-m MODEL] [-n N_PREDICT] -p PROMPT [-t THREADS] [-c CTX_SIZE] [-temp TEMPERATURE] [-cnv]

Run inference

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        Path to model file
  -n N_PREDICT, --n-predict N_PREDICT
                        Number of tokens to predict when generating text
  -p PROMPT, --prompt PROMPT
                        Prompt to generate text from
  -t THREADS, --threads THREADS
                        Number of threads to use
  -c CTX_SIZE, --ctx-size CTX_SIZE
                        Size of the prompt context
  -temp TEMPERATURE, --temperature TEMPERATURE
                        Temperature, a hyperparameter that controls the randomness of the generated text
  -cnv, --conversation  Whether to enable chat mode or not (for instruct models).
                        (When this option is turned on, the prompt specified by -p will be used as the system prompt.)
</pre>
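For one-shot generation, omit `-cnv` so the prompt is completed directly; for example, using only the flags documented above:

```bash
# Single-pass completion: generate 128 tokens from the prompt on 4 threads.
python run_inference.py -m models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf \
    -p "1-bit LLMs are promising because" -n 128 -t 4 -temp 0.8
```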
### Benchmark
We provide scripts to benchmark inference with a given model.

```
usage: e2e_benchmark.py -m MODEL [-n N_TOKEN] [-p N_PROMPT] [-t THREADS]

Setup the environment for running the inference

required arguments:
  -m MODEL, --model MODEL
                        Path to the model file.

optional arguments:
  -h, --help
                        Show this help message and exit.
  -n N_TOKEN, --n-token N_TOKEN
                        Number of generated tokens.
  -p N_PROMPT, --n-prompt N_PROMPT
                        Prompt to generate text from.
  -t THREADS, --threads THREADS
                        Number of threads to use.
```

Here's a brief explanation of each argument:

- `-m`, `--model`: The path to the model file. This is a required argument.
- `-n`, `--n-token`: The number of tokens to generate during inference. Optional, with a default value of 128.
- `-p`, `--n-prompt`: The number of prompt tokens to use for generating text. Optional, with a default value of 512.
- `-t`, `--threads`: The number of threads to use for running the inference. Optional, with a default value of 2.
- `-h`, `--help`: Show the help message and exit.

For example:

```sh
python utils/e2e_benchmark.py -m /path/to/model -n 200 -p 256 -t 4
```

This command runs the inference benchmark using the model at `/path/to/model`, generating 200 tokens from a 256-token prompt, using 4 threads.

For model layouts not supported by any public model, we provide scripts to generate a dummy model with the given layout and run the benchmark on your machine:

```bash
python utils/generate-dummy-bitnet-model.py models/bitnet_b1_58-large --outfile models/dummy-bitnet-125m.tl1.gguf --outtype tl1 --model-size 125M

# Run benchmark with the generated model; use -m to specify the model path, -p the number of prompt tokens, -n the number of tokens to generate
python utils/e2e_benchmark.py -m models/dummy-bitnet-125m.tl1.gguf -p 512 -n 128
```
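To pick a good `-t` value for your CPU, a small sketch that sweeps a few thread counts with the same documented flags:

```bash
# Sweep thread counts with the benchmark script; flags as documented above.
for t in 1 2 4 8; do
    python utils/e2e_benchmark.py -m models/dummy-bitnet-125m.tl1.gguf -p 512 -n 128 -t "$t"
done
```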
### FAQ (Frequently Asked Questions)📌

#### Q1: The build fails with errors in llama.cpp due to issues with std::chrono in log.cpp. How do I fix it?

**A:**
This is an issue introduced in a recent version of llama.cpp. Please refer to this [commit](https://github.com/tinglou/llama.cpp/commit/4e3db1e3d78cc1bcd22bcb3af54bd2a4628dd323) in the [discussion](https://github.com/abetlen/llama-cpp-python/issues/1942) to fix this issue.

#### Q2: How do I build with clang in a conda environment on Windows?

**A:**
Before building the project, verify your clang installation and access to the Visual Studio tools by running:
```
clang -v
```

This command checks that you are using the correct version of clang and that the Visual Studio tools are available. If you see an error message such as:
```
'clang' is not recognized as an internal or external command, operable program or batch file.
```

it indicates that your command-line window is not properly initialized for the Visual Studio tools.

• If you are using Command Prompt, run:
```
"C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\Tools\VsDevCmd.bat" -startdir=none -arch=x64 -host_arch=x64
```

• If you are using Windows PowerShell, run the following commands:
```
Import-Module "C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell 3f0e31ad -SkipAutomaticLocation -DevCmdArguments "-arch=x64 -host_arch=x64"
```

These steps will initialize your environment and allow you to use the correct Visual Studio tools.