https://github.com/bitsandbytes-foundation/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
- Host: GitHub
- URL: https://github.com/bitsandbytes-foundation/bitsandbytes
- Owner: bitsandbytes-foundation
- License: mit
- Created: 2021-06-04T00:10:34.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2025-05-08T15:46:50.000Z (about 2 months ago)
- Last Synced: 2025-05-08T16:22:37.986Z (about 2 months ago)
- Topics: llm, machine-learning, pytorch, qlora, quantization
- Language: Python
- Homepage: https://huggingface.co/docs/bitsandbytes/main/en/index
- Size: 2.78 MB
- Stars: 6,987
- Watchers: 49
- Forks: 693
- Open Issues: 197
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- awesome-llmops - bitsandbytes - Accessible large language models via k-bit quantization for PyTorch. (Performance / ML Compiler)
- awesome-production-machine-learning - bitsandbytes - The bitsandbytes library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 & 4-bit quantization functions. (Computation Optimisation)
README
bitsandbytes
`bitsandbytes` enables accessible large language models via k-bit quantization for PyTorch. We provide three main features for dramatically reducing memory consumption for inference and training:
* 8-bit optimizers use block-wise quantization to maintain 32-bit performance at a small fraction of the memory cost.
* LLM.int8() or 8-bit quantization enables large language model inference with only half the required memory and without any performance degradation. This method uses vector-wise quantization to quantize most features to 8 bits while treating outliers separately with 16-bit matrix multiplication.
* QLoRA or 4-bit quantization enables large language model training with several memory-saving techniques that don't compromise performance. This method quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training.

The library includes quantization primitives for 8-bit & 4-bit operations through `bitsandbytes.nn.Linear8bitLt` and `bitsandbytes.nn.Linear4bit`, and 8-bit optimizers through the `bitsandbytes.optim` module.
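Below is a minimal usage sketch (not taken from the README) of these entry points; it assumes a CUDA-capable GPU and that `bitsandbytes` is installed. The layer shapes, `threshold` value, and NF4/`bfloat16` settings are illustrative choices, not prescriptions.

```python
# Minimal sketch of the three entry points mentioned above.
# Assumes a CUDA GPU and `pip install bitsandbytes`.
import torch
import bitsandbytes as bnb

# 8-bit inference layer: weights are quantized to int8 when moved to the GPU;
# threshold=6.0 routes outlier features through 16-bit matmul (LLM.int8()).
linear_8bit = bnb.nn.Linear8bitLt(1024, 4096, has_fp16_weights=False, threshold=6.0).cuda()

# 4-bit layer as used in QLoRA: weights stored in NF4, computation in bfloat16.
linear_4bit = bnb.nn.Linear4bit(1024, 4096, compute_dtype=torch.bfloat16, quant_type="nf4").cuda()

x = torch.randn(8, 1024, dtype=torch.float16, device="cuda")
out_8bit = linear_8bit(x)
out_4bit = linear_4bit(x)

# 8-bit optimizer: a drop-in replacement for torch.optim.Adam that keeps its
# state in block-wise quantized 8-bit tensors.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)
```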
## System Requirements
bitsandbytes has the following minimum requirements for all platforms:

* Python 3.9+
* [PyTorch](https://pytorch.org/get-started/locally/) 2.2+
* _Note: While we aim to provide wide backwards compatibility, we recommend using the latest version of PyTorch for the best experience._

#### Accelerator support
| Platform | Accelerator | Hardware Requirements | Support Status |
|---|---|---|---|
| 🐧 Linux x86-64 | ◻️ CPU | | 〰️ Partial Support |
| 🐧 Linux x86-64 | 🟩 NVIDIA GPU | SM50+ minimum, SM75+ recommended | ✅ Full Support * |
| 🐧 Linux x86-64 | 🟥 AMD GPU | gfx90a, gfx942, gfx1100 | 🚧 In Development |
| 🐧 Linux x86-64 | 🟦 Intel XPU | Data Center GPU Max Series (Ponte Vecchio), Arc A-Series (Alchemist), Arc B-Series (Battlemage) | 🚧 In Development |
| 🐧 Linux aarch64 | ◻️ CPU | | 〰️ Partial Support |
| 🐧 Linux aarch64 | 🟩 NVIDIA GPU | SM75, SM80, SM90, SM100 | ✅ Full Support * |
| 🪟 Windows x86-64 | ◻️ CPU | AVX2 | 〰️ Partial Support |
| 🪟 Windows x86-64 | 🟩 NVIDIA GPU | SM50+ minimum, SM75+ recommended | ✅ Full Support * |
| 🪟 Windows x86-64 | 🟦 Intel XPU | Arc A-Series (Alchemist), Arc B-Series (Battlemage) | 🚧 In Development |
| 🍎 macOS arm64 | ◻️ CPU / Metal | Apple M1+ | ❌ Under consideration |

\* Accelerated INT8 requires SM75+.
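As an informal aid (not part of the README), the NVIDIA compute-capability thresholds in the table above can be checked locally with plain PyTorch:

```python
# Informal helper to compare the local NVIDIA GPU against the
# compute-capability thresholds listed in the support table.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    cc = (major, minor)
    print(f"Compute capability: SM{major}{minor}")
    print("Meets SM50+ minimum:           ", cc >= (5, 0))
    print("Meets SM75+ (accelerated INT8):", cc >= (7, 5))
else:
    print("No CUDA device found; only the partial CPU backend applies.")
```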
## :book: Documentation
* [Official Documentation](https://huggingface.co/docs/bitsandbytes/main)
* 🤗 [Transformers](https://huggingface.co/docs/transformers/quantization/bitsandbytes)
* 🤗 [Diffusers](https://huggingface.co/docs/diffusers/quantization/bitsandbytes)
* 🤗 [PEFT](https://huggingface.co/docs/peft/developer_guides/quantization#quantize-a-model)

## :heart: Sponsors
The continued maintenance and development of `bitsandbytes` is made possible thanks to the generous support of our sponsors. Their contributions help ensure that we can keep improving the project and delivering valuable updates to the community.

## License
`bitsandbytes` is MIT licensed.

We thank Fabio Cannizzo for his work on [FastBinarySearch](https://github.com/fabiocannizzo/FastBinarySearch) which we use for CPU quantization.
## How to cite us
If you found this library useful, please consider citing our work:

### QLoRA
```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}
```

### LLM.int8()
```bibtex
@article{dettmers2022llmint8,
  title={LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale},
  author={Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2208.07339},
  year={2022}
}
```

### 8-bit Optimizers
```bibtex
@article{dettmers2022optimizers,
  title={8-bit Optimizers via Block-wise Quantization},
  author={Dettmers, Tim and Lewis, Mike and Shleifer, Sam and Zettlemoyer, Luke},
  journal={9th International Conference on Learning Representations, ICLR},
  year={2022}
}
```