# bitsandbytes




`bitsandbytes` enables accessible large language models via k-bit quantization for PyTorch. We provide three main features for dramatically reducing memory consumption for inference and training:

* 8-bit optimizers use block-wise quantization to maintain 32-bit performance at a small fraction of the memory cost (a toy sketch of block-wise quantization follows this list).
* LLM.int8(), or 8-bit quantization, enables large language model inference with only half the required memory and without any performance degradation. This method is based on vector-wise quantization: most features are quantized to 8 bits, while outliers are treated separately with 16-bit matrix multiplication.
* QLoRA, or 4-bit quantization, enables large language model training with several memory-saving techniques that don't compromise performance. This method quantizes a model to 4 bits and inserts a small set of trainable low-rank adaptation (LoRA) weights to allow training.
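
To make the block-wise idea concrete, here is a toy sketch of block-wise absmax quantization in plain PyTorch. The function names are hypothetical and this is not the library's actual kernel code (which runs fused on the accelerator); it only illustrates why per-block scales confine the damage a single outlier can do:

```python
import torch

def absmax_quantize_blockwise(x: torch.Tensor, block_size: int = 64):
    # Hypothetical helper, illustration only: each block of `block_size`
    # values gets its own absmax scale, so one outlier only hurts
    # precision inside its block instead of across the whole tensor.
    flat = x.flatten()
    pad = (-flat.numel()) % block_size
    flat = torch.nn.functional.pad(flat, (0, pad))
    blocks = flat.view(-1, block_size)
    scales = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    q = torch.round(blocks / scales * 127).to(torch.int8)
    return q, scales

def absmax_dequantize_blockwise(q: torch.Tensor, scales: torch.Tensor, shape):
    # Reverse the mapping and trim the padding added during quantization.
    blocks = q.to(torch.float32) / 127 * scales
    return blocks.flatten()[: torch.Size(shape).numel()].view(shape)

x = torch.randn(4, 100)
q, scales = absmax_quantize_blockwise(x)
x_hat = absmax_dequantize_blockwise(q, scales, x.shape)
print((x - x_hat).abs().max())  # small per-element reconstruction error
```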

The library includes quantization primitives for 8-bit and 4-bit operations through `bitsandbytes.nn.Linear8bitLt` and `bitsandbytes.nn.Linear4bit`, and 8-bit optimizers through the `bitsandbytes.optim` module.
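
A minimal sketch of how these pieces slot into a normal PyTorch workflow (assuming a CUDA device is available; the layer sizes and hyperparameters are arbitrary):

```python
import torch
import bitsandbytes as bnb

# LLM.int8(): an 8-bit linear layer for inference.
int8_layer = bnb.nn.Linear8bitLt(1024, 1024, bias=False, has_fp16_weights=False)

# QLoRA-style 4-bit linear layer; weights are quantized when moved to the GPU.
fp16_layer = torch.nn.Linear(1024, 1024, bias=False)
nf4_layer = bnb.nn.Linear4bit(
    1024, 1024, bias=False,
    compute_dtype=torch.bfloat16,  # dtype used for the actual matmul
    quant_type="nf4",              # the NormalFloat4 data type from the QLoRA paper
)
nf4_layer.load_state_dict(fp16_layer.state_dict())
nf4_layer = nf4_layer.to("cuda")

# 8-bit optimizer as a drop-in replacement for torch.optim.Adam.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to("cuda")
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)
```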

## System Requirements
bitsandbytes has the following minimum requirements for all platforms (a quick environment check follows the list):

* Python 3.10+
* [PyTorch](https://pytorch.org/get-started/locally/) 2.3+
* _Note: While we aim to provide wide backwards compatibility, we recommend using the latest version of PyTorch for the best experience._
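
A quick way to verify an environment against these requirements (a sketch; `packaging` is assumed to be installed, as it usually is alongside PyTorch):

```python
import sys

import torch
from packaging.version import Version

assert sys.version_info >= (3, 10), "bitsandbytes requires Python 3.10+"
assert Version(torch.__version__) >= Version("2.3"), "bitsandbytes requires PyTorch 2.3+"
```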

#### Accelerator support:

Note: this table reflects the status of the current development branch. For the latest stable release, see the
[accelerator support table in the 0.49.2 README](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/0.49.2/README.md#accelerator-support).

##### Legend:
🚧 = In Development,
〰️ = Partially Supported,
✅ = Supported,
🐢 = Slow Implementation Supported,
❌ = Not Supported



| Platform | Accelerator | Hardware Requirements | LLM.int8() | QLoRA 4-bit | 8-bit Optimizers |
|----------|-------------|-----------------------|------------|-------------|------------------|
| 🐧 Linux, glibc >= 2.24 (x86-64) | ◻️ CPU | Minimum: AVX2<br>Optimized: AVX512F, AVX512BF16 | ✅ | ✅ | ❌ |
| 🐧 Linux, glibc >= 2.24 (x86-64) | 🟩 NVIDIA GPU (`cuda`) | SM60+ minimum<br>SM75+ recommended | ✅ | ✅ | ✅ |
| 🐧 Linux, glibc >= 2.24 (x86-64) | 🟥 AMD GPU (`cuda`) | CDNA: gfx90a, gfx942, gfx950<br>RDNA: gfx1100, gfx1101, gfx1150, gfx1151, gfx1200, gfx1201 | ✅ | ✅ | ✅ |
| 🐧 Linux, glibc >= 2.24 (x86-64) | 🟦 Intel GPU (`xpu`) | Data Center GPU Max Series<br>Arc A-Series (Alchemist)<br>Arc B-Series (Battlemage) | ✅ | ✅ | 〰️ |
| 🐧 Linux, glibc >= 2.24 (x86-64) | 🟪 Intel Gaudi (`hpu`) | Gaudi2, Gaudi3 | ✅ | 〰️ | ❌ |
| 🐧 Linux, glibc >= 2.24 (aarch64) | ◻️ CPU | | ✅ | ✅ | ❌ |
| 🐧 Linux, glibc >= 2.24 (aarch64) | 🟩 NVIDIA GPU (`cuda`) | SM75+ | ✅ | ✅ | ✅ |
| 🪟 Windows 11 / Windows Server 2022+ (x86-64) | ◻️ CPU | AVX2 | ✅ | ✅ | ❌ |
| 🪟 Windows 11 / Windows Server 2022+ (x86-64) | 🟩 NVIDIA GPU (`cuda`) | SM60+ minimum<br>SM75+ recommended | ✅ | ✅ | ✅ |
| 🪟 Windows 11 / Windows Server 2022+ (x86-64) | 🟦 Intel GPU (`xpu`) | Arc A-Series (Alchemist)<br>Arc B-Series (Battlemage) | ✅ | ✅ | 〰️ |
| 🍎 macOS 14+ (arm64) | ◻️ CPU | Apple M1+ | 🐢 | 🐢 | ❌ |
| 🍎 macOS 14+ (arm64) | ⬜ Metal (`mps`) | Apple M1+ | 🐢 | 🐢 | ❌ |
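
The accelerators above map onto standard PyTorch device types, and bitsandbytes dispatches to the appropriate backend based on where the tensors live. A minimal sketch for picking a device (the preference order here is an illustrative choice, not something the library mandates):

```python
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():  # NVIDIA GPUs, and AMD GPUs on ROCm builds
        return torch.device("cuda")
    if hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel GPUs
        return torch.device("xpu")
    if torch.backends.mps.is_available():  # Apple Silicon (slow path per the table)
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
print(f"Running on: {device}")
```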

## :book: Documentation
* [Official Documentation](https://huggingface.co/docs/bitsandbytes/main)
* 🤗 [Transformers](https://huggingface.co/docs/transformers/quantization/bitsandbytes)
* 🤗 [Diffusers](https://huggingface.co/docs/diffusers/quantization/bitsandbytes)
* 🤗 [PEFT](https://huggingface.co/docs/peft/developer_guides/quantization#quantize-a-model)

## :heart: Sponsors
The continued maintenance and development of `bitsandbytes` is made possible thanks to the generous support of our sponsors. Their contributions help ensure that we can keep improving the project and delivering valuable updates to the community.

* Hugging Face
* Intel

## License
`bitsandbytes` is MIT licensed.

## How to cite us
If you found this library useful, please consider citing our work:

### QLoRA

```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}
```

### LLM.int8()

```bibtex
@article{dettmers2022llmint8,
  title={LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale},
  author={Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2208.07339},
  year={2022}
}
```

### 8-bit Optimizers

```bibtex
@article{dettmers2022optimizers,
  title={8-bit Optimizers via Block-wise Quantization},
  author={Dettmers, Tim and Lewis, Mike and Shleifer, Sam and Zettlemoyer, Luke},
  journal={9th International Conference on Learning Representations, ICLR},
  year={2022}
}
```