{"id":26632907,"url":"https://github.com/TimDettmers/bitsandbytes","last_synced_at":"2025-03-24T15:03:16.558Z","repository":{"id":44988942,"uuid":"373674258","full_name":"bitsandbytes-foundation/bitsandbytes","owner":"bitsandbytes-foundation","description":"Accessible large language models via k-bit quantization for PyTorch.","archived":false,"fork":false,"pushed_at":"2025-03-14T21:21:11.000Z","size":2860,"stargazers_count":6806,"open_issues_count":196,"forks_count":675,"subscribers_count":51,"default_branch":"main","last_synced_at":"2025-03-16T23:48:49.176Z","etag":null,"topics":["llm","machine-learning","pytorch","qlora","quantization"],"latest_commit_sha":null,"homepage":"https://huggingface.co/docs/bitsandbytes/main/en/index","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bitsandbytes-foundation.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-06-04T00:10:34.000Z","updated_at":"2025-03-16T20:22:04.000Z","dependencies_parsed_at":"2023-02-18T04:45:51.974Z","dependency_job_id":"55800dca-cb39-4573-82d7-6d90dfca70ca","html_url":"https://github.com/bitsandbytes-foundation/bitsandbytes","commit_stats":{"total_commits":668,"total_committers":92,"mean_commits":7.260869565217392,"dds":0.6586826347305389,"last_synced_commit":"e4674531dd54874c0abbc786ad5635c92c34dc3e"},"previous_names":["bitsandbytes-foundation/bitsandbytes","timdettmers/bitsandbytes"],"tags_count":26,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/rep
ositories/bitsandbytes-foundation%2Fbitsandbytes","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bitsandbytes-foundation%2Fbitsandbytes/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bitsandbytes-foundation%2Fbitsandbytes/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bitsandbytes-foundation%2Fbitsandbytes/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bitsandbytes-foundation","download_url":"https://codeload.github.com/bitsandbytes-foundation/bitsandbytes/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245294764,"owners_count":20591900,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["llm","machine-learning","pytorch","qlora","quantization"],"created_at":"2025-03-24T15:01:56.612Z","updated_at":"2025-03-24T15:03:16.550Z","avatar_url":"https://github.com/bitsandbytes-foundation.png","language":"Python","readme":"# `bitsandbytes`\n\n[![Downloads](https://static.pepy.tech/badge/bitsandbytes)](https://pepy.tech/project/bitsandbytes) [![Downloads](https://static.pepy.tech/badge/bitsandbytes/month)](https://pepy.tech/project/bitsandbytes) [![Downloads](https://static.pepy.tech/badge/bitsandbytes/week)](https://pepy.tech/project/bitsandbytes)\n\nThe `bitsandbytes` library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 \u0026 4-bit quantization functions.\n\nThe library includes quantization primitives for 
8-bit \u0026 4-bit operations, through `bitsandbytes.nn.Linear8bitLt` and `bitsandbytes.nn.Linear4bit` and 8-bit optimizers through `bitsandbytes.optim` module.\n\nThere are ongoing efforts to support further hardware backends, i.e. Intel CPU + GPU, AMD GPU, Apple Silicon, hopefully NPU.\n\n**Please head to the official documentation page:**\n\n**[https://huggingface.co/docs/bitsandbytes/main](https://huggingface.co/docs/bitsandbytes/main)**\n\n## License\n\n`bitsandbytes` is MIT licensed.\n\nWe thank Fabio Cannizzo for his work on [FastBinarySearch](https://github.com/fabiocannizzo/FastBinarySearch) which we use for CPU quantization.\n","funding_links":[],"categories":["Model Merging \u0026 Quantization","Core Model/Training Techniques","Tools","🧑‍🔧 Fine-Tuning \u0026 Training Optimization","Fine-tuning \u0026 Quantization (18)","10. Model Preparation \u0026 Quantization","Optimization \u0026 Performance"],"sub_categories":["Quantization Tools","Other","Data \u0026 Alignment Tools","Resources"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FTimDettmers%2Fbitsandbytes","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FTimDettmers%2Fbitsandbytes","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FTimDettmers%2Fbitsandbytes/lists"}
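To give a feel for the quantization primitives mentioned above: 8-bit schemes of this kind are commonly built on absmax quantization, where a tensor is scaled by `127 / max(|x|)`, rounded to int8, and the scale is stored for dequantization. Below is a minimal pure-Python sketch of that idea with a single per-tensor scale. It is an illustration only, not the library's API: `bitsandbytes` uses blockwise scales and fused CUDA kernels, and the function names here are hypothetical.

```python
# Illustrative sketch of absmax int8 quantization (simplified: one
# per-tensor scale; the real library uses blockwise scales and CUDA).

def quantize_absmax(values):
    """Quantize a list of floats to int8 codes with a single absmax scale."""
    absmax = max(abs(v) for v in values)
    scale = absmax / 127.0 if absmax > 0 else 1.0
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize_absmax(codes, scale):
    """Recover approximate float values from int8 codes and the stored scale."""
    return [code * scale for code in codes]

if __name__ == "__main__":
    weights = [0.5, -1.2, 0.03, 1.27]
    codes, scale = quantize_absmax(weights)
    restored = dequantize_absmax(codes, scale)
    # Each restored value lies within half a quantization step of the original.
    assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

The same round-trip structure underlies quantized linear layers: weights are stored as int8 codes plus scales, and dequantized (or matmul'd in low precision) at compute time.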