{"id":24108711,"url":"https://github.com/beomi/bitnet-transformers","last_synced_at":"2025-05-07T06:55:38.854Z","repository":{"id":202571108,"uuid":"707065602","full_name":"Beomi/BitNet-Transformers","owner":"Beomi","description":"0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of \"BitNet: Scaling 1-bit Transformers for Large Language Models\" in pytorch with Llama(2) Architecture","archived":false,"fork":false,"pushed_at":"2024-03-17T23:14:36.000Z","size":602,"stargazers_count":302,"open_issues_count":8,"forks_count":31,"subscribers_count":9,"default_branch":"main","last_synced_at":"2025-05-07T06:55:34.603Z","etag":null,"topics":["llm","quantization","quantization-aware-training","transformers"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Beomi.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-10-19T06:47:28.000Z","updated_at":"2025-05-04T15:09:31.000Z","dependencies_parsed_at":null,"dependency_job_id":"e8a1ab0c-c173-4498-8e46-c2c88e048273","html_url":"https://github.com/Beomi/BitNet-Transformers","commit_stats":null,"previous_names":["beomi/bitnet-transformers"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Beomi%2FBitNet-Transformers","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Beomi%2FBitNet-Transformers/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Beomi%2FBitNet-Transformers/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Beomi%2FBitNet-Transformers/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Beomi","download_url":"https://codeload.github.com/Beomi/BitNet-Transformers/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252831311,"owners_count":21810783,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["llm","quantization","quantization-aware-training","transformers"],"created_at":"2025-01-10T23:57:06.638Z","updated_at":"2025-05-07T06:55:38.848Z","avatar_url":"https://github.com/Beomi.png","language":"Python","readme":"# 0️⃣1️⃣🤗 BitNet-Transformers: Huggingface Transformers Implementation of \"BitNet: Scaling 1-bit Transformers for Large Language Models\" in pytorch with Llama(2) Architecture\r\n\r\n![BitNet Architecture](./static/bitnet-arch.png)\r\n\r\n![BitNet](./static/bitnet.png)\r\n\r\n- Paper Link: https://arxiv.org/pdf/2310.11453.pdf\r\n\r\n## Prepare Dev env\r\n\r\n```bash\r\n# Clone this repo\r\ngit clone https://github.com/beomi/bitnet-transformers\r\ncd bitnet-transformers\r\n\r\n# Install requirements\r\npip install -r clm_requirements.txt\r\n\r\n# Clone transformers repo\r\ngit clone https://github.com/huggingface/transformers\r\npip install -e transformers\r\n\r\n# Update Llama(2) model\r\nrm ./transformers/src/transformers/models/llama/modeling_llama.py\r\nln -s $(pwd)/bitnet_llama/modeling_llama.py ./transformers/src/transformers/models/llama/modeling_llama.py\r\n```\r\n\r\nWe'll overwrite `bitnet_llama/modeling_llama.py` into `transformers`. Since the file is linked, any changes made to the file will be reflected in the `transformers` repo.\r\n\r\n## Train Wikitext-103\r\n\r\n![Train Loss Graph when train BitLLAMA using Wikitext-103](./static/W\u0026B_Chart_2023.10.20_wikitext.png)\r\n\r\n\u003e You can track metrics via wandb\r\n\r\n```bash\r\n./train_wikitext.sh\r\n```\r\n\r\n## GPU Mem Usage Comparison\r\n\r\n**Train Config**\r\n\r\n- Batch size: 1\r\n- Gradient accumulation: 1\r\n- Seq length: 2048\r\n- Model: `LLamaForCausalLM` with `BitLinear` layer\r\n- Model size: 47,452,672 (47.5M)\r\n\r\n**Original LLAMA - 16bit**\r\n\r\n- Uses **250MB** GPU memory for Model weights\r\n\r\n**BitLLAMA - Mixed 16bit**\r\n\r\n- Uses **200MB** GPU memory for Model weights\r\n- Use bf16(or fp16) to store model weights\r\n- Use int8 to store `-1`/`1` 1-bit weights\r\n- Use more memory when training than original LLAMA: It saves 1-bit weight and 16bit weight together\r\n\r\n**BitLLAMA - 8bit**\r\n\r\n- Uses **100MB** GPU memory for Model weights\r\n- Use bf16(or fp16) on-the-fly when needed\r\n- Use 8bit to save 1-bit BitLinear weight \u0026 other weights\r\n\r\n**BitLLAMA - 1bit**\r\n\r\n- Use bf16(or fp16) on-the-fly when needed\r\n- Use 1bit to save 1-bit weight\r\n\r\n```bash\r\nTBD\r\n```\r\n\r\n## Todo\r\n\r\n- [x] Add `BitLinear` layer\r\n- [x] Add `LLamaForCausalLM` model with `BitLinear` layer\r\n    - [x] Update `.save_pretrained` method (for 1-bit weight saving)\r\n- [x] Add sample code for LM training\r\n- [ ] Update `BitLinear` layer to use 1-bit weight\r\n    - [ ] Use uint8 instead of bfloat16\r\n    - [ ] Use custom cuda kernel for 1-bit weight\r\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbeomi%2Fbitnet-transformers","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbeomi%2Fbitnet-transformers","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbeomi%2Fbitnet-transformers/lists"}