{"id":30904510,"url":"https://github.com/11sshukla/model_quantization","last_synced_at":"2025-11-01T00:02:07.422Z","repository":{"id":313472625,"uuid":"1051551390","full_name":"11SShukla/model_quantization","owner":"11SShukla","description":"Quantizing TinyLlama to 8-bit","archived":false,"fork":false,"pushed_at":"2025-09-06T09:23:45.000Z","size":10,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-09-06T10:15:36.613Z","etag":null,"topics":["accelerator","bitsandbytes","touch","transformer"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/11SShukla.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-09-06T08:23:00.000Z","updated_at":"2025-09-06T09:30:09.000Z","dependencies_parsed_at":"2025-09-06T10:16:07.253Z","dependency_job_id":null,"html_url":"https://github.com/11SShukla/model_quantization","commit_stats":null,"previous_names":["11sshukla/model_quantization"],"tags_count":null,"template":false,"template_full_name":null,"purl":"pkg:github/11SShukla/model_quantization","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/11SShukla%2Fmodel_quantization","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/11SShukla%2Fmodel_quantization/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/11SShukla%2Fmodel_quantization/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/11SShukla%2Fmodel_quantization/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/11SShukla","download_url":"https://codeload.github.com/11SShukla/model_quantization/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/11SShukla%2Fmodel_quantization/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":274275159,"owners_count":25254902,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-09T02:00:10.223Z","response_time":80,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["accelerator","bitsandbytes","touch","transformer"],"created_at":"2025-09-09T09:33:26.697Z","updated_at":"2025-11-01T00:02:07.406Z","avatar_url":"https://github.com/11SShukla.png","language":"Python","readme":"# model_quantization\n\n## TinyLlama 8-bit Quantization Guide\n\n## 📌 Introduction\nQuantization 
Quantization is a technique that reduces the memory footprint and improves the inference speed of large language models (LLMs) by representing weights with lower precision (e.g., 8-bit integers instead of 16-bit floating-point numbers).

In this project, we **quantized TinyLlama-1.1B-Chat** from FP16 (16-bit floating point) to 8-bit using the `transformers` library and `bitsandbytes`.

This guide explains:

- Why quantization is important
- How to quantize TinyLlama to 8-bit
- How to **save and reuse** the quantized model
- How to evaluate performance (loss & perplexity)
- Why this approach is useful for others

---

## Why Quantization?
Quantization provides several key benefits:

- Memory Efficiency  
  FP16 weights take two bytes each; converting to 8-bit roughly halves the weight memory, letting larger models fit on smaller GPUs. For TinyLlama's 1.1B parameters that is about 2.2 GB of weights in FP16 versus roughly 1.1 GB in 8-bit.

- Faster Inference  
  8-bit weights mean fewer bytes to move per weight, which often speeds up inference when the model is memory-bound.

- Accessibility  
  People with lower-end GPUs (e.g., 4GB/6GB VRAM) can run models that otherwise wouldn't fit.

- Cost Efficiency  
  Lower memory usage means cheaper cloud instances.

**Tradeoff:** Quantization introduces a small loss of precision, but for most inference/chat use cases the difference is negligible.

---

## Requirements

### Install Dependencies
Make sure you have Python 3.9+ and install:

```bash
pip install torch transformers bitsandbytes accelerate
```
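
---

## Example: Quantizing to 8-bit

Loading in 8-bit is essentially a one-flag change: pass a `BitsAndBytesConfig(load_in_8bit=True)` to `from_pretrained` and let `accelerate` handle device placement. The sketch below shows the usual `transformers` + `bitsandbytes` pattern; the Hugging Face model id `TinyLlama/TinyLlama-1.1B-Chat-v1.0` and the test prompt are assumptions for illustration, not taken from this repo's script.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed Hugging Face model id for TinyLlama-1.1B-Chat.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# Ask bitsandbytes to store the linear-layer weights as 8-bit integers.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # accelerate places layers on the available GPU(s)
)

# Quick smoke test.
prompt = "Explain quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```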
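
---

## Example: Saving and Reusing the Quantized Model

A sketch continuing from the loading example above. Saving 8-bit weights directly with `save_pretrained` requires a reasonably recent `transformers`/`bitsandbytes` (older versions do not serialize 8-bit checkpoints, in which case you simply re-quantize from the original checkpoint at load time). The output directory name is made up for illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

save_dir = "tinyllama-1.1b-chat-8bit"  # assumed output path

# Write the 8-bit weights plus config and tokenizer files to disk.
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

# The saved config records the 8-bit quantization, so no
# BitsAndBytesConfig is needed when reloading later.
reloaded_model = AutoModelForCausalLM.from_pretrained(save_dir, device_map="auto")
reloaded_tokenizer = AutoTokenizer.from_pretrained(save_dir)
```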
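
---

## Example: Evaluating Loss & Perplexity

Perplexity is just `exp(loss)`, where the loss is the mean token-level cross-entropy the model reports when the input ids are passed back as labels. A minimal sketch, reusing `model` and `tokenizer` from the loading example; the sample text is arbitrary. Running the same function on the FP16 baseline and the 8-bit model gives a direct check that the quality drop from quantization is small.

```python
import math
import torch

def perplexity(model, tokenizer, text: str) -> float:
    """Perplexity = exp(mean token-level cross-entropy loss) over `text`."""
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return its causal-LM loss.
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

sample = "Quantization reduces memory use while keeping output quality close to the original."
print(f"8-bit perplexity: {perplexity(model, tokenizer, sample):.2f}")
```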