# optimī

### Fast, Modern, and Low Precision PyTorch Optimizers

[![Python Versions](https://img.shields.io/pypi/pyversions/torch-optimi)](https://pypi.org/project/torch-optimi/)
[![PyPI Version](https://img.shields.io/pypi/v/torch-optimi)](https://pypi.org/project/torch-optimi/)
[![PyPI Downloads](https://static.pepy.tech/badge/torch-optimi/month)](https://pepy.tech/projects/torch-optimi)
[![Documentation](https://img.shields.io/badge/docs-available-brightgreen)](https://optimi.benjaminwarner.dev)

optimi enables accurate low precision training via Kahan summation, integrates gradient release and optimizer accumulation for additional memory efficiency, supports fully decoupled weight decay, and features fast implementations of modern optimizers.

## Low Precision Training with Kahan Summation

optimi optimizers can match the performance of mixed precision when [training in pure BFloat16 by using Kahan summation](https://optimi.benjaminwarner.dev/kahan_summation).

![](https://ghp-cdn.benjaminwarner.dev/optimi/kahan_pretrain.png)

Training in BFloat16 with Kahan summation can reduce non-activation training memory usage by [37 to 45 percent](https://optimi.benjaminwarner.dev/kahan_summation/#memory-savings) when using an Adam optimizer. BFloat16 training can increase single GPU [training speed up to 10 percent](https://optimi.benjaminwarner.dev/kahan_summation/#training-speedup) at the same batch size.

## Fast Triton Implementations

optimi's fused [Triton optimizers](https://optimi.benjaminwarner.dev/triton) are faster than PyTorch's fused CUDA optimizers, and nearly as fast as compiled optimizers, without any hassle.

![](https://ghp-cdn.benjaminwarner.dev/optimi/adamw_speed.png)

optimi's Triton backend supports modern NVIDIA (Ampere or newer), AMD, and Intel GPUs, and is enabled by default for all optimizers.

## Fully Decoupled Weight Decay

In addition to supporting PyTorch-style decoupled weight decay, optimi optimizers also support [fully decoupled weight decay](https://optimi.benjaminwarner.dev/fully_decoupled_weight_decay).

Fully decoupled weight decay decouples weight decay from the learning rate, more accurately following [*Decoupled Weight Decay Regularization*](https://arxiv.org/abs/1711.05101).
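To make the distinction concrete, here is a minimal scalar sketch of the two decay styles (an illustration of the concept only, not optimi's actual implementation; the function names are made up):

```python
def pytorch_style_decay(weight: float, lr: float, weight_decay: float) -> float:
    # PyTorch-style decoupled decay: the decay actually applied is
    # lr * weight_decay, so retuning the learning rate silently changes
    # the effective regularization strength
    return weight * (1 - lr * weight_decay)


def fully_decoupled_decay(weight: float, weight_decay: float) -> float:
    # Fully decoupled decay: applied independently of the learning rate,
    # so it is typically set much smaller (e.g. 1e-5 rather than 1e-2)
    return weight * (1 - weight_decay)


# With lr=1e-3, a PyTorch-style weight_decay of 1e-2 applies a 1e-5 decay
# per step; fully decoupled weight decay expresses that 1e-5 directly
w = 1.0
print(pytorch_style_decay(w, lr=1e-3, weight_decay=1e-2))
print(fully_decoupled_decay(w, weight_decay=1e-5))
```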
This can help simplify hyperparameter tuning as the optimal weight decay is no longer tied to the learning rate.

## Gradient Release: Fused Backward and Optimizer Step

optimi optimizers can perform the [optimization step layer-by-layer during the backward pass](https://optimi.benjaminwarner.dev/gradient_release), immediately freeing gradient memory.

Unlike the current PyTorch implementation, optimi's gradient release optimizers are a drop-in replacement for standard optimizers and seamlessly work with existing hyperparameter schedulers.

## Optimizer Accumulation: Gradient Release and Accumulation

optimi optimizers can approximate gradient accumulation with gradient release by [accumulating gradients into the optimizer states](https://optimi.benjaminwarner.dev/optimizer_accumulation).

## Documentation

<https://optimi.benjaminwarner.dev>

## Optimizers

optimi implements the following optimizers:

- [Adam](https://optimi.benjaminwarner.dev/optimizers/adam)
- [AdamW](https://optimi.benjaminwarner.dev/optimizers/adamw)
- [Adan](https://optimi.benjaminwarner.dev/optimizers/adan)
- [Lion](https://optimi.benjaminwarner.dev/optimizers/lion)
- [RAdam](https://optimi.benjaminwarner.dev/optimizers/radam)
- [Ranger](https://optimi.benjaminwarner.dev/optimizers/ranger)
- [SGD](https://optimi.benjaminwarner.dev/optimizers/sgd)
- [StableAdamW](https://optimi.benjaminwarner.dev/optimizers/stableadamw)

## Install

optimi is available to install from PyPI.

```bash
pip install torch-optimi
```

## Usage

To use an optimi optimizer with Kahan summation and fully decoupled weight decay:

```python
import torch
from torch import nn
from optimi import AdamW

# create or cast model in low precision (bfloat16)
model = nn.Linear(20, 1, dtype=torch.bfloat16)

# initialize any optimi optimizer with parameters & fully decoupled weight decay
# Kahan summation is automatically enabled since model & inputs are bfloat16
opt = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-5, decouple_lr=True)

# forward and backward, casting input to bfloat16 if needed
loss = model(torch.randn(20, dtype=torch.bfloat16))
loss.backward()

# optimizer step
opt.step()
opt.zero_grad()
```

To use with PyTorch-style weight decay with float32 or mixed precision:

```python
# create model
model = nn.Linear(20, 1)

# initialize any optimi optimizer with parameters
opt = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```

To use with gradient release:

```python
from optimi import AdamW, prepare_for_gradient_release, remove_gradient_release
from torch.optim.lr_scheduler import CosineAnnealingLR

# initialize any optimi optimizer with `gradient_release=True`
# and call `prepare_for_gradient_release` on model and optimizer
opt = AdamW(model.parameters(), lr=1e-3, gradient_release=True)
prepare_for_gradient_release(model, opt)

# setup a learning rate scheduler like normal
scheduler = CosineAnnealingLR(opt, ...)

# calling backward on the model will perform the optimizer step
loss = model(torch.randn(20, dtype=torch.bfloat16))
loss.backward()

# optimizer step and zero_grad are no longer needed, and will
# harmlessly no-op if called by an existing training framework
# opt.step()
# opt.zero_grad()

# step the learning rate scheduler like normal
scheduler.step()

# optionally remove gradient release hooks when done training
remove_gradient_release(model)
```

To use with optimizer accumulation:

```python
# initialize any optimi optimizer with `gradient_release=True`
# and call `prepare_for_gradient_release` on model and optimizer
opt = AdamW(model.parameters(), lr=1e-3, gradient_release=True)
prepare_for_gradient_release(model, opt)

# update model parameters every four steps after accumulating
# gradients directly into the optimizer states
accumulation_steps = 4

# setup a learning rate scheduler for gradient accumulation
scheduler = CosineAnnealingLR(opt, ...)

# use existing PyTorch dataloader
for idx, batch in enumerate(dataloader):
    # `optimizer_accumulation=True` accumulates gradients into
    # optimizer states. set `optimizer_accumulation=False` to
    # update parameters by performing a full gradient release step
    opt.optimizer_accumulation = (idx+1) % accumulation_steps != 0

    # calling backward on the model will perform the optimizer step,
    # either accumulating gradients or updating model parameters
    loss = model(batch)
    loss.backward()

    # optimizer step and zero_grad are no longer needed, and will
    # harmlessly no-op if called by an existing training framework
    # opt.step()
    # opt.zero_grad()

    # step the learning rate scheduler after accumulating gradients
    if not opt.optimizer_accumulation:
        scheduler.step()

# optionally remove gradient release hooks when done training
remove_gradient_release(model)
```

## Differences from PyTorch

optimi optimizers do not support compilation, differentiation, or complex numbers, and do not have capturable versions.

optimi Adam optimizers do not support AMSGrad, and SGD does not support Nesterov momentum. Optimizers which debias updates (Adam optimizers and Adan) calculate the debias term per parameter group, not per parameter.
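The compensated summation that underlies optimi's low precision training can be sketched in a few lines of plain Python (an illustrative toy, not optimi's implementation). Each addition records the low-order bits that rounding would otherwise discard, which is exactly what happens when small BFloat16 weight updates vanish against much larger weights:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carries the rounding error of each
    addition forward in a separate compensation term."""
    total = 0.0
    comp = 0.0  # low-order bits lost by previous additions
    for v in values:
        y = v - comp            # re-apply previously lost bits
        t = total + y           # big + small: low-order bits of y may be lost
        comp = (t - total) - y  # algebraically zero; captures what was lost
        total = t
    return total


# 1000 tiny updates that naive summation loses entirely
updates = [1.0] + [1e-16] * 1000
print(sum(updates))        # 1.0 -- every tiny update rounds away
print(kahan_sum(updates))  # ~1.0000000000001 -- the updates are preserved
```

optimi applies the same idea per parameter: the compensation term plays the role of the extra mantissa bits that BFloat16 lacks, so the optimizer step loses far less information than a naive BFloat16 update.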