{"id":32208768,"url":"https://github.com/e-sensing/torchopt","last_synced_at":"2026-02-19T23:01:21.816Z","repository":{"id":40286259,"uuid":"478994202","full_name":"e-sensing/torchopt","owner":"e-sensing","description":"R implementation of advanced optimizers for torch","archived":false,"fork":false,"pushed_at":"2023-06-08T19:23:14.000Z","size":4597,"stargazers_count":26,"open_issues_count":0,"forks_count":5,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-12-09T14:09:08.782Z","etag":null,"topics":["deep-learning","numerical-optimization"],"latest_commit_sha":null,"homepage":"","language":"R","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/e-sensing.png","metadata":{"files":{"readme":"README.Rmd","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2022-04-07T13:19:42.000Z","updated_at":"2023-07-14T01:26:27.000Z","dependencies_parsed_at":"2022-08-09T16:02:57.475Z","dependency_job_id":"509526d9-08a0-4d36-bab9-17803b58814e","html_url":"https://github.com/e-sensing/torchopt","commit_stats":{"total_commits":64,"total_committers":4,"mean_commits":16.0,"dds":0.5625,"last_synced_commit":"399f27b52ac09105ed4b1b1729ac76db73987d0d"},"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/e-sensing/torchopt","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/e-sensing%2Ftorchopt","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/e-sensing%2Ftorchopt/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/e-sensing%2Ftorchopt/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/e-sensing%2Ftorchopt/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/e-sensing","download_url":"https://codeload.github.com/e-sensing/torchopt/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/e-sensing%2Ftorchopt/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29636035,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-19T22:32:43.237Z","status":"ssl_error","status_checked_at":"2026-02-19T22:32:38.330Z","response_time":117,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","numerical-optimization"],"created_at":"2025-10-22T06:01:19.687Z","updated_at":"2026-02-19T23:01:21.809Z","avatar_url":"https://github.com/e-sensing.png","language":"R","readme":"---\noutput: github_document\neditor_options: \n  chunk_output_type: console\n  markdown: \n    wrap: 72\n---\n\n\u003c!-- 
<!-- README.md is generated from README.Rmd. Please edit that file -->

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
```

# torchopt

<!-- badges: start -->

[![R-CMD-check](https://github.com/e-sensing/torchopt/workflows/R-CMD-check/badge.svg)](https://github.com/e-sensing/torchopt/actions)
[![CRAN status](https://www.r-pkg.org/badges/version/torchopt)](https://cran.r-project.org/package=torchopt)
[![Software Life Cycle](https://img.shields.io/badge/lifecycle-experimental-yellow.svg)](https://lifecycle.r-lib.org/articles/stages.html)
[![Software License](https://img.shields.io/badge/license-Apache%202-green)](https://www.apache.org/licenses/LICENSE-2.0)

<!-- badges: end -->

The `torchopt` package provides R implementations of deep learning optimizers proposed in the literature. It is intended to support the use of the `torch` package in R.

## Installation

Installing the CRAN (stable) version of `torchopt`:

```{r, eval = FALSE}
install.packages("torchopt")
```

Installing the development version of `torchopt`:

```{r, eval = FALSE}
library(devtools)
install_github("e-sensing/torchopt")
```

```{r, echo = FALSE}
library(torch)
if (!torch::torch_is_installed())
    torch::install_torch()
library(torchopt)
```

## Provided optimizers

The `torchopt` package provides the following R implementations of torch
optimizers (a minimal usage sketch follows the list):

-   `optim_adamw()`: AdamW optimizer proposed by Loshchilov & Hutter
    (2019). Converted from the PyTorch code developed by Collin
    Donahue-Oponski, available at
    <https://gist.github.com/colllin/0b146b154c4351f9a40f741a28bff1e3>.

-   `optim_adabelief()`: Adabelief optimizer proposed by Zhuang et al.
    (2020). Converted from the authors' PyTorch code:
    <https://github.com/juntang-zhuang/Adabelief-Optimizer>.

-   `optim_adabound()`: Adabound optimizer proposed by Luo et al. (2019).
    Converted from the authors' PyTorch code:
    <https://github.com/Luolc/AdaBound>.

-   `optim_adahessian()`: Adahessian optimizer proposed by Yao et al. (2021).
    Converted from the authors' PyTorch code:
    <https://github.com/amirgholami>.

-   `optim_madgrad()`: Momentumized, Adaptive, Dual Averaged Gradient
    Method for Stochastic Optimization (MADGRAD) optimizer proposed by
    Defazio & Jelassi (2021). The function is imported from the
    [madgrad](https://CRAN.R-project.org/package=madgrad) package, and
    the source code is available at <https://github.com/mlverse/madgrad>.

-   `optim_nadam()`: Incorporation of Nesterov momentum into Adam,
    proposed by Dozat (2016). Converted from the PyTorch code
    <https://github.com/pytorch/pytorch>.

-   `optim_qhadam()`: Quasi-hyperbolic version of Adam proposed by Ma
    and Yarats (2019). Converted from the code developed by Meta AI:
    <https://github.com/facebookresearch/qhoptim>.

-   `optim_radam()`: Rectified version of Adam proposed by Liu et al.
    (2019). Converted from the PyTorch code
    <https://github.com/pytorch/pytorch>.

-   `optim_swats()`: Optimizer that switches from Adam to SGD, proposed by
    Keskar and Socher (2018). Converted from the PyTorch code developed by
    Patrik Purgai: <https://github.com/Mrpatekful/swats>.

-   `optim_yogi()`: Yogi optimizer proposed by Zaheer et al. (2018).
    Converted from the PyTorch code developed by Nikolay Novik:
    <https://github.com/jettify/pytorch-optimizer>.
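These optimizers follow the standard `torch::optimizer()` interface, so they
can be dropped into an ordinary `torch` training loop. The sketch below is
illustrative only; the toy data, model, and learning rate are assumptions,
not package examples:

```{r, eval = FALSE}
library(torch)
library(torchopt)

# toy regression data: y = 2x + 1 plus noise (illustrative)
x <- torch_randn(100, 1)
y <- 2 * x + 1 + 0.1 * torch_randn(100, 1)

model <- nn_linear(1, 1)
opt   <- optim_adamw(model$parameters, lr = 0.01)

for (epoch in 1:200) {
    opt$zero_grad()                    # reset accumulated gradients
    loss <- nnf_mse_loss(model(x), y)  # forward pass and loss
    loss$backward()                    # backpropagate
    opt$step()                         # update parameters
}
```

Any of the optimizers listed above can be constructed the same way, by
passing the model parameters and the optimizer's hyperparameters.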
## Optimization test functions

You can also test optimizers using optimization [test
functions](https://en.wikipedia.org/wiki/Test_functions_for_optimization)
provided by `torchopt`, including `"ackley"`, `"beale"`, `"booth"`,
`"bukin_n6"`, `"easom"`, `"goldstein_price"`, `"himmelblau"`,
`"levi_n13"`, `"matyas"`, `"rastrigin"`, `"rosenbrock"`, and `"sphere"`.
Test functions are useful for evaluating characteristics of
optimization algorithms, such as convergence rate, precision,
robustness, and performance, and they illustrate the different
situations that optimization algorithms can face.

In what follows, we run tests using the `"beale"` test function. To
visualize an animated GIF, we set `plot_each_step = TRUE` and capture each
step frame using the [gifski](https://CRAN.R-project.org/package=gifski)
package.
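Conceptually, each test below amounts to minimizing the chosen test function
with torch autograd. The following sketch shows the idea for `"beale"`; it is
an illustration of the underlying loop, not the actual `test_optim()` source,
and the starting point and learning rate are assumptions:

```{r, eval = FALSE}
library(torch)
library(torchopt)

# Beale function; its global minimum is f(3, 0.5) = 0
beale <- function(x, y) {
    (1.5 - x + x * y)^2 +
    (2.25 - x + x * y^2)^2 +
    (2.625 - x + x * y^3)^2
}

p   <- torch_tensor(c(-4, -4), requires_grad = TRUE)  # assumed start point
opt <- optim_adamw(list(p), lr = 0.1)

for (i in 1:500) {
    opt$zero_grad()
    loss <- beale(p[1], p[2])
    loss$backward()
    opt$step()
}
as.numeric(p)  # should approach c(3, 0.5)
```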
### `optim_adamw()`:

```{r test_adamw, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
# test optim_adamw
set.seed(12345)
torchopt::test_optim(
    optim = torchopt::optim_adamw,
    test_fn = "beale",
    opt_hparams = list(lr = 0.1),
    steps = 500,
    plot_each_step = TRUE
)
```

### `optim_adabelief()`:

```{r test_adabelief, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
set.seed(42)
test_optim(
    optim = optim_adabelief,
    opt_hparams = list(lr = 0.5),
    steps = 400,
    test_fn = "beale",
    plot_each_step = TRUE
)
```

### `optim_adabound()`:

```{r test_adabound, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
# set manual seed
set.seed(22)
test_optim(
    optim = optim_adabound,
    opt_hparams = list(lr = 0.5),
    steps = 400,
    test_fn = "beale",
    plot_each_step = TRUE
)
```

### `optim_adahessian()`:

```{r test_adahessian, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
# set manual seed
set.seed(290356)
test_optim(
    optim = optim_adahessian,
    opt_hparams = list(lr = 0.2),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)
```

### `optim_madgrad()`:

```{r test_madgrad, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
set.seed(256)
test_optim(
    optim = optim_madgrad,
    opt_hparams = list(lr = 0.05),
    steps = 400,
    test_fn = "beale",
    plot_each_step = TRUE
)
```

### `optim_nadam()`:

```{r test_nadam, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
set.seed(2903)
test_optim(
    optim = optim_nadam,
    opt_hparams = list(lr = 0.5, weight_decay = 0),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)
```

### `optim_qhadam()`:

```{r test_qhadam, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
set.seed(1024)
test_optim(
    optim = optim_qhadam,
    opt_hparams = list(lr = 0.1),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)
```

### `optim_radam()`:

```{r test_radam, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
set.seed(1024)
test_optim(
    optim = optim_radam,
    opt_hparams = list(lr = 1.0),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)
```

### `optim_swats()`:

```{r test_swats, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
set.seed(234)
test_optim(
    optim = optim_swats,
    opt_hparams = list(lr = 0.5),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)
```

### `optim_yogi()`:

```{r test_yogi, echo=TRUE, fig.show='animate', fig.height=8, fig.width=8, animation.hook='gifski', aniopts='loop', dpi=96, interval=0.1, out.height='50%', out.width='50%', cache=TRUE}
# set manual seed
set.seed(66)
test_optim(
    optim = optim_yogi,
    opt_hparams = list(lr = 0.1),
    steps = 500,
    test_fn = "beale",
    plot_each_step = TRUE
)
```
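In this README the animations above are assembled by the knitr `gifski`
animation hook configured in each chunk. In an interactive R session you can
produce a similar GIF directly with `gifski::save_gif()`; a sketch, where the
output path and rendering parameters are illustrative:

```{r, eval = FALSE}
library(gifski)

# capture the per-step plots from test_optim() into an animated GIF
save_gif(
    torchopt::test_optim(
        optim = torchopt::optim_adamw,
        test_fn = "beale",
        opt_hparams = list(lr = 0.1),
        steps = 100,
        plot_each_step = TRUE
    ),
    gif_file = "beale_adamw.gif",  # illustrative output path
    width = 600, height = 600, delay = 0.1
)
```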
## Acknowledgements

We are thankful to Collin Donahue-Oponski <https://github.com/colllin>,
Amir Gholami <https://github.com/amirgholami>,
Liangchen Luo <https://github.com/Luolc>, Liyuan Liu
<https://github.com/LiyuanLucasLiu>, Nikolay Novik <https://github.com/jettify>,
Patrik Purgai <https://github.com/Mrpatekful>, Juntang Zhuang
<https://github.com/juntang-zhuang>, and the PyTorch team
<https://github.com/pytorch/pytorch> for providing the PyTorch code for the
optimizers implemented in this package. We also thank Daniel Falbel
<https://github.com/dfalbel> for providing support for the R version of
PyTorch.

## Code of Conduct

The torchopt project is released with a [Contributor Code of
Conduct](https://contributor-covenant.org/version/2/0/CODE_OF_CONDUCT.html).
By contributing to this project, you agree to abide by its terms.

## References

-   ADABELIEF: Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar Tatikonda, Nicha
    Dvornek, Xenophon Papademetris, James S. Duncan. "Adabelief
    Optimizer: Adapting Stepsizes by the Belief in Observed Gradients",
    34th Conference on Neural Information Processing Systems (NeurIPS
    2020). <https://arxiv.org/abs/2010.07468>.

-   ADABOUND: Liangchen Luo, Yuanhao Xiong, Yan Liu, Xu Sun, "Adaptive Gradient
    Methods with Dynamic Bound of Learning Rate", International
    Conference on Learning Representations (ICLR), 2019.
    <https://doi.org/10.48550/arXiv.1902.09843>.

-   ADAHESSIAN: Zhewei Yao, Amir Gholami, Sheng Shen, Mustafa Mustafa, Kurt Keutzer,
    Michael W. Mahoney. "Adahessian: An Adaptive Second Order Optimizer
    for Machine Learning", AAAI Conference on Artificial Intelligence, 35(12),
    10665-10673, 2021. <https://arxiv.org/abs/2006.00719>.

-   ADAMW: Ilya Loshchilov, Frank Hutter, "Decoupled Weight Decay
    Regularization", International Conference on Learning
    Representations (ICLR), 2019.
    <https://doi.org/10.48550/arXiv.1711.05101>.

-   MADGRAD: Aaron Defazio, Samy Jelassi, "Adaptivity without Compromise: A
    Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic
    Optimization", arXiv preprint arXiv:2101.11075, 2021.
    <https://doi.org/10.48550/arXiv.2101.11075>.

-   NADAM: Timothy Dozat, "Incorporating Nesterov Momentum into Adam",
    International Conference on Learning Representations (ICLR) Workshop,
    2016. <https://openreview.net/pdf/OM0jvwB8jIp57ZJjtNEZ.pdf>.

-   QHADAM: Jerry Ma, Denis Yarats, "Quasi-hyperbolic momentum and Adam
    for deep learning", International Conference on Learning
    Representations (ICLR), 2019. <https://arxiv.org/abs/1810.06801>.

-   RADAM: Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu,
    Jianfeng Gao, Jiawei Han, "On the Variance of the Adaptive Learning
    Rate and Beyond", International Conference on Learning
    Representations (ICLR), 2020. <https://arxiv.org/abs/1908.03265>.

-   SWATS: Nitish Keskar, Richard Socher, "Improving Generalization Performance
    by Switching from Adam to SGD", International Conference on Learning
    Representations (ICLR), 2018. <https://arxiv.org/abs/1712.07628>.

-   YOGI: Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, Sanjiv
    Kumar, "Adaptive Methods for Nonconvex Optimization", Advances in
    Neural Information Processing Systems 31 (NeurIPS 2018).
    <https://papers.nips.cc/paper/8186-adaptive-methods-for-nonconvex-optimization>.