{"id":13564582,"url":"https://github.com/learnables/learn2learn","last_synced_at":"2025-04-09T02:16:20.517Z","repository":{"id":39032261,"uuid":"201314421","full_name":"learnables/learn2learn","owner":"learnables","description":"A PyTorch Library for Meta-learning Research","archived":false,"fork":false,"pushed_at":"2024-06-07T19:21:14.000Z","size":9986,"stargazers_count":2743,"open_issues_count":31,"forks_count":357,"subscribers_count":30,"default_branch":"master","last_synced_at":"2025-04-02T01:14:20.890Z","etag":null,"topics":["few-shot","finetuning","learn2learn","learning2learn","maml","meta-descent","meta-learning","meta-optimization","meta-rl","metalearning","pytorch"],"latest_commit_sha":null,"homepage":"http://learn2learn.net","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/learnables.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-08-08T18:22:41.000Z","updated_at":"2025-04-01T02:40:36.000Z","dependencies_parsed_at":"2022-07-11T08:38:08.469Z","dependency_job_id":"4e0056dd-dceb-485e-82e8-34807153c7a6","html_url":"https://github.com/learnables/learn2learn","commit_stats":{"total_commits":287,"total_committers":30,"mean_commits":9.566666666666666,"dds":"0.42160278745644597","last_synced_commit":"0b9d3a3d540646307ca5debf8ad9c79ffe975e1c"},"previous_names":[],"tags_count":12,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/learnables%2Flearn2learn","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposito
ries/learnables%2Flearn2learn/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/learnables%2Flearn2learn/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/learnables%2Flearn2learn/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/learnables","download_url":"https://codeload.github.com/learnables/learn2learn/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247962605,"owners_count":21024871,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["few-shot","finetuning","learn2learn","learning2learn","maml","meta-descent","meta-learning","meta-optimization","meta-rl","metalearning","pytorch"],"created_at":"2024-08-01T13:01:33.222Z","updated_at":"2025-04-09T02:16:20.499Z","avatar_url":"https://github.com/learnables.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\u003cimg src=\"https://raw.githubusercontent.com/learnables/learn2learn/gh-pages/assets/img/l2l-full.png\" height=\"120px\" /\u003e\u003c/p\u003e\n\n--------------------------------------------------------------------------------\n\n![Test Status](https://github.com/learnables/learn2learn/workflows/Testing/badge.svg?branch=master)\n[![arXiv](https://img.shields.io/badge/arXiv-2008.12284-b31b1b.svg)](https://arxiv.org/abs/2008.12284)\n\nlearn2learn is a software library for meta-learning research.\n\nlearn2learn builds on top of PyTorch to accelerate two aspects of the meta-learning research cycle:\n\n* *fast prototyping*, essential in letting researchers 
quickly try new ideas, and\n* *correct reproducibility*, ensuring that these ideas are evaluated fairly.\n\nlearn2learn provides low-level utilities and a unified interface to create new algorithms and domains, together with high-quality implementations of existing algorithms and standardized benchmarks.\nIt retains compatibility with [torchvision](https://pytorch.org/vision/), [torchaudio](https://pytorch.org/audio/), [torchtext](https://pytorch.org/text/), [cherry](http://cherry-rl.net/), and any other PyTorch-based library you might be using.\n\nTo learn more, see our whitepaper: [arXiv:2008.12284](https://arxiv.org/abs/2008.12284)\n\n**Overview**\n\n* [`learn2learn.data`](http://learn2learn.net/docs/learn2learn.data/): `Taskset` and transforms to create few-shot tasks from any PyTorch dataset.\n* [`learn2learn.vision`](http://learn2learn.net/docs/learn2learn.vision/): Models, datasets, and benchmarks for computer vision and few-shot learning.\n* [`learn2learn.gym`](http://learn2learn.net/docs/learn2learn.gym/): Environments and utilities for meta-reinforcement learning.\n* [`learn2learn.algorithms`](http://learn2learn.net/docs/learn2learn.algorithms/): High-level wrappers for existing meta-learning algorithms.\n* [`learn2learn.optim`](http://learn2learn.net/docs/learn2learn.optim/): Utilities and algorithms for differentiable optimization and meta-descent.\n\n**Resources**\n\n* Website: [http://learn2learn.net/](http://learn2learn.net/)\n* Documentation: [http://learn2learn.net/docs/learn2learn](http://learn2learn.net/docs/learn2learn)\n* Tutorials: [http://learn2learn.net/tutorials/getting_started/](http://learn2learn.net/tutorials/getting_started/)\n* Examples: [https://github.com/learnables/learn2learn/tree/master/examples](https://github.com/learnables/learn2learn/tree/master/examples)\n* GitHub: [https://github.com/learnables/learn2learn/](https://github.com/learnables/learn2learn/)\n* Slack: 
[http://slack.learn2learn.net/](http://slack.learn2learn.net/)\n\n## Installation\n\n~~~bash\npip install learn2learn\n~~~\n\n## Snippets \u0026 Examples\n\nThe following snippets provide a sneak peek at the functionalities of learn2learn.\n\n### High-level Wrappers\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eFew-Shot Learning with MAML\u003c/b\u003e\u003c/summary\u003e\n\nFor more algorithms (ProtoNets, ANIL, Meta-SGD, Reptile, Meta-Curvature, KFO) refer to the \u003ca href=\"https://github.com/learnables/learn2learn/tree/master/examples/vision\"\u003eexamples\u003c/a\u003e folder.\nMost of them can be implemented with the `GBML` wrapper (\u003ca href=\"http://learn2learn.net/docs/learn2learn.algorithms/#gbml\"\u003edocumentation\u003c/a\u003e).\n\n~~~python\nmaml = l2l.algorithms.MAML(model, lr=0.1)\nopt = torch.optim.SGD(maml.parameters(), lr=0.001)\nfor iteration in range(10):\n    opt.zero_grad()\n    task_model = maml.clone()  # torch.clone() for nn.Modules\n    adaptation_loss = compute_loss(task_model)\n    task_model.adapt(adaptation_loss)  # computes gradients, updates task_model in-place\n    evaluation_loss = compute_loss(task_model)\n    evaluation_loss.backward()  # gradients w.r.t. maml.parameters()\n    opt.step()\n~~~\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eMeta-Descent with Hypergradient\u003c/b\u003e\u003c/summary\u003e\n\nLearn any kind of optimization algorithm with the `LearnableOptimizer`. 
(\u003ca href=\"https://github.com/learnables/learn2learn/tree/master/examples/optimization\"\u003eexample\u003c/a\u003e and \u003ca href=\"http://learn2learn.net/docs/learn2learn.optim/#learnableoptimizer\"\u003edocumentation\u003c/a\u003e)\n\n~~~python\nlinear = nn.Linear(784, 10)\ntransform = l2l.optim.ModuleTransform(l2l.nn.Scale)\nmetaopt = l2l.optim.LearnableOptimizer(linear, transform, lr=0.01)  # metaopt has .step()\nopt = torch.optim.SGD(metaopt.parameters(), lr=0.001)  # metaopt also has .parameters()\n\nmetaopt.zero_grad()\nopt.zero_grad()\nerror = loss(linear(X), y)\nerror.backward()\nopt.step()  # update metaopt\nmetaopt.step()  # update linear\n~~~\n\u003c/details\u003e\n\n### Learning Domains\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eCustom Few-Shot Dataset\u003c/b\u003e\u003c/summary\u003e\n\nMany standardized datasets (Omniglot, mini-/tiered-ImageNet, FC100, CIFAR-FS) are readily available in `learn2learn.vision.datasets`.\n(\u003ca href=\"http://learn2learn.net/docs/learn2learn.vision/#learn2learnvisiondatasets\"\u003edocumentation\u003c/a\u003e)\n\n~~~python\ndataset = l2l.data.MetaDataset(MyDataset())  # any PyTorch dataset\ntransforms = [  # Easy to define your own transform\n    l2l.data.transforms.NWays(dataset, n=5),\n    l2l.data.transforms.KShots(dataset, k=1),\n    l2l.data.transforms.LoadData(dataset),\n]\ntaskset = l2l.data.Taskset(dataset, transforms, num_tasks=20000)\nfor task in taskset:\n    X, y = task\n    # Meta-train on the task\n~~~\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eEnvironments and Utilities for Meta-RL\u003c/b\u003e\u003c/summary\u003e\n\nParallelize your own meta-environments with `AsyncVectorEnv`, or use the standardized ones.\n(\u003ca href=\"http://learn2learn.net/docs/learn2learn.gym/#metaenv\"\u003edocumentation\u003c/a\u003e)\n\n~~~python\ndef make_env():\n    env = l2l.gym.HalfCheetahForwardBackwardEnv()\n    env = cherry.envs.ActionSpaceScaler(env)\n    return 
env\n\nenv = l2l.gym.AsyncVectorEnv([make_env for _ in range(16)])  # uses 16 threads\nfor task_config in env.sample_tasks(20):\n    env.set_task(task_config)  # all threads receive the same task\n    state = env.reset()  # use standard Gym API\n    action = my_policy(state)\n    env.step(action)\n~~~\n\u003c/details\u003e\n\n### Low-Level Utilities\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eDifferentiable Optimization\u003c/b\u003e\u003c/summary\u003e\n\nLearn and differentiate through updates of PyTorch Modules.\n(\u003ca href=\"http://learn2learn.net/docs/learn2learn.optim/#parameterupdate\"\u003edocumentation\u003c/a\u003e)\n\n~~~python\nmodel = MyModel()\ntransform = l2l.optim.KroneckerTransform(l2l.nn.KroneckerLinear)\nlearned_update = l2l.optim.ParameterUpdate(  # learnable update function\n        model.parameters(), transform)\nclone = l2l.clone_module(model)  # torch.clone() for nn.Modules\nerror = loss(clone(X), y)\nupdates = learned_update(  # similar API to torch.autograd.grad\n    error,\n    clone.parameters(),\n    create_graph=True,\n)\nl2l.update_module(clone, updates=updates)\nloss(clone(X), y).backward()  # gradients w.r.t. model.parameters() and learned_update.parameters()\n~~~\n\u003c/details\u003e\n\n## Changelog\n\nA human-readable changelog is available in the [CHANGELOG.md](CHANGELOG.md) file.\n\n## Citation\n\nTo cite the `learn2learn` repository in your academic publications, please use the following reference.\n\n\u003e Arnold, Sebastien M. R., Praateek Mahajan, Debajyoti Datta, Ian Bunner, and Konstantinos Saitas Zarkias. 2020. “learn2learn: A Library for Meta-Learning Research.” arXiv [cs.LG]. 
http://arxiv.org/abs/2008.12284.\n\nYou can also use the following BibTeX entry.\n\n~~~bib\n@article{Arnold2020-ss,\n  title         = \"learn2learn: A Library for {Meta-Learning} Research\",\n  author        = \"Arnold, S{\\'e}bastien M R and Mahajan, Praateek and Datta,\n                   Debajyoti and Bunner, Ian and Zarkias, Konstantinos Saitas\",\n  month         =  aug,\n  year          =  2020,\n  url           = \"http://arxiv.org/abs/2008.12284\",\n  archivePrefix = \"arXiv\",\n  primaryClass  = \"cs.LG\",\n  eprint        = \"2008.12284\"\n}\n~~~\n\n### Acknowledgements \u0026 Friends\n\n1. [TorchMeta](https://github.com/tristandeleu/pytorch-meta) is a similar library, with a focus on datasets for supervised meta-learning.\n2. [higher](https://github.com/facebookresearch/higher) is a PyTorch library that enables differentiating through optimization inner-loops. While they monkey-patch `nn.Module` to be stateless, learn2learn retains the stateful PyTorch look-and-feel. For more information, refer to [their arXiv paper](https://arxiv.org/abs/1910.01727).\n3. 
We are thankful to the following open-source implementations which helped guide the design of learn2learn:\n    * Tristan Deleu's [pytorch-maml-rl](https://github.com/tristandeleu/pytorch-maml-rl)\n    * Jonas Rothfuss' [ProMP](https://github.com/jonasrothfuss/ProMP/)\n    * Kwonjoon Lee's [MetaOptNet](https://github.com/kjunelee/MetaOptNet)\n    * Han-Jia Ye's and Hexiang Hu's [FEAT](https://github.com/Sha-Lab/FEAT)\n","funding_links":[],"categories":["Python","AutoML","Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","Profiling","Table of Contents","Pytorch \u0026 related libraries","Scheduling","Tools and projects","其他_机器学习与深度学习","Libraries"],"sub_categories":["Profiling","Other libraries｜其他库:","Other libraries:","LLM","[Meta Reinforcement Learning]()"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flearnables%2Flearn2learn","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Flearnables%2Flearn2learn","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flearnables%2Flearn2learn/lists"}