{"id":16997240,"url":"https://github.com/activatedgeek/higher-distributed","last_synced_at":"2025-10-07T10:56:34.211Z","repository":{"id":168328727,"uuid":"643370684","full_name":"activatedgeek/higher-distributed","owner":"activatedgeek","description":"Higher-order gradients in PyTorch, Parallelized","archived":false,"fork":false,"pushed_at":"2023-05-22T16:12:42.000Z","size":8,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-01-27T07:27:32.983Z","etag":null,"topics":["distributed-pytorch","machine-learning","maml-algorithm","meta-learning","pytorch"],"latest_commit_sha":null,"homepage":"https://sanyamkapoor.com/kb/higher-order-gradients-in-pytorch-parallelized","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/activatedgeek.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-21T00:05:07.000Z","updated_at":"2023-05-27T20:57:21.000Z","dependencies_parsed_at":null,"dependency_job_id":"619d5ae6-7c58-4ce3-9719-16d38c021fac","html_url":"https://github.com/activatedgeek/higher-distributed","commit_stats":null,"previous_names":["activatedgeek/higher-distributed"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activatedgeek%2Fhigher-distributed","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activatedgeek%2Fhigher-distributed/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activatedgeek%2Fhigher-distributed/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activatedgeek%2Fhigher-distributed/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/activatedgeek","download_url":"https://codeload.github.com/activatedgeek/higher-distributed/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244918709,"owners_count":20531686,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["distributed-pytorch","machine-learning","maml-algorithm","meta-learning","pytorch"],"created_at":"2024-10-14T03:54:16.611Z","updated_at":"2025-10-07T10:56:29.159Z","avatar_url":"https://github.com/activatedgeek.png","language":"Python","readme":"# Higher Distributed\n\nThis is the supporting code repository for the article [Higher-order gradients in PyTorch, Parallelized](https://sanyamkapoor.com/kb/higher-order-gradients-in-pytorch-parallelized) by Sanyam Kapoor and Ramakrishna Vedantam.\n\n## Setup\n\n(Optional) Setup a new Python environment via conda as:\n```shell\nconda env create -n \u003cname\u003e\n```\n\nInstall CUDA-compiled PyTorch version from [here](https://pytorch.org). The codebase\nhas been tested with PyTorch version `1.13` on CUDA 11.8.\n```shell\npip install 'torch\u003c2' torchvision --extra-index-url https://download.pytorch.org/whl/cu118\n```\n\nFinally, in the same target environment (e.g. the one setup above), run to setup all the dependencies.\n```shell\npip install -e .\n```\n\n## Run\n\nWe will use `CUDA_VISIBLE_DEVICES` environment variable to mask the number of GPUs available for use.\n\nFor instance, to use four GPUs:\n\n```\nCUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --multi_gpu train_toy.py\n```\n\nThe default parameters should not need changing for the demo.\n\n**NOTE**: The device IDs may need to change as per hardware availability.\n\n## License\n\nMIT\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Factivatedgeek%2Fhigher-distributed","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Factivatedgeek%2Fhigher-distributed","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Factivatedgeek%2Fhigher-distributed/lists"}