{"id":19066131,"url":"https://github.com/epfml/pax","last_synced_at":"2025-04-28T12:25:04.858Z","repository":{"id":57463133,"uuid":"399116501","full_name":"epfml/pax","owner":"epfml","description":"JAX-like API for PyTorch","archived":false,"fork":false,"pushed_at":"2022-03-02T12:19:38.000Z","size":468,"stargazers_count":3,"open_issues_count":0,"forks_count":1,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-04-18T16:15:33.599Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/epfml.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2021-08-23T13:35:57.000Z","updated_at":"2023-08-16T14:54:09.000Z","dependencies_parsed_at":"2022-09-12T13:22:24.966Z","dependency_job_id":null,"html_url":"https://github.com/epfml/pax","commit_stats":null,"previous_names":[],"tags_count":5,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fpax","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fpax/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fpax/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/epfml%2Fpax/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/epfml","download_url":"https://codeload.github.com/epfml/pax/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":251312263,"owners_count":21569201,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-09T00:54:33.022Z","updated_at":"2025-04-28T12:25:04.839Z","avatar_url":"https://github.com/epfml.png","language":"Python","readme":"\u003cp\u003e\u003cimg src=\"logo.svg\" alt=\"PAX Logo\" width=\"300\"\u003e\u003c/p\u003e\n\nSharing some of JAX's beautiful API with PyTorch users.\n\nDisambiguation: for wardrobes, see [link](https://www.ikea.com/ch/en/cat/pax-system-19086/). For peace, you are at the right spot.\n\n\n## Installation\n```bash\npip install paxlib\n```\n\nor \n```bash\npip install git+https://github.com/epfml/pax.git\n```\n\n## Pytrees in PyTorch\n\n```python\nimport torch\nimport pax\n\ntree = {\n    \"a\": [torch.tensor(3.0), torch.tensor(4.0)],\n    \"c\": 4\n}\npax.tree_map(lambda x: x*2, tree)\n```\n\nNote: we currently depend on `jax` for this functionality, but we could use [dm-tree](https://github.com/deepmind/tree) instead to drop the dependency.\n\n## Autodiff that looks like JAX\n\nWe follow the API of `jax.grad`: \n```python\nimport pax\n\nf = lambda x: x**2\ndf_dx = pax.grad(f)\ndf_dx(2.0)  # tensor(4.0)\n```\n\nThis works with any Pytree as input:\n\n```python\ndef f(x):\n    return x[\"a\"] * x[\"b\"]\n\nx = {\"a\": 2.0, \"b\": -1.5}\npax.value_and_grad(f)(x)  # (tensor(-3.), {'a': tensor(-1.5000), 'b': tensor(2.)})\n```\n\nPAX also supports higher-order derivatives:\n```python\nf = lambda x: 1/6 * x**3\npax.grad(f)(2.0)  # tensor(2.)\npax.grad(pax.grad(f))(3.0) # tensor(3.)\n```\n\n## Example: Minimal SGD\n\n```python\nimport torch\nimport pax\n\nf = lambda x: x**2\ndf_dx = pax.grad(f)\n\nx = torch.randn([])  # initialization\nfor step in range(20):\n    x = x - 0.1 * df_dx(x)\n    print(x, f(x))\n```\n\n## Example: meta-learning the learning rate\n\n```python\nf = lambda x: x**2\ndf_dx = pax.grad(f)\n\ndef sgd(x, lr=0.1, num_steps=10):\n    for _ in range(num_steps):\n        x = x - lr * df_dx(x)\n    return x\n\n# optimize the learning rate\ndef meta_loss(lr):\n    x0 = 1.0\n    return f(sgd(x0, lr=lr))\n\ndf_dlr = pax.grad(meta_loss)\n\nlr = 0.1\nfor _ in range(100):\n    lr = lr - 0.1 * df_dlr(lr)\n```\n\n## Converting from PyTorch\n\nWe provide a small wrapper for PyTorch _modules_ to make them behave like [Haiku](https://github.com/deepmind/dm-haiku).\n```python\nnet = torch.nn.Linear(10, 1)  # any torch.nn.Module\n\n# convert\nforward = pax.functional_module(net)\n\n# intialize\nparams, buffers = pax.get_params(net), pax.get_buffers(net)\n\n# run\ndata_batch = torch.zeros(2, 10)\nout, buffers = forward(params, data_batch, buffers=buffers, is_training=True)\n```\n\nand also a wrapper to make PyTorch _optimizers_ functional like [Optax](https://github.com/deepmind/optax):\n\n```python\noptimizer = pax.functional_optimizer(torch.optim.Adam, lr=1e-3)\n\nf = lambda x: x**2\ndf_dx = pax.grad(f)\nparams = torch.tensor(3.)\nopt_state = optimizer.init(params)\n\nfor step in range(10):\n    params, opt_state = optimizer.step(params, df_dx(params), opt_state)\n    print(params.item())\n```\n\nUsing PAX optimizers with learning rate schedulers looks like this:\n```python\noptimizer = pax.functional_optimizer(torch.optim.SGD, lr=0)\nlr_at_step = pax.functional_schedule(torch.optim.lr_scheduler.LambdaLR, lr_lambda=lambda step: 1/(step+1), initial_lr=0.1)\n\nf = lambda x: x**2\ndf_dx = pax.grad(f)\nparams = torch.tensor(3.)\nopt_state = optimizer.init(params)\n\nfor step in range(10):\n    params, opt_state = optimizer.step(params, df_dx(params), opt_state, lr=lr_at_step(step))\n    print(params.item())\n```\n\n## Runtime overhead\n\nWe measured the time required for one epoch on CIFAR-10 with a batch size of 128.\nWe compare a standard PyTorch implementation based on [this tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) to a PAX one, using `pax.value_and_grad`, `pax.functional_module` and `pax.functional_optimizer`. This is currently a little slower than regular PyTorch code. The peak memory usage could be larger too.\n\n\u003cimg src=\"benchmark.png\" alt=\"PAX Benchmark\" width=\"500\"\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fepfml%2Fpax","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fepfml%2Fpax","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fepfml%2Fpax/lists"}