{"id":33180286,"url":"https://github.com/TorchJD/torchjd","last_synced_at":"2025-11-16T09:00:39.469Z","repository":{"id":245585078,"uuid":"809057064","full_name":"TorchJD/torchjd","owner":"TorchJD","description":"Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with multiple losses (e.g. multi-task learning).","archived":false,"fork":false,"pushed_at":"2025-11-08T16:31:51.000Z","size":1355,"stargazers_count":274,"open_issues_count":12,"forks_count":10,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-11-08T18:15:57.653Z","etag":null,"topics":["deep-learning","jacobian-descent","multi-objective-optimization","multi-task-learning","multiobjective-optimization","multitask-learning","optimization","python","pytorch","torch"],"latest_commit_sha":null,"homepage":"https://torchjd.org","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/TorchJD.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-06-01T15:07:10.000Z","updated_at":"2025-11-03T04:38:47.000Z","dependencies_parsed_at":"2024-12-08T21:25:03.524Z","dependency_job_id":"d7ef21dc-f501-4ef1-b202-50a86d501e96","html_url":"https://github.com/TorchJD/torchjd","commit_stats":null,"previous_names":["torchjd/torchjd"],"tags_count":13,"template":false,"template_full_name":null,"purl":"pkg:github/TorchJD/torchjd","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TorchJD%2Ftor
chjd","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TorchJD%2Ftorchjd/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TorchJD%2Ftorchjd/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TorchJD%2Ftorchjd/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/TorchJD","download_url":"https://codeload.github.com/TorchJD/torchjd/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TorchJD%2Ftorchjd/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":284684461,"owners_count":27046675,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-11-16T02:00:05.974Z","response_time":65,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","jacobian-descent","multi-objective-optimization","multi-task-learning","multiobjective-optimization","multitask-learning","optimization","python","pytorch","torch"],"created_at":"2025-11-16T03:00:42.683Z","updated_at":"2025-11-16T09:00:39.457Z","avatar_url":"https://github.com/TorchJD.png","language":"Python","readme":"# ![image](docs/source/icons/favicon-32x32.png) 
TorchJD\n\n[![Doc](https://img.shields.io/badge/Doc-torchjd.org-blue?logo=data%3Aimage%2Fsvg%2Bxml%3Bbase64%2CPD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8%2BCjwhLS0gQ3JlYXRlZCB1c2luZyBLcml0YTogaHR0cDovL2tyaXRhLm9yZyAtLT4KCjxzdmcKICAgd2lkdGg9IjIwNDcuNzJwdCIKICAgaGVpZ2h0PSIyMDQ3LjcycHQiCiAgIHZpZXdCb3g9IjAgMCAyMDQ3LjcyIDIwNDcuNzIiCiAgIHZlcnNpb249IjEuMSIKICAgaWQ9InN2ZzEiCiAgIHNvZGlwb2RpOmRvY25hbWU9IlRvcmNoSkRfbG9nb19jaXJjdWxhci5zdmciCiAgIGlua3NjYXBlOnZlcnNpb249IjEuMy4yICgwOTFlMjBlZjBmLCAyMDIzLTExLTI1KSIKICAgeG1sbnM6aW5rc2NhcGU9Imh0dHA6Ly93d3cuaW5rc2NhcGUub3JnL25hbWVzcGFjZXMvaW5rc2NhcGUiCiAgIHhtbG5zOnNvZGlwb2RpPSJodHRwOi8vc29kaXBvZGkuc291cmNlZm9yZ2UubmV0L0RURC9zb2RpcG9kaS0wLmR0ZCIKICAgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIgogICB4bWxuczpzdmc9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KICA8c29kaXBvZGk6bmFtZWR2aWV3CiAgICAgaWQ9Im5hbWVkdmlldzEiCiAgICAgcGFnZWNvbG9yPSIjZmZmZmZmIgogICAgIGJvcmRlcmNvbG9yPSIjNjY2NjY2IgogICAgIGJvcmRlcm9wYWNpdHk9IjEuMCIKICAgICBpbmtzY2FwZTpzaG93cGFnZXNoYWRvdz0iMiIKICAgICBpbmtzY2FwZTpwYWdlb3BhY2l0eT0iMC4wIgogICAgIGlua3NjYXBlOnBhZ2VjaGVja2VyYm9hcmQ9IjAiCiAgICAgaW5rc2NhcGU6ZGVza2NvbG9yPSIjZDFkMWQxIgogICAgIGlua3NjYXBlOmRvY3VtZW50LXVuaXRzPSJwdCIKICAgICBpbmtzY2FwZTp6b29tPSIwLjE2Mjk4NjE1IgogICAgIGlua3NjYXBlOmN4PSIxMzk1LjgyNDEiCiAgICAgaW5rc2NhcGU6Y3k9Ijg3NC4zMDczOSIKICAgICBpbmtzY2FwZTp3aW5kb3ctd2lkdGg9IjI1NjAiCiAgICAgaW5rc2NhcGU6d2luZG93LWhlaWdodD0iMTM3MSIKICAgICBpbmtzY2FwZTp3aW5kb3cteD0iMCIKICAgICBpbmtzY2FwZTp3aW5kb3cteT0iMCIKICAgICBpbmtzY2FwZTp3aW5kb3ctbWF4aW1pemVkPSIxIgogICAgIGlua3NjYXBlOmN1cnJlbnQtbGF5ZXI9InN2ZzEiIC8%2BCiAgPGRlZnMKICAgICBpZD0iZGVmczEiIC8%2BCiAgPHBhdGgKICAgICBpZD0ic2hhcGUxIgogICAgIGZpbGw9IiMwMDAwMDAiCiAgICAgZmlsbC1ydWxlPSJldmVub2RkIgogICAgIGQ9Ik0yNTUuMjE1IDg5OS44NzVMMjU1Ljk2NCAyNTUuOTY0TDc2Ny44OTMgMjU1Ljk2NEw3NjcuODkzIDBMMCAwTDAuMDMxMjUzMyA4OTguODQ0QzAuMDMxNzMwNSA4OTguODE0IDg0LjU3MjYgODk5Ljg3NSAyNTUuMjE1IDg5OS44NzVaIgogICAgIHN0eWxlPSJmaWxsOiMxYTgxZWI7ZmlsbC1vcGFjaXR5OjEiCiAgICAgdHJhbnNmb3JtPSJtYXRyaXgoMS4wMDAw
MDAwMTQzMDcwNyAwIDAgMS4wMDAwMDAwMTQzMDcwNyAxMjcuOTgyMjI2NTIyMDU2IDEyNy45ODIyMjY1MjIwNTYpIiAvPgogIDxwYXRoCiAgICAgaWQ9InNoYXBlMDEiCiAgICAgdHJhbnNmb3JtPSJtYXRyaXgoLTEuMDAwMDAwMDA5MjIxODUgMCAwIC0xLjAwMDAwMDAwOTIyMTg1IDE5MTkuOTEzNjE3Mzk4NzEgMTkxMC4zMzcxOTY5MzEyNSkiCiAgICAgZmlsbD0iIzAwMDAwMCIKICAgICBmaWxsLXJ1bGU9ImV2ZW5vZGQiCiAgICAgZD0iTTc2OC4wNzQgMTc3Mi42MUMtMjgyLjAwNCAxNTk4LjY1IC0yMjkuNzEyIDE1MS44MjEgNzY4LjA3NCAwQzc2Ny4wODMgMjkuOTMzNyA3NjguMDk2IDE0Mi43NiA3NjguMDc0IDI2MC44ODZDNDEuNDc0NiA0NTYuOTAzIDEzNy40MjMgMTM4MC4wNiA3NjguMDc0IDE1MTMuNjQiCiAgICAgc3R5bGU9ImZpbGw6IzFhODFlYjtmaWxsLW9wYWNpdHk6MSIgLz4KICA8cGF0aAogICAgIGlkPSJzaGFwZTAyIgogICAgIGZpbGw9IiMwMDAwMDAiCiAgICAgZmlsbC1ydWxlPSJldmVub2RkIgogICAgIGQ9Ik03NjcuOTA5IDg4Ny4zMzhDMjYzLjQwMiA4MDMuOTI2IDAuMDc1OTQyMSAzODcuOTY0IDAgMC4wODU2NDk3QzE0LjY4NjggLTAuMDI4NTQ5OSA5OS4wNTUxIC0wLjAyODU0OTkgMjU1LjAxMSAwLjA4NTY0OTdDMjU1LjMxMSAyODEuMTE0IDQ0OC43ODYgNTYyLjE2MyA3NjcuOTA5IDYyNi40OTkiCiAgICAgc3R5bGU9ImZpbGw6IzFhODFlYjtmaWxsLW9wYWNpdHk6MSIKICAgICB0cmFuc2Zvcm09Im1hdHJpeCgwLjk5OTk5OTk2MDczODQ0IDAgMCAwLjk5OTk5OTk2MDczODQ0IDEyNy45NjY1OTE0OTQzMjggMTAyMy43NzIxNDc4MzE0KSIgLz4KICA8ZWxsaXBzZQogICAgIHN0eWxlPSJmaWxsOiMxYTgxZWI7c3Ryb2tlLXdpZHRoOjEuMDY3OTtmaWxsLW9wYWNpdHk6MSIKICAgICBpZD0icGF0aDEiCiAgICAgY3g9IjEwMjYuMzYxIgogICAgIGN5PSIxMDE0LjIyMTEiCiAgICAgcng9IjE4My4yNTU0MyIKICAgICByeT0iMTgzLjUxNTU4IiAvPgo8L3N2Zz4K)](https://torchjd.org)\n[![Tests](https://github.com/TorchJD/torchjd/actions/workflows/tests.yml/badge.svg)](https://github.com/TorchJD/torchjd/actions/workflows/tests.yml)\n[![codecov](https://codecov.io/gh/TorchJD/torchjd/graph/badge.svg?token=8AUCZE76QH)](https://codecov.io/gh/TorchJD/torchjd)\n[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/TorchJD/torchjd/main.svg)](https://results.pre-commit.ci/latest/github/TorchJD/torchjd/main)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/torchjd)](https://pypi.org/project/torchjd/)\n[![Static 
Badge](https://img.shields.io/badge/Discord%20-%20community%20-%20%235865F2?logo=discord\u0026logoColor=%23FFFFFF\u0026label=Discord)](https://discord.gg/76KkRnb3nk)\n\nTorchJD is a library extending autograd to enable\n[Jacobian descent](https://arxiv.org/pdf/2406.16232) with PyTorch. It can be used to train neural\nnetworks with multiple objectives. In particular, it supports multi-task learning, with a wide\nvariety of aggregators from the literature. It also enables the instance-wise risk minimization\nparadigm. The full documentation is available at [torchjd.org](https://torchjd.org), with several\nusage examples.\n\n## Jacobian descent (JD)\nJacobian descent is an extension of gradient descent supporting the optimization of vector-valued\nfunctions. This algorithm can be used to train neural networks with multiple loss functions. In this\ncontext, JD iteratively updates the parameters of the model using the Jacobian matrix of the vector\nof losses (the matrix stacking each individual loss' gradient). For more details, please refer to\nSection 2.1 of the [paper](https://arxiv.org/pdf/2406.16232).\n\n### How does this compare to averaging the different losses and using gradient descent?\n\nAveraging the losses and computing the gradient of the mean is mathematically equivalent to\ncomputing the Jacobian and averaging its rows. However, this approach has limitations. If two\ngradients are conflicting (they have a negative inner product), simply averaging them can result in\nan update vector that is conflicting with one of the two gradients. Averaging the losses and making\na step of gradient descent can thus lead to an increase of one of the losses.\n\nThis is illustrated in the following picture, in which the two objectives' gradients $g_1$ and $g_2$\nare conflicting, and averaging them gives an update direction that is detrimental to the first\nobjective. 
Note that in this picture, the dual cone, represented in green, is the set of vectors\nthat have a non-negative inner product with both $g_1$ and $g_2$.\n\n![image](docs/source/_static/direction_upgrad_mean.svg)\n\nWith Jacobian descent, $g_1$ and $g_2$ are computed individually and carefully aggregated using an\naggregator $\\mathcal A$. In this example, the aggregator is the Unconflicting Projection of\nGradients $\\mathcal A_{\\text{UPGrad}}$: it\nprojects each gradient onto the dual cone, and averages the projections. This ensures that the\nupdate will always be beneficial to each individual objective (given a sufficiently small step\nsize). In addition to $\\mathcal A_{\\text{UPGrad}}$, TorchJD supports\n[more than 10 aggregators from the literature](https://torchjd.org/stable/docs/aggregation).\n\n## Installation\n\u003c!-- start installation --\u003e\nTorchJD can be installed directly with pip:\n```bash\npip install torchjd\n```\n\u003c!-- end installation --\u003e\nSome aggregators may have additional dependencies. Please refer to the\n[installation documentation](https://torchjd.org/stable/installation) for them.\n\n## Usage\nThere are two main ways to use TorchJD. The first one is to replace the usual call to\n`loss.backward()` by a call to\n[`torchjd.autojac.backward`](https://torchjd.org/stable/docs/autojac/backward/) or\n[`torchjd.autojac.mtl_backward`](https://torchjd.org/stable/docs/autojac/mtl_backward/), depending\non the use-case. This will compute the Jacobian of the vector of losses with respect to the model\nparameters, and aggregate it with the specified\n[`Aggregator`](https://torchjd.org/stable/docs/aggregation/index.html#torchjd.aggregation.Aggregator).\nWhenever you want to optimize the vector of per-sample losses, you should rather use the\n[`torchjd.autogram.Engine`](https://torchjd.org/stable/docs/autogram/engine.html). 
Instead of\ncomputing the full Jacobian at once, it computes the Gramian of this Jacobian, layer by layer, in a\nmemory-efficient way. A vector of weights (one per element of the batch) can then be extracted from\nthis Gramian, using a\n[`Weighting`](https://torchjd.org/stable/docs/aggregation/index.html#torchjd.aggregation.Weighting),\nand used to combine the losses of the batch. Assuming each element of the batch is\nprocessed independently from the others, this approach is equivalent to\n[`torchjd.autojac.backward`](https://torchjd.org/stable/docs/autojac/backward/) while being\ngenerally much faster due to the lower memory usage. Note that we're still working on making\n`autogram` faster and more memory-efficient, and its interface may change in future releases.\n\nThe following example shows how to use TorchJD to train a multi-task model with Jacobian descent,\nusing [UPGrad](https://torchjd.org/stable/docs/aggregation/upgrad/).\n\n```diff\n  import torch\n  from torch.nn import Linear, MSELoss, ReLU, Sequential\n  from torch.optim import SGD\n\n+ from torchjd.autojac import mtl_backward\n+ from torchjd.aggregation import UPGrad\n\n  shared_module = Sequential(Linear(10, 5), ReLU(), Linear(5, 3), ReLU())\n  task1_module = Linear(3, 1)\n  task2_module = Linear(3, 1)\n  params = [\n      *shared_module.parameters(),\n      *task1_module.parameters(),\n      *task2_module.parameters(),\n  ]\n\n  loss_fn = MSELoss()\n  optimizer = SGD(params, lr=0.1)\n+ aggregator = UPGrad()\n\n  inputs = torch.randn(8, 16, 10)  # 8 batches of 16 random input vectors of length 10\n  task1_targets = torch.randn(8, 16, 1)  # 8 batches of 16 targets for the first task\n  task2_targets = torch.randn(8, 16, 1)  # 8 batches of 16 targets for the second task\n\n  for input, target1, target2 in zip(inputs, task1_targets, task2_targets):\n      features = shared_module(input)\n      output1 = task1_module(features)\n      output2 = task2_module(features)\n      loss1 = loss_fn(output1, 
target1)\n      loss2 = loss_fn(output2, target2)\n\n      optimizer.zero_grad()\n-     loss = loss1 + loss2\n-     loss.backward()\n+     mtl_backward(losses=[loss1, loss2], features=features, aggregator=aggregator)\n      optimizer.step()\n```\n\n\u003e [!NOTE]\n\u003e In this example, the Jacobian is only with respect to the shared parameters. The task-specific\n\u003e parameters are simply updated via the gradient of their task’s loss with respect to them.\n\nThe following example shows how to use TorchJD to minimize the vector of per-instance losses with\nJacobian descent using [UPGrad](https://torchjd.org/stable/docs/aggregation/upgrad/).\n\n```diff\n  import torch\n  from torch.nn import Linear, MSELoss, ReLU, Sequential\n  from torch.optim import SGD\n\n+ from torchjd.autogram import Engine\n+ from torchjd.aggregation import UPGradWeighting\n\n  model = Sequential(Linear(10, 5), ReLU(), Linear(5, 3), ReLU(), Linear(3, 1), ReLU())\n\n- loss_fn = MSELoss()\n+ loss_fn = MSELoss(reduction=\"none\")\n  optimizer = SGD(model.parameters(), lr=0.1)\n\n+ weighting = UPGradWeighting()\n+ engine = Engine(model, batch_dim=0)\n\n  inputs = torch.randn(8, 16, 10)  # 8 batches of 16 random input vectors of length 10\n  targets = torch.randn(8, 16)  # 8 batches of 16 targets\n\n  for input, target in zip(inputs, targets):\n      output = model(input).squeeze(dim=1)  # shape [16]\n-     loss = loss_fn(output, target)  # scalar\n+     losses = loss_fn(output, target)  # shape [16]\n\n      optimizer.zero_grad()\n-     loss.backward()\n+     gramian = engine.compute_gramian(losses)  # shape: [16, 16]\n+     weights = weighting(gramian)  # shape: [16]\n+     losses.backward(weights)\n      optimizer.step()\n```\n\nLastly, you can even combine the two approaches by considering multiple tasks and each element of\nthe batch independently. 
We call that Instance-Wise Multitask Learning (IWMTL).\n\n```python\nimport torch\nfrom torch.nn import Linear, MSELoss, ReLU, Sequential\nfrom torch.optim import SGD\n\nfrom torchjd.aggregation import Flattening, UPGradWeighting\nfrom torchjd.autogram import Engine\n\nshared_module = Sequential(Linear(10, 5), ReLU(), Linear(5, 3), ReLU())\ntask1_module = Linear(3, 1)\ntask2_module = Linear(3, 1)\nparams = [\n    *shared_module.parameters(),\n    *task1_module.parameters(),\n    *task2_module.parameters(),\n]\n\noptimizer = SGD(params, lr=0.1)\nmse = MSELoss(reduction=\"none\")\nweighting = Flattening(UPGradWeighting())\nengine = Engine(shared_module, batch_dim=0)\n\ninputs = torch.randn(8, 16, 10)  # 8 batches of 16 random input vectors of length 10\ntask1_targets = torch.randn(8, 16)  # 8 batches of 16 targets for the first task\ntask2_targets = torch.randn(8, 16)  # 8 batches of 16 targets for the second task\n\nfor input, target1, target2 in zip(inputs, task1_targets, task2_targets):\n    features = shared_module(input)  # shape: [16, 3]\n    out1 = task1_module(features).squeeze(1)  # shape: [16]\n    out2 = task2_module(features).squeeze(1)  # shape: [16]\n\n    # Compute the matrix of losses: one loss per element of the batch and per task\n    losses = torch.stack([mse(out1, target1), mse(out2, target2)], dim=1)  # shape: [16, 2]\n\n    # Compute the Gramian (inner products between pairs of gradients of the losses)\n    gramian = engine.compute_gramian(losses)  # shape: [16, 2, 2, 16]\n\n    # Obtain the weights that lead to no conflict between reweighted gradients\n    weights = weighting(gramian)  # shape: [16, 2]\n\n    optimizer.zero_grad()\n    # Do the standard backward pass, but weighted using the obtained weights\n    losses.backward(weights)\n    optimizer.step()\n```\n\n\u003e [!NOTE]\n\u003e Here, because the losses are a matrix instead of a simple vector, we compute a *generalized\n\u003e Gramian* and we extract weights from it using a\n\u003e 
[GeneralizedWeighting](https://torchjd.org/docs/aggregation/index.html#torchjd.aggregation.GeneralizedWeighting).\n\nMore usage examples can be found [here](https://torchjd.org/stable/examples/).\n\n## Supported Aggregators and Weightings\nTorchJD provides many existing aggregators from the literature, listed in the following table.\n\n\u003c!-- recommended aggregators first, then alphabetical order --\u003e\n| Aggregator                                                                                                 | Weighting                                                                                                              | Publication                                                                                                                                                          |\n|------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [UPGrad](https://torchjd.org/stable/docs/aggregation/upgrad.html#torchjd.aggregation.UPGrad) (recommended) | [UPGradWeighting](https://torchjd.org/stable/docs/aggregation/upgrad#torchjd.aggregation.UPGradWeighting)              | [Jacobian Descent For Multi-Objective Optimization](https://arxiv.org/pdf/2406.16232)                                                                                |\n| [AlignedMTL](https://torchjd.org/stable/docs/aggregation/aligned_mtl#torchjd.aggregation.AlignedMTL)       | [AlignedMTLWeighting](https://torchjd.org/stable/docs/aggregation/aligned_mtl#torchjd.aggregation.AlignedMTLWeighting) | [Independent Component Alignment for Multi-Task Learning](https://arxiv.org/pdf/2305.19000)                                                                
          |\n| [CAGrad](https://torchjd.org/stable/docs/aggregation/cagrad#torchjd.aggregation.CAGrad)                    | [CAGradWeighting](https://torchjd.org/stable/docs/aggregation/cagrad#torchjd.aggregation.CAGradWeighting)              | [Conflict-Averse Gradient Descent for Multi-task Learning](https://arxiv.org/pdf/2110.14048)                                                                         |\n| [ConFIG](https://torchjd.org/stable/docs/aggregation/config#torchjd.aggregation.ConFIG)                    | -                                                                                                                      | [ConFIG: Towards Conflict-free Training of Physics Informed Neural Networks](https://arxiv.org/pdf/2408.11104)                                                       |\n| [Constant](https://torchjd.org/stable/docs/aggregation/constant#torchjd.aggregation.Constant)              | [ConstantWeighting](https://torchjd.org/stable/docs/aggregation/constant#torchjd.aggregation.ConstantWeighting)        | -                                                                                                                                                                    |\n| [DualProj](https://torchjd.org/stable/docs/aggregation/dualproj#torchjd.aggregation.DualProj)              | [DualProjWeighting](https://torchjd.org/stable/docs/aggregation/dualproj#torchjd.aggregation.DualProjWeighting)        | [Gradient Episodic Memory for Continual Learning](https://arxiv.org/pdf/1706.08840)                                                                                  |\n| [GradDrop](https://torchjd.org/stable/docs/aggregation/graddrop#torchjd.aggregation.GradDrop)              | -                                                                                                                      | [Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign Dropout](https://arxiv.org/pdf/2010.06808)                                          
          |\n| [IMTLG](https://torchjd.org/stable/docs/aggregation/imtl_g#torchjd.aggregation.IMTLG)                      | [IMTLGWeighting](https://torchjd.org/stable/docs/aggregation/imtl_g#torchjd.aggregation.IMTLGWeighting)                | [Towards Impartial Multi-task Learning](https://discovery.ucl.ac.uk/id/eprint/10120667/)                                                                             |\n| [Krum](https://torchjd.org/stable/docs/aggregation/krum#torchjd.aggregation.Krum)                          | [KrumWeighting](https://torchjd.org/stable/docs/aggregation/krum#torchjd.aggregation.KrumWeighting)                    | [Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent](https://proceedings.neurips.cc/paper/2017/file/f4b9ec30ad9f68f89b29639786cb62ef-Paper.pdf)  |\n| [Mean](https://torchjd.org/stable/docs/aggregation/mean#torchjd.aggregation.Mean)                          | [MeanWeighting](https://torchjd.org/stable/docs/aggregation/mean#torchjd.aggregation.MeanWeighting)                    | -                                                                                                                                                                    |\n| [MGDA](https://torchjd.org/stable/docs/aggregation/mgda#torchjd.aggregation.MGDA)                          | [MGDAWeighting](https://torchjd.org/stable/docs/aggregation/mgda#torchjd.aggregation.MGDAWeighting)                    | [Multiple-gradient descent algorithm (MGDA) for multiobjective optimization](https://www.sciencedirect.com/science/article/pii/S1631073X12000738)                    |\n| [NashMTL](https://torchjd.org/stable/docs/aggregation/nash_mtl#torchjd.aggregation.NashMTL)                | -                                                                                                                      | [Multi-Task Learning as a Bargaining Game](https://arxiv.org/pdf/2202.01017)                                                                               
          |\n| [PCGrad](https://torchjd.org/stable/docs/aggregation/pcgrad#torchjd.aggregation.PCGrad)                    | [PCGradWeighting](https://torchjd.org/stable/docs/aggregation/pcgrad#torchjd.aggregation.PCGradWeighting)              | [Gradient Surgery for Multi-Task Learning](https://arxiv.org/pdf/2001.06782)                                                                                         |\n| [Random](https://torchjd.org/stable/docs/aggregation/random#torchjd.aggregation.Random)                    | [RandomWeighting](https://torchjd.org/stable/docs/aggregation/random#torchjd.aggregation.RandomWeighting)              | [Reasonable Effectiveness of Random Weighting: A Litmus Test for Multi-Task Learning](https://arxiv.org/pdf/2111.10603)                                              |\n| [Sum](https://torchjd.org/stable/docs/aggregation/sum#torchjd.aggregation.Sum)                             | [SumWeighting](https://torchjd.org/stable/docs/aggregation/sum#torchjd.aggregation.SumWeighting)                       | -                                                                                                                                                                    |\n| [Trimmed Mean](https://torchjd.org/stable/docs/aggregation/trimmed_mean#torchjd.aggregation.TrimmedMean)   | -                                                                                                                      | [Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates](https://proceedings.mlr.press/v80/yin18a/yin18a.pdf)                                      |\n\n## Contribution\nPlease read the [Contribution page](CONTRIBUTING.md).\n\n## Citation\nIf you use TorchJD for your research, please cite:\n```\n@article{jacobian_descent,\n  title={Jacobian Descent For Multi-Objective Optimization},\n  author={Quinton, Pierre and Rey, Valérian},\n  journal={arXiv preprint arXiv:2406.16232},\n  
year={2024}\n}\n```\n","funding_links":[],"categories":["Codebase","Benchmarks \u0026 Code"],"sub_categories":["Recommendation","Image Classification"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FTorchJD%2Ftorchjd","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FTorchJD%2Ftorchjd","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FTorchJD%2Ftorchjd/lists"}