# TorchFix - a linter for PyTorch-using code with autofix support

[![PyPI](https://img.shields.io/pypi/v/torchfix.svg)](https://pypi.org/project/torchfix/)

TorchFix is a static analysis tool for Python code - a linter with autofix capabilities -
for users of PyTorch. It can be used to find and fix issues such as use of deprecated
PyTorch functions and non-public symbols, and to adopt PyTorch best practices in general.

TorchFix is built upon https://github.com/Instagram/LibCST - a library to manipulate
Python concrete syntax trees.
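As a rough illustration of the kind of syntax-tree-level check a linter like this performs, here is a minimal sketch using Python's standard-library `ast` module; TorchFix itself is built on LibCST and its real rule implementations differ, so treat this only as a toy model of the idea:

```python
import ast

# A subset of removed torch functions, for illustration only.
REMOVED = {("torch", "solve"), ("torch", "symeig")}

def find_removed_calls(source: str) -> list:
    """Return (lineno, qualified_name) pairs for calls to removed functions."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and isinstance(node.func.value, ast.Name)
            and (node.func.value.id, node.func.attr) in REMOVED
        ):
            findings.append((node.lineno, f"{node.func.value.id}.{node.func.attr}"))
    return findings

print(find_removed_calls("import torch\nX = torch.solve(B, A).solution\n"))
# [(2, 'torch.solve')]
```

A concrete-syntax-tree library is used instead of `ast` in the real tool because it preserves formatting and comments, which is what makes safe automatic rewrites possible.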
LibCST enables "codemods" (autofixes) in addition to
reporting issues.

TorchFix can be used as a Flake8 plugin (linting only) or as a standalone
program (with autofix available for a subset of the lint violations).

> [!WARNING]
> Currently TorchFix is in a **beta version** stage, so there are still many rough
> edges, and things can and will change.

## Installation

To install the latest code from GitHub, clone/download
https://github.com/pytorch-labs/torchfix and run `pip install .`
inside the directory.

To install a release version from PyPI, run `pip install torchfix`.

## Usage

After installation, TorchFix is available as a Flake8 plugin, so running
Flake8 normally will run the TorchFix linter.

To see only TorchFix warnings without the rest of the Flake8 linters, run
`flake8 --isolated --select=TOR0,TOR1,TOR2`.

TorchFix can also be run as a standalone program: `torchfix .`
Add the `--fix` parameter to try to autofix some of the issues (the files will be overwritten!).
To see additional debug info, add the `--show-stderr` parameter.

> [!CAUTION]
> Please keep in mind that autofix is a best-effort mechanism.
> Given the dynamic nature of Python,
> and especially the beta version status of TorchFix, it's very difficult to have
> certainty when making changes to code, even for seemingly trivial fixes.

Warnings for issues with codes starting with TOR0, TOR1, and TOR2 are enabled by default.
Warnings with other codes may be too noisy, so they are not enabled by default.
To enable them, use the standard Flake8 configuration options for the plugin mode, or use
`torchfix --select=ALL .` for the standalone mode.

## Reporting problems

If you encounter a bug or some other problem with TorchFix, please file an issue on
https://github.com/pytorch-labs/torchfix/issues.

## Rule Code Assignment Policy

New rule codes are assigned incrementally across the following categories:

* **TOR0XX, TOR1XX**: General-purpose `torch` functionality.
* **TOR2XX**: Domain-specific rules, such as TorchVision.
* **TOR4XX**: Noisy rules that are disabled by default.
* **TOR9XX**: Internal rules specific to the `pytorch/pytorch` repo; other users should not use these.

TOR0, TOR1, and TOR2 are enabled by default.

## Rules

### TOR001 Use of removed function

#### torch.solve

This function was deprecated in PyTorch 1.9 and is now removed.

`torch.solve` is deprecated in favor of `torch.linalg.solve`.
`torch.linalg.solve` has its arguments reversed and does not return the LU factorization.

To get the LU factorization, see `torch.lu`, which can be used with `torch.lu_solve` or `torch.lu_unpack`.

`X = torch.solve(B, A).solution` should be replaced with `X = torch.linalg.solve(A, B)`.

#### torch.symeig

This function was deprecated in PyTorch 1.9 and is now removed.

`torch.symeig` is deprecated in favor of `torch.linalg.eigh`.

The default behavior has changed from using the upper triangular portion of the matrix to using the lower triangular portion.

```python
L, _ = torch.symeig(A, upper=upper)
```

should be replaced
with

```python
L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L')
```

and

```python
L, V = torch.symeig(A, eigenvectors=True)
```

should be replaced with

```python
L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L')
```

### TOR002 Likely typo `require_grad` in assignment. Did you mean `requires_grad`?

This is a common misspelling that can lead to silent performance issues: the assignment typically
just creates a new, unused attribute instead of setting `requires_grad`.

### TOR003 Please pass `use_reentrant` explicitly to `checkpoint`

The default value of the `use_reentrant` parameter in `torch.utils.checkpoint` is being changed
from `True` to `False`. In the meantime, the value needs to be passed explicitly.

See this [forum post](https://dev-discuss.pytorch.org/t/bc-breaking-update-to-torch-utils-checkpoint-not-passing-in-use-reentrant-flag-will-raise-an-error/1745)
for details.

### TOR004 Import of removed function

See `TOR001`.

### TOR101 Use of deprecated function

#### torch.nn.utils.weight_norm

This function is deprecated. Use `torch.nn.utils.parametrizations.weight_norm`,
which uses the modern parametrization API. The new `weight_norm` is compatible
with the `state_dict` generated by the old `weight_norm`.

Migration guide:

* The magnitude (``weight_g``) and direction (``weight_v``) are now expressed
    as ``parametrizations.weight.original0`` and ``parametrizations.weight.original1``
    respectively.

* To remove the weight normalization reparametrization, use
    `torch.nn.utils.parametrize.remove_parametrizations`.

* The weight is no longer recomputed once at module forward; instead, it is
    recomputed on every access. To restore the old behavior, use
    `torch.nn.utils.parametrize.cached` before invoking the module
    in question.

#### torch.backends.cuda.sdp_kernel

This function is deprecated.
Use the `torch.nn.attention.sdpa_kernel` context manager instead.

Migration guide:
Each boolean input parameter of `sdp_kernel` (defaulting to true unless specified) corresponds to an `SDPBackend`. If the input parameter is true, the corresponding backend should be added to the input list of `sdpa_kernel`.

#### torch.chain_matmul

This function is deprecated in favor of `torch.linalg.multi_dot`.

Migration guide:
`multi_dot` accepts a list of two or more tensors, whereas `chain_matmul` accepted multiple tensors as separate input arguments. To migrate, wrap the tensors passed as arguments to `chain_matmul` in a list for `multi_dot`.

Example: Replace `torch.chain_matmul(a, b, c)` with `torch.linalg.multi_dot([a, b, c])`.

#### torch.cholesky

`torch.cholesky()` is deprecated in favor of `torch.linalg.cholesky()`.

Migration guide:
* `L = torch.cholesky(A)` should be replaced with `L = torch.linalg.cholesky(A)`.
* `L = torch.cholesky(A, upper=True)` should be replaced with `L = torch.linalg.cholesky(A).mH`.

#### torch.qr

`torch.qr()` is deprecated in favor of `torch.linalg.qr()`.

Migration guide:
* The usage `Q, R = torch.qr(A)` should be replaced with `Q, R = torch.linalg.qr(A)`.
* The boolean parameter `some` of `torch.qr` is replaced with a string parameter `mode` in `torch.linalg.qr`. The corresponding change in usage is from `Q, R = torch.qr(A, some=False)` to `Q, R = torch.linalg.qr(A, mode="complete")`.

#### torch.range

The function `torch.range()` is deprecated because its behavior is incompatible with Python's builtin `range`. Instead, use `torch.arange()`, which produces values in `[start, end)`.

Migration guide:
* `torch.range(start, end)` produces values in `[start, end]`, but `torch.arange(start, end)` produces values in `[start, end)`.
For step size of 1, migrate usage from `torch.range(start, end, 1)` to `torch.arange(start, end+1, 1)`.

### TOR102 `torch.load` without `weights_only` parameter is unsafe

Explicitly set `weights_only=False` only if you trust the data you load and full pickle functionality is needed; otherwise, set `weights_only=True`.

### TOR103 Import of deprecated function

See `TOR101`.

## License

TorchFix is BSD licensed, as found in the LICENSE file.