{"id":13487075,"url":"https://github.com/catalyst-team/catalyst","last_synced_at":"2025-05-13T23:03:57.808Z","repository":{"id":37664502,"uuid":"145385156","full_name":"catalyst-team/catalyst","owner":"catalyst-team","description":"Accelerated deep learning R\u0026D","archived":false,"fork":false,"pushed_at":"2024-03-20T16:17:12.000Z","size":55187,"stargazers_count":3350,"open_issues_count":3,"forks_count":393,"subscribers_count":44,"default_branch":"master","last_synced_at":"2025-05-08T04:16:06.657Z","etag":null,"topics":["computer-vision","deep-learning","distributed-computing","image-classification","image-processing","image-segmentation","information-retrieval","infrastructure","machine-learning","metric-learning","natural-language-processing","object-detection","python","pytorch","recommender-system","reinforcement-learning","reproducibility","research","text-classification","text-segmentation"],"latest_commit_sha":null,"homepage":"https://catalyst-team.com","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/catalyst-team.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION","codeowners":".github/CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null},"funding":{"github":null,"patreon":"catalyst_team","open_collective":"catalyst","ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"custom":null}},"created_at":"2018-08-20T07:56:13.000Z","updated_at":"2025-05-08T03:48:17.000Z","dependencies_parsed_at":"2024-04-19T07:16:19.425Z","dependency_job_id":null,"html_url":"https://github.com/catalyst-team/catalyst","commit_stats":{"total_commits":1474,"total_committers":114,"mean_commits":"12.929824561403509","dds":0.5352781546811398,"last_synced_commit":"e99f90655d0efcf22559a46e928f0f98c9807ebf"},"previous_names":[],"tags_count":109,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/catalyst-team%2Fcatalyst","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/catalyst-team%2Fcatalyst/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/catalyst-team%2Fcatalyst/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/catalyst-team%2Fcatalyst/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/catalyst-team","download_url":"https://codeload.github.com/catalyst-team/catalyst/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253879782,"owners_count":21978031,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","deep-learning","distributed-computing","image-classification","image-processing","image-segmentation","information-retrieval","infrastructure","machine-learning","metric-learning","natural-language-processing","object-detection","python","pytorch","recommender-system","reinforcement-learning","reproducibility","research","text-classification","text-segmentation"],"created_at":"2024-07-31T18:00:55.109Z","updated_at":"2025-05-13T23:03:57.724Z","avatar_url":"https://github.com/catalyst-team.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\n[![Catalyst logo](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/catalyst_logo.png)](https://github.com/catalyst-team/catalyst)\n\n**Accelerated Deep Learning R\u0026D**\n\n[![CodeFactor](https://www.codefactor.io/repository/github/catalyst-team/catalyst/badge)](https://www.codefactor.io/repository/github/catalyst-team/catalyst)\n[![Pipi version](https://img.shields.io/pypi/v/catalyst.svg)](https://pypi.org/project/catalyst/)\n[![Docs](https://img.shields.io/badge/dynamic/json.svg?label=docs\u0026url=https%3A%2F%2Fpypi.org%2Fpypi%2Fcatalyst%2Fjson\u0026query=%24.info.version\u0026colorB=brightgreen\u0026prefix=v)](https://catalyst-team.github.io/catalyst/index.html)\n[![Docker](https://img.shields.io/badge/docker-hub-blue)](https://hub.docker.com/r/catalystteam/catalyst/tags)\n[![PyPI Status](https://pepy.tech/badge/catalyst)](https://pepy.tech/project/catalyst)\n\n[![Twitter](https://img.shields.io/badge/news-twitter-499feb)](https://twitter.com/CatalystTeam)\n[![Telegram](https://img.shields.io/badge/channel-telegram-blue)](https://t.me/catalyst_team)\n[![Slack](https://img.shields.io/badge/Catalyst-slack-success)](https://join.slack.com/t/catalyst-team-devs/shared_invite/zt-d9miirnn-z86oKDzFMKlMG4fgFdZafw)\n[![Github contributors](https://img.shields.io/github/contributors/catalyst-team/catalyst.svg?logo=github\u0026logoColor=white)](https://github.com/catalyst-team/catalyst/graphs/contributors)\n\n![codestyle](https://github.com/catalyst-team/catalyst/workflows/codestyle/badge.svg?branch=master\u0026event=push)\n![docs](https://github.com/catalyst-team/catalyst/workflows/docs/badge.svg?branch=master\u0026event=push)\n![catalyst](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master\u0026event=push)\n![integrations](https://github.com/catalyst-team/catalyst/workflows/integrations/badge.svg?branch=master\u0026event=push)\n\n[![python](https://img.shields.io/badge/python_3.6-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master\u0026event=push)\n[![python](https://img.shields.io/badge/python_3.7-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master\u0026event=push)\n[![python](https://img.shields.io/badge/python_3.8-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master\u0026event=push)\n\n[![os](https://img.shields.io/badge/Linux-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master\u0026event=push)\n[![os](https://img.shields.io/badge/OSX-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master\u0026event=push)\n[![os](https://img.shields.io/badge/WSL-passing-success)](https://github.com/catalyst-team/catalyst/workflows/catalyst/badge.svg?branch=master\u0026event=push)\n\u003c/div\u003e\n\nCatalyst is a PyTorch framework for Deep Learning Research and Development.\nIt focuses on reproducibility, rapid experimentation, and codebase reuse\nso you can create something new rather than write yet another train loop.\n\u003cbr/\u003e Break the cycle – use the Catalyst!\n\n- [Project Manifest](https://github.com/catalyst-team/catalyst/blob/master/MANIFEST.md)\n- [Framework architecture](https://miro.com/app/board/o9J_lxBO-2k=/)\n- [Catalyst at AI Landscape](https://landscape.lfai.foundation/selected=catalyst)\n- Part of the [PyTorch Ecosystem](https://pytorch.org/ecosystem/)\n\n\u003cdetails\u003e\n\u003csummary\u003eCatalyst at PyTorch Ecosystem Day 2021\u003c/summary\u003e\n\u003cp\u003e\n\n[![Catalyst poster](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/Catalyst-PTED21.png)](https://github.com/catalyst-team/catalyst)\n\n\u003c/p\u003e\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eCatalyst at PyTorch Developer Day 2021\u003c/summary\u003e\n\u003cp\u003e\n\n[![Catalyst poster](https://raw.githubusercontent.com/catalyst-team/catalyst-pics/master/pics/Catalyst-PTDD21.png)](https://github.com/catalyst-team/catalyst)\n\n\u003c/p\u003e\n\u003c/details\u003e\n\n----\n\n## Getting started\n\n```bash\npip install -U catalyst\n```\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, utils\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.02)\nloaders = {\n    \"train\": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32),\n    \"valid\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n}\n\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\n\n# model training\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    callbacks=[\n        dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", topk=(1, 3, 5)),\n        dl.PrecisionRecallF1SupportCallback(input_key=\"logits\", target_key=\"targets\"),\n    ],\n    logdir=\"./logs\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    verbose=True,\n)\n\n# model evaluation\nmetrics = runner.evaluate_loader(\n    loader=loaders[\"valid\"],\n    callbacks=[dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", topk=(1, 3, 5))],\n)\n\n# model inference\nfor prediction in runner.predict_loader(loader=loaders[\"valid\"]):\n    assert prediction[\"logits\"].detach().cpu().numpy().shape[-1] == 10\n\n# model post-processing\nmodel = runner.model.cpu()\nbatch = next(iter(loaders[\"valid\"]))[0]\nutils.trace_model(model=model, batch=batch)\nutils.quantize_model(model=model)\nutils.prune_model(model=model, pruning_fn=\"l1_unstructured\", amount=0.8)\nutils.onnx_export(model=model, batch=batch, file=\"./logs/mnist.onnx\", verbose=True)\n```\n\n### Step-by-step Guide\n1. Start with [Catalyst — A PyTorch Framework for Accelerated Deep Learning R\u0026D](https://medium.com/pytorch/catalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link\u0026sk=885b4409aecab505db0a63b06f19dcef) introduction.\n1. Try [notebook tutorials](#minimal-examples) or check [minimal examples](#minimal-examples) for first deep dive.\n1. Read [blog posts](https://catalyst-team.com/post/) with use-cases and guides.\n1. Learn machine learning with our [\"Deep Learning with Catalyst\" course](https://catalyst-team.com/#course).\n1. And finally, [join our slack](https://join.slack.com/t/catalyst-team-core/shared_invite/zt-d9miirnn-z86oKDzFMKlMG4fgFdZafw) if you want to chat with the team and contributors.\n\n\n## Table of Contents\n- [Getting started](#getting-started)\n  - [Step-by-step Guide](#step-by-step-guide)\n- [Table of Contents](#table-of-contents)\n- [Overview](#overview)\n  - [Installation](#installation)\n  - [Documentation](#documentation)\n  - [Minimal Examples](#minimal-examples)\n  - [Tests](#tests)\n  - [Blog Posts](#blog-posts)\n  - [Talks](#talks)\n- [Community](#community)\n  - [Contribution Guide](#contribution-guide)\n  - [User Feedback](#user-feedback)\n  - [Acknowledgments](#acknowledgments)\n  - [Trusted by](#trusted-by)\n  - [Citation](#citation)\n\n\n## Overview\nCatalyst helps you implement compact\nbut full-featured Deep Learning pipelines with just a few lines of code.\nYou get a training loop with metrics, early-stopping, model checkpointing,\nand other features without the boilerplate.\n\n\n### Installation\n\nGeneric installation:\n```bash\npip install -U catalyst\n```\n\n\u003cdetails\u003e\n\u003csummary\u003eSpecialized versions, extra requirements might apply\u003c/summary\u003e\n\u003cp\u003e\n\n```bash\npip install catalyst[ml]         # installs ML-based Catalyst\npip install catalyst[cv]         # installs CV-based Catalyst\n# master version installation\npip install git+https://github.com/catalyst-team/catalyst@master --upgrade\n# all available extensions are listed here:\n# https://github.com/catalyst-team/catalyst/blob/master/setup.py\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\nCatalyst is compatible with: Python 3.7+. PyTorch 1.4+. \u003cbr/\u003e\nTested on Ubuntu 16.04/18.04/20.04, macOS 10.15, Windows 10, and Windows Subsystem for Linux.\n\n### Documentation\n- [master](https://catalyst-team.github.io/catalyst/)\n- [22.02](https://catalyst-team.github.io/catalyst/v22.02/index.html)\n\n- \u003cdetails\u003e\n  \u003csummary\u003e2021 edition\u003c/summary\u003e\n  \u003cp\u003e\n\n    - [21.12](https://catalyst-team.github.io/catalyst/v21.12/index.html)\n    - [21.11](https://catalyst-team.github.io/catalyst/v21.11/index.html)\n    - [21.10](https://catalyst-team.github.io/catalyst/v21.10/index.html)\n    - [21.09](https://catalyst-team.github.io/catalyst/v21.09/index.html)\n    - [21.08](https://catalyst-team.github.io/catalyst/v21.08/index.html)\n    - [21.07](https://catalyst-team.github.io/catalyst/v21.07/index.html)\n    - [21.06](https://catalyst-team.github.io/catalyst/v21.06/index.html)\n    - [21.05](https://catalyst-team.github.io/catalyst/v21.05/index.html) ([Catalyst — A PyTorch Framework for Accelerated Deep Learning R\u0026D](https://medium.com/pytorch/catalyst-a-pytorch-framework-for-accelerated-deep-learning-r-d-ad9621e4ca88?source=friends_link\u0026sk=885b4409aecab505db0a63b06f19dcef))\n    - [21.04/21.04.1](https://catalyst-team.github.io/catalyst/v21.04/index.html), [21.04.2](https://catalyst-team.github.io/catalyst/v21.04.2/index.html)\n    - [21.03](https://catalyst-team.github.io/catalyst/v21.03/index.html), [21.03.1/21.03.2](https://catalyst-team.github.io/catalyst/v21.03.1/index.html)\n\n  \u003c/p\u003e\n  \u003c/details\u003e\n- \u003cdetails\u003e\n  \u003csummary\u003e2020 edition\u003c/summary\u003e\n  \u003cp\u003e\n\n    - [20.12](https://catalyst-team.github.io/catalyst/v20.12/index.html)\n    - [20.11](https://catalyst-team.github.io/catalyst/v20.11/index.html)\n    - [20.10](https://catalyst-team.github.io/catalyst/v20.10/index.html)\n    - [20.09](https://catalyst-team.github.io/catalyst/v20.09/index.html)\n    - [20.08.2](https://catalyst-team.github.io/catalyst/v20.08.2/index.html)\n    - [20.07](https://catalyst-team.github.io/catalyst/v20.07/index.html) ([dev blog: 20.07 release](https://medium.com/pytorch/catalyst-dev-blog-20-07-release-fb489cd23e14?source=friends_link\u0026sk=7ab92169658fe9a9e1c44068f28cc36c))\n    - [20.06](https://catalyst-team.github.io/catalyst/v20.06/index.html)\n    - [20.05](https://catalyst-team.github.io/catalyst/v20.05/index.html), [20.05.1](https://catalyst-team.github.io/catalyst/v20.05.1/index.html)\n    - [20.04](https://catalyst-team.github.io/catalyst/v20.04/index.html), [20.04.1](https://catalyst-team.github.io/catalyst/v20.04.1/index.html), [20.04.2](https://catalyst-team.github.io/catalyst/v20.04.2/index.html)\n\n  \u003c/p\u003e\n  \u003c/details\u003e\n\n\n### Minimal Examples\n\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customizing_what_happens_in_train.ipynb) Introduction tutorial \"[Customizing what happens in `train`](./examples/notebooks/customizing_what_happens_in_train.ipynb)\"\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/customization_tutorial.ipynb) Demo with [customization examples](./examples/notebooks/customization_tutorial.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/reinforcement_learning.ipynb) [Reinforcement Learning with Catalyst](./examples/notebooks/reinforcement_learning.ipynb)\n- [And more](./examples/)\n\n\u003cdetails\u003e\n\u003csummary\u003eCustomRunner – PyTorch for-loop decomposition\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, metrics\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\noptimizer = optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nclass CustomRunner(dl.Runner):\n    def predict_batch(self, batch):\n        # model inference step\n        return self.model(batch[0].to(self.engine.device))\n\n    def on_loader_start(self, runner):\n        super().on_loader_start(runner)\n        self.meters = {\n            key: metrics.AdditiveMetric(compute_on_call=False)\n            for key in [\"loss\", \"accuracy01\", \"accuracy03\"]\n        }\n\n    def handle_batch(self, batch):\n        # model train/valid step\n        # unpack the batch\n        x, y = batch\n        # run model forward pass\n        logits = self.model(x)\n        # compute the loss\n        loss = F.cross_entropy(logits, y)\n        # compute the metrics\n        accuracy01, accuracy03 = metrics.accuracy(logits, y, topk=(1, 3))\n        # log metrics\n        self.batch_metrics.update(\n            {\"loss\": loss, \"accuracy01\": accuracy01, \"accuracy03\": accuracy03}\n        )\n        for key in [\"loss\", \"accuracy01\", \"accuracy03\"]:\n            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)\n        # run model backward pass\n        if self.is_train_loader:\n            self.engine.backward(loss)\n            self.optimizer.step()\n            self.optimizer.zero_grad()\n\n    def on_loader_end(self, runner):\n        for key in [\"loss\", \"accuracy01\", \"accuracy03\"]:\n            self.loader_metrics[key] = self.meters[key].compute()[0]\n        super().on_loader_end(runner)\n\nrunner = CustomRunner()\n# model training\nrunner.train(\n    model=model,\n    optimizer=optimizer,\n    loaders=loaders,\n    logdir=\"./logs\",\n    num_epochs=5,\n    verbose=True,\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n)\n# model inference\nfor logits in runner.predict_loader(loader=loaders[\"valid\"]):\n    assert logits.detach().cpu().numpy().shape[-1] == 10\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eML - linear regression\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# data\nnum_samples, num_features = int(1e4), int(1e1)\nX, y = torch.rand(num_samples, num_features), torch.rand(num_samples)\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, 1)\ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])\n\n# model training\nrunner = dl.SupervisedRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\"./logdir\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    num_epochs=8,\n    verbose=True,\n)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eML - multiclass classification\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# sample data\nnum_samples, num_features, num_classes = int(1e4), int(1e1), 4\nX = torch.rand(num_samples, num_features)\ny = (torch.rand(num_samples,) * num_classes).to(torch.int64)\n\n# pytorch loaders\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, num_classes)\ncriterion = torch.nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# model training\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\"./logdir\",\n    num_epochs=3,\n    valid_loader=\"valid\",\n    valid_metric=\"accuracy03\",\n    minimize_valid_metric=False,\n    verbose=True,\n    callbacks=[\n        dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", num_classes=num_classes),\n        # uncomment for extra metrics:\n        # dl.PrecisionRecallF1SupportCallback(\n        #     input_key=\"logits\", target_key=\"targets\", num_classes=num_classes\n        # ),\n        # dl.AUCCallback(input_key=\"logits\", target_key=\"targets\"),\n        # catalyst[ml] required ``pip install catalyst[ml]``\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits\", target_key=\"targets\", num_classes=num_classes\n        # ),\n    ],\n)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eML - multilabel classification\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# sample data\nnum_samples, num_features, num_classes = int(1e4), int(1e1), 4\nX = torch.rand(num_samples, num_features)\ny = (torch.rand(num_samples, num_classes) \u003e 0.5).to(torch.float32)\n\n# pytorch loaders\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, num_classes)\ncriterion = torch.nn.BCEWithLogitsLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# model training\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    logdir=\"./logdir\",\n    num_epochs=3,\n    valid_loader=\"valid\",\n    valid_metric=\"accuracy01\",\n    minimize_valid_metric=False,\n    verbose=True,\n    callbacks=[\n        dl.BatchTransformCallback(\n            transform=torch.sigmoid,\n            scope=\"on_batch_end\",\n            input_key=\"logits\",\n            output_key=\"scores\"\n        ),\n        dl.AUCCallback(input_key=\"scores\", target_key=\"targets\"),\n        # uncomment for extra metrics:\n        # dl.MultilabelAccuracyCallback(input_key=\"scores\", target_key=\"targets\", threshold=0.5),\n        # dl.MultilabelPrecisionRecallF1SupportCallback(\n        #     input_key=\"scores\", target_key=\"targets\", threshold=0.5\n        # ),\n    ]\n)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eML - multihead classification\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport torch\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# sample data\nnum_samples, num_features, num_classes1, num_classes2 = int(1e4), int(1e1), 4, 10\nX = torch.rand(num_samples, num_features)\ny1 = (torch.rand(num_samples,) * num_classes1).to(torch.int64)\ny2 = (torch.rand(num_samples,) * num_classes2).to(torch.int64)\n\n# pytorch loaders\ndataset = TensorDataset(X, y1, y2)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\nclass CustomModule(nn.Module):\n    def __init__(self, in_features: int, out_features1: int, out_features2: int):\n        super().__init__()\n        self.shared = nn.Linear(in_features, 128)\n        self.head1 = nn.Linear(128, out_features1)\n        self.head2 = nn.Linear(128, out_features2)\n\n    def forward(self, x):\n        x = self.shared(x)\n        y1 = self.head1(x)\n        y2 = self.head2(x)\n        return y1, y2\n\n# model, criterion, optimizer, scheduler\nmodel = CustomModule(num_features, num_classes1, num_classes2)\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters())\nscheduler = optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\nclass CustomRunner(dl.Runner):\n    def handle_batch(self, batch):\n        x, y1, y2 = batch\n        y1_hat, y2_hat = self.model(x)\n        self.batch = {\n            \"features\": x,\n            \"logits1\": y1_hat,\n            \"logits2\": y2_hat,\n            \"targets1\": y1,\n            \"targets2\": y2,\n        }\n\n# model training\nrunner = CustomRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    num_epochs=3,\n    verbose=True,\n    callbacks=[\n        dl.CriterionCallback(metric_key=\"loss1\", input_key=\"logits1\", target_key=\"targets1\"),\n        dl.CriterionCallback(metric_key=\"loss2\", input_key=\"logits2\", target_key=\"targets2\"),\n        dl.MetricAggregationCallback(metric_key=\"loss\", metrics=[\"loss1\", \"loss2\"], mode=\"mean\"),\n        dl.BackwardCallback(metric_key=\"loss\"),\n        dl.OptimizerCallback(metric_key=\"loss\"),\n        dl.SchedulerCallback(),\n        dl.AccuracyCallback(\n            input_key=\"logits1\", target_key=\"targets1\", num_classes=num_classes1, prefix=\"one_\"\n        ),\n        dl.AccuracyCallback(\n            input_key=\"logits2\", target_key=\"targets2\", num_classes=num_classes2, prefix=\"two_\"\n        ),\n        # catalyst[ml] required ``pip install catalyst[ml]``\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits1\", target_key=\"targets1\", num_classes=num_classes1, prefix=\"one_cm\"\n        # ),\n        # dl.ConfusionMatrixCallback(\n        #     input_key=\"logits2\", target_key=\"targets2\", num_classes=num_classes2, prefix=\"two_cm\"\n        # ),\n        dl.CheckpointCallback(\n            logdir=\"./logs/one\",\n            loader_key=\"valid\", metric_key=\"one_accuracy01\", minimize=False, topk=1\n        ),\n        dl.CheckpointCallback(\n            logdir=\"./logs/two\",\n            loader_key=\"valid\", metric_key=\"two_accuracy03\", minimize=False, topk=3\n        ),\n    ],\n    loggers={\"console\": dl.ConsoleLogger(), \"tb\": dl.TensorboardLogger(\"./logs/tb\")},\n)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eML – RecSys\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst import dl\n\n# sample data\nnum_users, num_features, num_items = int(1e4), int(1e1), 10\nX = torch.rand(num_users, num_features)\ny = (torch.rand(num_users, num_items) \u003e 0.5).to(torch.float32)\n\n# pytorch loaders\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, num_items)\ncriterion = torch.nn.BCEWithLogitsLoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [2])\n\n# model training\nrunner = dl.SupervisedRunner(\n    input_key=\"features\", output_key=\"logits\", target_key=\"targets\", loss_key=\"loss\"\n)\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    scheduler=scheduler,\n    loaders=loaders,\n    num_epochs=3,\n    verbose=True,\n    callbacks=[\n        dl.BatchTransformCallback(\n            transform=torch.sigmoid,\n            scope=\"on_batch_end\",\n            input_key=\"logits\",\n            output_key=\"scores\"\n        ),\n        dl.CriterionCallback(input_key=\"logits\", target_key=\"targets\", metric_key=\"loss\"),\n        # uncomment for extra metrics:\n        # dl.AUCCallback(input_key=\"scores\", target_key=\"targets\"),\n        # dl.HitrateCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        # dl.MRRCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        # dl.MAPCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        # dl.NDCGCallback(input_key=\"scores\", target_key=\"targets\", topk=(1, 3, 5)),\n        dl.BackwardCallback(metric_key=\"loss\"),\n        dl.OptimizerCallback(metric_key=\"loss\"),\n        dl.SchedulerCallback(),\n        dl.CheckpointCallback(\n            logdir=\"./logs\", loader_key=\"valid\", metric_key=\"loss\", minimize=True\n        ),\n    ]\n)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eCV - MNIST classification\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport os\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\n\nmodel = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nrunner = dl.SupervisedRunner()\n# model training\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    logdir=\"./logs\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    verbose=True,\n# uncomment for extra metrics:\n#     callbacks=[\n#         dl.AccuracyCallback(input_key=\"logits\", target_key=\"targets\", num_classes=10),\n#         dl.PrecisionRecallF1SupportCallback(\n#             input_key=\"logits\", target_key=\"targets\", num_classes=10\n#         ),\n#         dl.AUCCallback(input_key=\"logits\", target_key=\"targets\"),\n#         # catalyst[ml] required ``pip install catalyst[ml]``\n#         dl.ConfusionMatrixCallback(\n#             input_key=\"logits\", target_key=\"targets\", num_classes=num_classes\n#         ),\n#     ]\n)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eCV - MNIST segmentation\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport os\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\nfrom catalyst.contrib.losses import IoULoss\n\n\nmodel = nn.Sequential(\n    nn.Conv2d(1, 1, 3, 1, 1), nn.ReLU(),\n    nn.Conv2d(1, 1, 3, 1, 1), nn.Sigmoid(),\n)\ncriterion = IoULoss()\noptimizer = torch.optim.Adam(model.parameters(), lr=0.02)\n\ntrain_data = MNIST(os.getcwd(), train=True)\nvalid_data = MNIST(os.getcwd(), train=False)\nloaders = {\n    \"train\": DataLoader(train_data, batch_size=32),\n    \"valid\": DataLoader(valid_data, batch_size=32),\n}\n\nclass CustomRunner(dl.SupervisedRunner):\n    def handle_batch(self, batch):\n        x = batch[self._input_key]\n        x_noise = (x + torch.rand_like(x)).clamp_(0, 1)\n        x_ = self.model(x_noise)\n        self.batch = {self._input_key: x, self._output_key: x_, self._target_key: x}\n\nrunner = CustomRunner(\n    input_key=\"features\", output_key=\"scores\", target_key=\"targets\", loss_key=\"loss\"\n)\n# model training\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    num_epochs=1,\n    callbacks=[\n        dl.IOUCallback(input_key=\"scores\", target_key=\"targets\"),\n        dl.DiceCallback(input_key=\"scores\", target_key=\"targets\"),\n        dl.TrevskyCallback(input_key=\"scores\", target_key=\"targets\", alpha=0.2),\n    ],\n    logdir=\"./logdir\",\n    valid_loader=\"valid\",\n    valid_metric=\"loss\",\n    minimize_valid_metric=True,\n    verbose=True,\n)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eCV - MNIST metric learning\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport os\nfrom torch.optim import Adam\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.data import HardTripletsSampler\nfrom catalyst.contrib.datasets import MnistMLDataset, MnistQGDataset\nfrom catalyst.contrib.losses import TripletMarginLossWithSampler\nfrom catalyst.contrib.models import MnistSimpleNet\nfrom catalyst.data.sampler import BatchBalanceClassSampler\n\n\n# 1. train and valid loaders\ntrain_dataset = MnistMLDataset(root=os.getcwd())\nsampler = BatchBalanceClassSampler(\n    labels=train_dataset.get_labels(), num_classes=5, num_samples=10, num_batches=10\n)\ntrain_loader = DataLoader(dataset=train_dataset, batch_sampler=sampler)\n\nvalid_dataset = MnistQGDataset(root=os.getcwd(), gallery_fraq=0.2)\nvalid_loader = DataLoader(dataset=valid_dataset, batch_size=1024)\n\n# 2. model and optimizer\nmodel = MnistSimpleNet(out_features=16)\noptimizer = Adam(model.parameters(), lr=0.001)\n\n# 3. criterion with triplets sampling\nsampler_inbatch = HardTripletsSampler(norm_required=False)\ncriterion = TripletMarginLossWithSampler(margin=0.5, sampler_inbatch=sampler_inbatch)\n\n# 4. training with catalyst Runner\nclass CustomRunner(dl.SupervisedRunner):\n    def handle_batch(self, batch) -\u003e None:\n        if self.is_train_loader:\n            images, targets = batch[\"features\"].float(), batch[\"targets\"].long()\n            features = self.model(images)\n            self.batch = {\"embeddings\": features, \"targets\": targets,}\n        else:\n            images, targets, is_query = \\\n                batch[\"features\"].float(), batch[\"targets\"].long(), batch[\"is_query\"].bool()\n            features = self.model(images)\n            self.batch = {\"embeddings\": features, \"targets\": targets, \"is_query\": is_query}\n\ncallbacks = [\n    dl.ControlFlowCallbackWrapper(\n        dl.CriterionCallback(input_key=\"embeddings\", target_key=\"targets\", metric_key=\"loss\"),\n        loaders=\"train\",\n    ),\n    dl.ControlFlowCallbackWrapper(\n        dl.CMCScoreCallback(\n            embeddings_key=\"embeddings\",\n            labels_key=\"targets\",\n            is_query_key=\"is_query\",\n            topk=[1],\n        ),\n        loaders=\"valid\",\n    ),\n    dl.PeriodicLoaderCallback(\n        valid_loader_key=\"valid\", valid_metric_key=\"cmc01\", minimize=False, valid=2\n    ),\n]\n\nrunner = CustomRunner(input_key=\"features\", output_key=\"embeddings\")\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    callbacks=callbacks,\n    loaders={\"train\": train_loader, \"valid\": valid_loader},\n    verbose=False,\n    logdir=\"./logs\",\n    valid_loader=\"valid\",\n    valid_metric=\"cmc01\",\n    minimize_valid_metric=False,\n    num_epochs=10,\n)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eCV - MNIST GAN\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport os\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\nfrom catalyst.contrib.layers import GlobalMaxPool2d, Lambda\n\nlatent_dim = 128\ngenerator = nn.Sequential(\n    # We want to generate 128 coefficients to reshape into a 7x7x128 map\n    nn.Linear(128, 128 * 7 * 7),\n    nn.LeakyReLU(0.2, inplace=True),\n    Lambda(lambda x: x.view(x.size(0), 128, 7, 7)),\n    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    nn.Conv2d(128, 1, (7, 7), padding=3),\n    nn.Sigmoid(),\n)\ndiscriminator = nn.Sequential(\n    nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1),\n    nn.LeakyReLU(0.2, inplace=True),\n    GlobalMaxPool2d(),\n    nn.Flatten(),\n    nn.Linear(128, 1),\n)\n\nmodel = nn.ModuleDict({\"generator\": generator, \"discriminator\": discriminator})\ncriterion = {\"generator\": nn.BCEWithLogitsLoss(), \"discriminator\": nn.BCEWithLogitsLoss()}\noptimizer = {\n    \"generator\": torch.optim.Adam(generator.parameters(), lr=0.0003, betas=(0.5, 0.999)),\n    \"discriminator\": torch.optim.Adam(discriminator.parameters(), lr=0.0003, betas=(0.5, 0.999)),\n}\ntrain_data = MNIST(os.getcwd(), train=False)\nloaders = {\"train\": DataLoader(train_data, batch_size=32)}\n\nclass CustomRunner(dl.Runner):\n    def predict_batch(self, batch):\n        batch_size = 1\n        # Sample random points in the latent space\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n        # Decode them to fake images\n        generated_images = self.model[\"generator\"](random_latent_vectors).detach()\n        return generated_images\n\n    def handle_batch(self, batch):\n        real_images, _ = batch\n        batch_size = real_images.shape[0]\n\n        # Sample random points in the latent space\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n\n        # Decode them to fake images\n        generated_images = self.model[\"generator\"](random_latent_vectors).detach()\n        # Combine them with real images\n        combined_images = torch.cat([generated_images, real_images])\n\n        # Assemble labels discriminating real from fake images\n        labels = \\\n            torch.cat([torch.ones((batch_size, 1)), torch.zeros((batch_size, 1))]).to(self.engine.device)\n        # Add random noise to the labels - important trick!\n        labels += 0.05 * torch.rand(labels.shape).to(self.engine.device)\n\n        # Discriminator forward\n        combined_predictions = self.model[\"discriminator\"](combined_images)\n\n        # Sample random points in the latent space\n        random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.engine.device)\n        # Assemble labels that say \"all real images\"\n        misleading_labels = torch.zeros((batch_size, 1)).to(self.engine.device)\n\n        # Generator forward\n        generated_images = self.model[\"generator\"](random_latent_vectors)\n        generated_predictions = self.model[\"discriminator\"](generated_images)\n\n        self.batch = {\n            \"combined_predictions\": combined_predictions,\n            \"labels\": labels,\n            \"generated_predictions\": generated_predictions,\n            \"misleading_labels\": misleading_labels,\n        }\n\n\nrunner = CustomRunner()\nrunner.train(\n    model=model,\n    criterion=criterion,\n    optimizer=optimizer,\n    loaders=loaders,\n    callbacks=[\n        dl.CriterionCallback(\n            input_key=\"combined_predictions\",\n            target_key=\"labels\",\n            metric_key=\"loss_discriminator\",\n            criterion_key=\"discriminator\",\n        ),\n        dl.BackwardCallback(metric_key=\"loss_discriminator\"),\n        dl.OptimizerCallback(\n            optimizer_key=\"discriminator\",\n            metric_key=\"loss_discriminator\",\n        ),\n        dl.CriterionCallback(\n            input_key=\"generated_predictions\",\n            target_key=\"misleading_labels\",\n            metric_key=\"loss_generator\",\n            criterion_key=\"generator\",\n        ),\n        dl.BackwardCallback(metric_key=\"loss_generator\"),\n        dl.OptimizerCallback(\n            optimizer_key=\"generator\",\n            metric_key=\"loss_generator\",\n        ),\n    ],\n    valid_loader=\"train\",\n    valid_metric=\"loss_generator\",\n    minimize_valid_metric=True,\n    num_epochs=20,\n    verbose=True,\n    logdir=\"./logs_gan\",\n)\n\n# visualization (matplotlib required):\n# import matplotlib.pyplot as plt\n# %matplotlib inline\n# plt.imshow(runner.predict_batch(None)[0, 0].cpu().numpy())\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eCV - MNIST VAE\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport os\nimport torch\nfrom torch import nn, optim\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl, metrics\nfrom catalyst.contrib.datasets import MNIST\n\nLOG_SCALE_MAX = 2\nLOG_SCALE_MIN = -10\n\ndef normal_sample(loc, log_scale):\n    scale = torch.exp(0.5 * log_scale)\n    return loc + scale * torch.randn_like(scale)\n\nclass VAE(nn.Module):\n    def __init__(self, in_features, hid_features):\n        super().__init__()\n        self.hid_features = hid_features\n        self.encoder = nn.Linear(in_features, hid_features * 2)\n        self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())\n\n    def forward(self, x, deterministic=False):\n        z = self.encoder(x)\n        bs, z_dim = z.shape\n\n        loc, log_scale = z[:, : z_dim // 2], z[:, z_dim // 2 :]\n        log_scale = torch.clamp(log_scale, LOG_SCALE_MIN, LOG_SCALE_MAX)\n\n        z_ = loc if deterministic else normal_sample(loc, log_scale)\n        z_ = z_.view(bs, -1)\n        x_ = self.decoder(z_)\n\n        return x_, loc, log_scale\n\nclass CustomRunner(dl.IRunner):\n    def __init__(self, hid_features, logdir, engine):\n        super().__init__()\n        self.hid_features = hid_features\n        self._logdir = logdir\n        self._engine = engine\n\n    def get_engine(self):\n        return self._engine\n\n    def get_loggers(self):\n        return {\n            \"console\": dl.ConsoleLogger(),\n            \"csv\": dl.CSVLogger(logdir=self._logdir),\n            \"tensorboard\": dl.TensorboardLogger(logdir=self._logdir),\n        }\n\n    @property\n    def num_epochs(self) -\u003e int:\n        return 1\n\n    def get_loaders(self):\n        loaders = {\n            \"train\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n            \"valid\": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),\n        }\n        return loaders\n\n    def get_model(self):\n        model = self.model if self.model is not None else VAE(28 * 28, self.hid_features)\n        return model\n\n    def get_optimizer(self, model):\n        return optim.Adam(model.parameters(), lr=0.02)\n\n    def get_callbacks(self):\n        return {\n            \"backward\": dl.BackwardCallback(metric_key=\"loss\"),\n            \"optimizer\": dl.OptimizerCallback(metric_key=\"loss\"),\n            \"checkpoint\": dl.CheckpointCallback(\n                self._logdir,\n                loader_key=\"valid\",\n                metric_key=\"loss\",\n                minimize=True,\n                topk=3,\n            ),\n        }\n\n    def on_loader_start(self, runner):\n        super().on_loader_start(runner)\n        self.meters = {\n            key: metrics.AdditiveMetric(compute_on_call=False)\n            for key in [\"loss_ae\", \"loss_kld\", \"loss\"]\n        }\n\n    def handle_batch(self, batch):\n        x, _ = batch\n        x = x.view(x.size(0), -1)\n        x_, loc, log_scale = self.model(x, deterministic=not self.is_train_loader)\n\n        loss_ae = F.mse_loss(x_, x)\n        loss_kld = (\n            -0.5 * torch.sum(1 + log_scale - loc.pow(2) - log_scale.exp(), dim=1)\n        ).mean()\n        loss = loss_ae + loss_kld * 0.01\n\n        self.batch_metrics = {\"loss_ae\": loss_ae, \"loss_kld\": loss_kld, \"loss\": loss}\n        for key in [\"loss_ae\", \"loss_kld\", \"loss\"]:\n            self.meters[key].update(self.batch_metrics[key].item(), self.batch_size)\n\n    def on_loader_end(self, runner):\n        for key in [\"loss_ae\", \"loss_kld\", \"loss\"]:\n            self.loader_metrics[key] = self.meters[key].compute()[0]\n        super().on_loader_end(runner)\n\n    def predict_batch(self, batch):\n        random_latent_vectors = torch.randn(1, self.hid_features).to(self.engine.device)\n        generated_images = self.model.decoder(random_latent_vectors).detach()\n        return generated_images\n\nrunner = CustomRunner(128, \"./logs\", dl.CPUEngine())\nrunner.run()\n# visualization (matplotlib required):\n# import matplotlib.pyplot as plt\n# %matplotlib inline\n# plt.imshow(runner.predict_batch(None)[0].cpu().numpy().reshape(28, 28))\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eAutoML - hyperparameters optimization with Optuna\u003c/summary\u003e\n\u003cp\u003e\n\n```python\nimport os\nimport optuna\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom catalyst import dl\nfrom catalyst.contrib.datasets import MNIST\n\n\ndef objective(trial):\n    lr = trial.suggest_loguniform(\"lr\", 1e-3, 1e-1)\n    num_hidden = int(trial.suggest_loguniform(\"num_hidden\", 32, 128))\n\n    train_data = MNIST(os.getcwd(), train=True)\n    valid_data = MNIST(os.getcwd(), train=False)\n    loaders = {\n        \"train\": DataLoader(train_data, batch_size=32),\n        \"valid\": DataLoader(valid_data, batch_size=32),\n    }\n    model = nn.Sequential(\n        nn.Flatten(), nn.Linear(784, num_hidden), nn.ReLU(), nn.Linear(num_hidden, 10)\n    )\n    optimizer = torch.optim.Adam(model.parameters(), lr=lr)\n    criterion = nn.CrossEntropyLoss()\n\n    runner = dl.SupervisedRunner(input_key=\"features\", output_key=\"logits\", target_key=\"targets\")\n    runner.train(\n        model=model,\n        criterion=criterion,\n        optimizer=optimizer,\n        loaders=loaders,\n        callbacks={\n            \"accuracy\": dl.AccuracyCallback(\n                input_key=\"logits\", target_key=\"targets\", num_classes=10\n            ),\n            # catalyst[optuna] required ``pip install catalyst[optuna]``\n            \"optuna\": dl.OptunaPruningCallback(\n                loader_key=\"valid\", metric_key=\"accuracy01\", minimize=False, trial=trial\n            ),\n        },\n        num_epochs=3,\n    )\n    score = trial.best_score\n    return score\n\nstudy = optuna.create_study(\n    direction=\"maximize\",\n    pruner=optuna.pruners.MedianPruner(\n        n_startup_trials=1, n_warmup_steps=0, interval_steps=1\n    ),\n)\nstudy.optimize(objective, n_trials=3, timeout=300)\nprint(study.best_value, study.best_params)\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eConfig API - minimal example\u003c/summary\u003e\n\u003cp\u003e\n\n```yaml title=\"example.yaml\"\nrunner:\n  _target_: catalyst.runners.SupervisedRunner\n  model:\n    _var_: model\n    _target_: torch.nn.Sequential\n    args:\n      - _target_: torch.nn.Flatten\n      - _target_: torch.nn.Linear\n        in_features: 784  # 28 * 28\n        out_features: 10\n  input_key: features\n  output_key: \u0026output_key logits\n  target_key: \u0026target_key targets\n  loss_key: \u0026loss_key loss\n\nrun:\n  # ≈ stage 1\n  - _call_: train  # runner.train(...)\n\n    criterion:\n      _target_: torch.nn.CrossEntropyLoss\n\n    optimizer:\n      _target_: torch.optim.Adam\n      params:  # model.parameters()\n        _var_: model.parameters\n      lr: 0.02\n\n    loaders:\n      train:\n        _target_: torch.utils.data.DataLoader\n        dataset:\n          _target_: catalyst.contrib.datasets.MNIST\n          root: data\n          train: y\n        batch_size: 32\n\n      \u0026valid_loader_key valid:\n        \u0026valid_loader\n        _target_: torch.utils.data.DataLoader\n        dataset:\n          _target_: catalyst.contrib.datasets.MNIST\n          root: data\n          train: n\n        batch_size: 32\n\n    callbacks:\n      - \u0026accuracy_metric\n        _target_: catalyst.callbacks.AccuracyCallback\n        input_key: *output_key\n        target_key: *target_key\n        topk: [1,3,5]\n      - _target_: catalyst.callbacks.PrecisionRecallF1SupportCallback\n        input_key: *output_key\n        target_key: *target_key\n\n    num_epochs: 1\n    logdir: logs\n    valid_loader: *valid_loader_key\n    valid_metric: *loss_key\n    minimize_valid_metric: y\n    verbose: y\n\n  # ≈ stage 2\n  - _call_: evaluate_loader  # runner.evaluate_loader(...)\n    loader: *valid_loader\n    callbacks:\n      - *accuracy_metric\n\n```\n\n```sh\ncatalyst-run --config example.yaml\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\n### Tests\nAll Catalyst code, features, and pipelines [are fully tested](./tests).\nWe also have our own [catalyst-codestyle](https://github.com/catalyst-team/codestyle) and a corresponding pre-commit hook.\nDuring testing, we train a variety of different models: image classification,\nimage segmentation, text classification, GANs, and much more.\nWe then compare their convergence metrics in order to verify\nthe correctness of the training procedure and its reproducibility.\nAs a result, Catalyst provides fully tested and reproducible\nbest practices for your deep learning research and development.\n\n### [Blog Posts](https://catalyst-team.com/post/)\n\n### [Talks](https://catalyst-team.com/talk/)\n\n\n## Community\n\n### Accelerated with Catalyst\n\n\u003cdetails\u003e\n\u003csummary\u003eResearch Papers\u003c/summary\u003e\n\u003cp\u003e\n\n- [Hierarchical Attention for Sentiment Classification with Visualization](https://github.com/neuromation/ml-recipe-hier-attention)\n- [Pediatric Bone Age Assessment](https://github.com/neuromation/ml-recipe-bone-age)\n- [Implementation of the paper \"Tell Me Where to Look: Guided Attention Inference Network\"](https://github.com/ngxbac/GAIN)\n- [Implementation of the paper \"Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks\"](https://github.com/yukkyo/PyTorch-FilterResponseNormalizationLayer)\n- [Implementation of the paper \"Utterance-level Aggregation For Speaker Recognition In The Wild\"](https://github.com/ptJexio/Speaker-Recognition)\n- [Implementation of the paper \"Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation\"](https://github.com/vitrioil/Speech-Separation)\n- [Implementation of the paper \"ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks\"](https://github.com/leverxgroup/esrgan)\n\n\u003c/p\u003e\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eBlog Posts\u003c/summary\u003e\n\u003cp\u003e\n\n- [Solving the Cocktail Party Problem using PyTorch](https://medium.com/pytorch/addressing-the-cocktail-party-problem-using-pytorch-305fb74560ea)\n- [Beyond fashion: Deep Learning with Catalyst (Config API)](https://evilmartians.com/chronicles/beyond-fashion-deep-learning-with-catalyst)\n- [Tutorial from Notebook API to Config API (RU)](https://github.com/Bekovmi/Segmentation_tutorial)\n\n\u003c/p\u003e\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eCompetitions\u003c/summary\u003e\n\u003cp\u003e\n\n- [Kaggle Quick, Draw! Doodle Recognition Challenge](https://github.com/ngxbac/Kaggle-QuickDraw) - 11th place\n- [Catalyst.RL - NeurIPS 2018: AI for Prosthetics Challenge](https://github.com/Scitator/neurips-18-prosthetics-challenge) – 3rd place\n- [Kaggle Google Landmark 2019](https://github.com/ngxbac/Kaggle-Google-Landmark-2019) - 30th place\n- [iMet Collection 2019 - FGVC6](https://github.com/ngxbac/Kaggle-iMet) - 24th place\n- [ID R\u0026D Anti-spoofing Challenge](https://github.com/bagxi/idrnd-anti-spoofing-challenge-solution) - 14th place\n- [NeurIPS 2019: Recursion Cellular Image Classification](https://github.com/ngxbac/Kaggle-Recursion-Cellular) - 4th place\n- [MICCAI 2019: Automatic Structure Segmentation for Radiotherapy Planning Challenge 2019](https://github.com/ngxbac/StructSeg2019)\n  * 3rd place solution for `Task 3: Organ-at-risk segmentation from chest CT scans`\n  * and 4th place solution for `Task 4: Gross Target Volume segmentation of lung cancer`\n- [Kaggle Seversteal steel detection](https://github.com/bamps53/kaggle-severstal) - 5th place\n- [RSNA Intracranial Hemorrhage Detection](https://github.com/ngxbac/Kaggle-RSNA) - 5th place\n- [APTOS 2019 Blindness Detection](https://github.com/BloodAxe/Kaggle-2019-Blindness-Detection) – 7th place\n- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https://github.com/Scitator/run-skeleton-run-in-3d) – 2nd place\n- [xView2 Damage Assessment Challenge](https://github.com/BloodAxe/xView2-Solution) - 3rd place\n\n\n\u003c/p\u003e\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eToolkits\u003c/summary\u003e\n\u003cp\u003e\n\n- [Catalyst.RL](https://github.com/Scitator/catalyst-rl-framework) – A Distributed Framework for Reproducible RL Research by [Scitator](https://github.com/Scitator)\n- [Catalyst.Classification](https://github.com/catalyst-team/classification) - Comprehensive classification pipeline with Pseudo-Labeling by [Bagxi](https://github.com/bagxi) and [Pdanilov](https://github.com/pdanilov)\n- [Catalyst.Segmentation](https://github.com/catalyst-team/segmentation) - Segmentation pipelines - binary, semantic and instance, by [Bagxi](https://github.com/bagxi)\n- [Catalyst.Detection](https://github.com/catalyst-team/detection) - Anchor-free detection pipeline by [Avi2011class](https://github.com/Avi2011class) and [TezRomacH](https://github.com/TezRomacH)\n- [Catalyst.GAN](https://github.com/catalyst-team/gan) - Reproducible GANs pipelines by [Asmekal](https://github.com/asmekal)\n- [Catalyst.Neuro](https://github.com/catalyst-team/neuro) - Brain image analysis project, in collaboration with [TReNDS Center](https://trendscenter.org)\n- [MLComp](https://github.com/catalyst-team/mlcomp) – Distributed DAG framework for machine learning with UI by [Lightforever](https://github.com/lightforever)\n- [Pytorch toolbelt](https://github.com/BloodAxe/pytorch-toolbelt) - PyTorch extensions for fast R\u0026D prototyping and Kaggle farming by [BloodAxe](https://github.com/BloodAxe)\n- [Helper functions](https://github.com/ternaus/iglovikov_helper_functions) - An assorted collection of helper functions by [Ternaus](https://github.com/ternaus)\n- [BERT Distillation with Catalyst](https://github.com/elephantmipt/bert-distillation) by [elephantmipt](https://github.com/elephantmipt)\n\n\u003c/p\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eOther\u003c/summary\u003e\n\u003cp\u003e\n\n- [CamVid Segmentation Example](https://github.com/BloodAxe/Catalyst-CamVid-Segmentation-Example) - Example of semantic segmentation for CamVid dataset\n- [Notebook API tutorial for segmentation in Understanding Clouds from Satellite Images Competition](https://www.kaggle.com/artgor/segmentation-in-pytorch-using-convenient-tools/)\n- [Catalyst.RL - NeurIPS 2019: Learn to Move - Walk Around](https://github.com/Scitator/learning-to-move-starter-kit) – starter kit\n- [Catalyst.RL - NeurIPS 2019: Animal-AI Olympics](https://github.com/Scitator/animal-olympics-starter-kit) - starter kit\n- [Inria Segmentation Example](https://github.com/BloodAxe/Catalyst-Inria-Segmentation-Example) - An example of training segmentation model for Inria Sattelite Segmentation Challenge\n- [iglovikov_segmentation](https://github.com/ternaus/iglovikov_segmentation) - Semantic segmentation pipeline using Catalyst\n- [Logging Catalyst Runs to Comet](https://colab.research.google.com/drive/1TaG27HcMh2jyRKBGsqRXLiGUfsHVyCq6?usp=sharing) - An example of how to log metrics, hyperparameters and more from Catalyst runs to [Comet](https://www.comet.ml/site/data-scientists/)\n\n\u003c/p\u003e\n\u003c/details\u003e\n\n\nSee other projects at [the GitHub dependency graph](https://github.com/catalyst-team/catalyst/network/dependents).\n\nIf your project implements a paper,\na notable use-case/tutorial, or a Kaggle competition solution, or\nif your code simply presents interesting results and uses Catalyst,\nwe would be happy to add your project to the list above!\nDo not hesitate to send us a PR with a brief description of the project similar to the above.\n\n### Contribution Guide\n\nWe appreciate all contributions.\nIf you are planning to contribute back bug-fixes, there is no need to run that by us; just send a PR.\nIf you plan to contribute new features, new utility functions, or extensions,\nplease open an issue first and discuss it with us.\n\n- Please see the [Contribution Guide](CONTRIBUTING.md) for more information.\n- By participating in this project, you agree to abide by its [Code of Conduct](CODE_OF_CONDUCT.md).\n\n\n### User Feedback\n\nWe've created `feedback@catalyst-team.com` as an additional channel for user feedback.\n\n- If you like the project and want to thank us, this is the right place.\n- If you would like to start a collaboration between your team and Catalyst team to improve Deep Learning R\u0026D, you are always welcome.\n- If you don't like Github Issues and prefer email, feel free to email us.\n- Finally, if you do not like something, please, share it with us, and we can see how to improve it.\n\nWe appreciate any type of feedback. Thank you!\n\n\n### Acknowledgments\n\nSince the beginning of the Сatalyst development, a lot of people have influenced it in a lot of different ways.\n\n#### Catalyst.Team\n- [Dmytro Doroshenko](https://www.linkedin.com/in/dmytro-doroshenko-05671112a/) ([ditwoo](https://github.com/Ditwoo))\n- [Eugene Kachan](https://www.linkedin.com/in/yauheni-kachan/) ([bagxi](https://github.com/bagxi))\n- [Nikita Balagansky](https://www.linkedin.com/in/nikita-balagansky-50414a19a/) ([elephantmipt](https://github.com/elephantmipt))\n- [Sergey Kolesnikov](https://www.scitator.com/) ([scitator](https://github.com/Scitator))\n\n#### Catalyst.Contributors\n- [Aleksey Grinchuk](https://www.facebook.com/grinchuk.alexey) ([alexgrinch](https://github.com/AlexGrinch))\n- [Aleksey Shabanov](https://linkedin.com/in/aleksey-shabanov-96b351189) ([AlekseySh](https://github.com/AlekseySh))\n- [Alex Gaziev](https://www.linkedin.com/in/alexgaziev/) ([gazay](https://github.com/gazay))\n- [Andrey Zharkov](https://www.linkedin.com/in/andrey-zharkov-8554a1153/) ([asmekal](https://github.com/asmekal))\n- [Artem Zolkin](https://www.linkedin.com/in/artem-zolkin-b5155571/) ([arquestro](https://github.com/Arquestro))\n- [David Kuryakin](https://www.linkedin.com/in/dkuryakin/) ([dkuryakin](https://github.com/dkuryakin))\n- [Evgeny Semyonov](https://www.linkedin.com/in/ewan-semyonov/) ([lightforever](https://github.com/lightforever))\n- [Eugene Khvedchenya](https://www.linkedin.com/in/cvtalks/) ([bloodaxe](https://github.com/BloodAxe))\n- [Ivan Stepanenko](https://www.facebook.com/istepanenko)\n- [Julia Shenshina](https://github.com/julia-shenshina) ([julia-shenshina](https://github.com/julia-shenshina))\n- [Nguyen Xuan Bac](https://www.linkedin.com/in/bac-nguyen-xuan-70340b66/) ([ngxbac](https://github.com/ngxbac))\n- [Roman Tezikov](http://linkedin.com/in/roman-tezikov/) ([TezRomacH](https://github.com/TezRomacH))\n- [Valentin Khrulkov](https://www.linkedin.com/in/vkhrulkov/) ([khrulkovv](https://github.com/KhrulkovV))\n- [Vladimir Iglovikov](https://www.linkedin.com/in/iglovikov/) ([ternaus](https://github.com/ternaus))\n- [Vsevolod Poletaev](https://linkedin.com/in/vsevolod-poletaev-468071165) ([hexfaker](https://github.com/hexfaker))\n- [Yury Kashnitsky](https://www.linkedin.com/in/kashnitskiy/) ([yorko](https://github.com/Yorko))\n\n\n### Trusted by\n- [Awecom](https://www.awecom.com)\n- Researchers at the [Center for Translational Research in Neuroimaging and Data Science (TReNDS)](https://trendscenter.org)\n- [Deep Learning School](https://en.dlschool.org)\n- Researchers at [Emory University](https://www.emory.edu)\n- [Evil Martians](https://evilmartians.com)\n- Researchers at the [Georgia Institute of Technology](https://www.gatech.edu)\n- Researchers at [Georgia State University](https://www.gsu.edu)\n- [Helios](http://helios.to)\n- [HPCD Lab](https://www.hpcdlab.com)\n- [iFarm](https://ifarmproject.com)\n- [Kinoplan](http://kinoplan.io/)\n- Researchers at the [Moscow Institute of Physics and Technology](https://mipt.ru/english/)\n- [Neuromation](https://neuromation.io)\n- [Poteha Labs](https://potehalabs.com/en/)\n- [Provectus](https://provectus.com)\n- Researchers at the [Skolkovo Institute of Science and Technology](https://www.skoltech.ru/en)\n- [SoftConstruct](https://www.softconstruct.io/)\n- Researchers at [Tinkoff](https://www.tinkoff.ru/eng/)\n- Researchers at [Yandex.Research](https://research.yandex.com)\n\n\n### Citation\n\nPlease use this bibtex if you want to cite this repository in your publications:\n\n    @misc{catalyst,\n        author = {Kolesnikov, Sergey},\n        title = {Catalyst - Accelerated deep learning R\u0026D},\n        year = {2018},\n        publisher = {GitHub},\n        journal = {GitHub repository},\n        howpublished = {\\url{https://github.com/catalyst-team/catalyst}},\n    }\n","funding_links":["https://patreon.com/catalyst_team","https://opencollective.com/catalyst"],"categories":["The Data Science Toolbox","Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL)","Libraries","Python","Deep Learning","Sensor Processing","Model and Data Versioning","Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","\u003cspan id=\"head41\"\u003e3.5. Machine Learning and Deep Learning\u003c/span\u003e","Pytorch \u0026 related libraries","\u003e 1k ★","工作流程和实验跟踪","Model, Data and Experiment Tracking","其他_机器学习与深度学习","Researchers","2.For Experiment"],"sub_categories":["Deep Learning Packages","RL/DRL Algorithm Implementations and Software Frameworks","PyTorch","Machine Learning","Other libraries｜其他库:","\u003cspan id=\"head43\"\u003e3.5.2. Deep Learning\u003c/span\u003e","Other libraries:","General-Purpose Machine Learning","Frameworks","Experiments management"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcatalyst-team%2Fcatalyst","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcatalyst-team%2Fcatalyst","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcatalyst-team%2Fcatalyst/lists"}