{"id":13487060,"url":"https://github.com/pytorch/ignite","last_synced_at":"2025-05-13T20:04:38.739Z","repository":{"id":37743069,"uuid":"111835796","full_name":"pytorch/ignite","owner":"pytorch","description":"High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.","archived":false,"fork":false,"pushed_at":"2025-05-05T14:27:58.000Z","size":56722,"stargazers_count":4655,"open_issues_count":161,"forks_count":648,"subscribers_count":59,"default_branch":"master","last_synced_at":"2025-05-06T18:39:32.120Z","etag":null,"topics":["closember","deep-learning","hacktoberfest","machine-learning","metrics","neural-network","python","pytorch"],"latest_commit_sha":null,"homepage":"https://pytorch-ignite.ai","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/pytorch.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":["vfdev-5"],"patreon":null,"open_collective":"pytorch-ignite","ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"custom":null}},"created_at":"2017-11-23T17:31:21.000Z","updated_at":"2025-05-06T14:22:20.000Z","dependencies_parsed_at":"2023-10-04T13:28:44.942Z","dependency_job_id":"df5015d9-408b-4876-a5ee-91402d947b53","html_url":"https://github.com/pytorch/ignite","commit_stats":{"total_commits":1644,"total_committers":212,"mean_commits":7.754716981132075,"dds":0.6088807785888077,"last_synced_commit":"1ea147c50fa13631818a15e781eeb0b9bc25
dd9b"},"previous_names":[],"tags_count":24,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Fignite","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Fignite/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Fignite/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Fignite/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/pytorch","download_url":"https://codeload.github.com/pytorch/ignite/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254020477,"owners_count":22000750,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["closember","deep-learning","hacktoberfest","machine-learning","metrics","neural-network","python","pytorch"],"created_at":"2024-07-31T18:00:54.900Z","updated_at":"2025-05-13T20:04:38.717Z","avatar_url":"https://github.com/pytorch.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\n\u003c!-- ![Ignite Logo](assets/logo/ignite_logo_mixed.svg) --\u003e\n\n\u003cimg src=\"assets/logo/ignite_logo_mixed.svg\" width=512\u003e\n\n\u003c!-- [![image](https://travis-ci.com/pytorch/ignite.svg?branch=master)](https://travis-ci.com/pytorch/ignite) --\u003e\n\n| ![image](https://img.shields.io/badge/-Tests:-black?style=flat-square) 
[![image](https://github.com/pytorch/ignite/actions/workflows/unit-tests.yml/badge.svg?branch=master)](https://github.com/pytorch/ignite/actions/workflows/unit-tests.yml) [![image](https://github.com/pytorch/ignite/actions/workflows/gpu-tests.yml/badge.svg)](https://github.com/pytorch/ignite/actions/workflows/gpu-tests.yml) [![image](https://codecov.io/gh/pytorch/ignite/branch/master/graph/badge.svg)](https://codecov.io/gh/pytorch/ignite) [![image](https://img.shields.io/badge/dynamic/json.svg?label=docs\u0026url=https%3A%2F%2Fpypi.org%2Fpypi%2Fpytorch-ignite%2Fjson\u0026query=%24.info.version\u0026colorB=brightgreen\u0026prefix=v)](https://pytorch.org/ignite/index.html) |\n|:---\n| ![image](https://img.shields.io/badge/-Stable%20Releases:-black?style=flat-square) [![image](https://anaconda.org/pytorch/ignite/badges/version.svg)](https://anaconda.org/pytorch/ignite) ・ [![image](https://img.shields.io/badge/dynamic/json.svg?label=PyPI\u0026url=https%3A%2F%2Fpypi.org%2Fpypi%2Fpytorch-ignite%2Fjson\u0026query=%24.info.version\u0026colorB=brightgreen\u0026prefix=v)](https://pypi.org/project/pytorch-ignite/) [![image](https://static.pepy.tech/badge/pytorch-ignite)](https://pepy.tech/project/pytorch-ignite) ・ [![image](https://img.shields.io/badge/docker-hub-blue)](https://hub.docker.com/u/pytorchignite) |\n| ![image](https://img.shields.io/badge/-Nightly%20Releases:-black?style=flat-square) [![image](https://anaconda.org/pytorch-nightly/ignite/badges/version.svg)](https://anaconda.org/pytorch-nightly/ignite) [![image](https://img.shields.io/badge/PyPI-pre%20releases-brightgreen)](https://pypi.org/project/pytorch-ignite/#history)|\n| ![image](https://img.shields.io/badge/-Community:-black?style=flat-square) [![Twitter](https://img.shields.io/badge/news-twitter-blue)](https://twitter.com/pytorch_ignite) [![discord](https://img.shields.io/badge/chat-discord-blue?logo=discord)](https://discord.gg/djZtm3EmKj) 
[![numfocus](https://img.shields.io/badge/NumFOCUS-affiliated%20project-green)](https://numfocus.org/sponsored-projects/affiliated-projects) |\n| ![image](https://img.shields.io/badge/-Supported_PyTorch/Python_versions:-black?style=flat-square) [![link](https://img.shields.io/badge/-check_here-blue)](https://github.com/pytorch/ignite/actions?query=workflow%3A%22PyTorch+version+tests%22)|\n\n\u003c/div\u003e\n\n## TL;DR\n\nIgnite is a high-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.\n\n\u003cdiv align=\"center\"\u003e\n\n\u003ca href=\"https://colab.research.google.com/github/pytorch/ignite/blob/master/assets/tldr/teaser.ipynb\"\u003e\n \u003cimg alt=\"PyTorch-Ignite teaser\"\n      src=\"assets/tldr/pytorch-ignite-teaser.gif\"\n      width=532\u003e\n\u003c/a\u003e\n\n_Click on the image to see the complete code_\n\n\u003c/div\u003e\n\n### Features\n\n- [Less code than pure PyTorch](https://raw.githubusercontent.com/pytorch/ignite/master/assets/ignite_vs_bare_pytorch.png)\n  while ensuring maximum control and simplicity\n\n- A library approach with no inversion of your program's control - _use Ignite where and when you need it_\n\n- Extensible API for metrics, experiment managers, and other components\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Table of Contents\n\n- [Table of Contents](#table-of-contents)\n- [Why Ignite?](#why-ignite)\n  - [Simplified training and validation loop](#simplified-training-and-validation-loop)\n  - [Power of Events \u0026 Handlers](#power-of-events--handlers)\n    - [Execute any number of functions whenever you wish](#execute-any-number-of-functions-whenever-you-wish)\n    - [Built-in events filtering](#built-in-events-filtering)\n    - [Stack events to share some actions](#stack-events-to-share-some-actions)\n    - [Custom events to go beyond standard 
events](#custom-events-to-go-beyond-standard-events)\n  - [Out-of-the-box metrics](#out-of-the-box-metrics)\n- [Installation](#installation)\n  - [Nightly releases](#nightly-releases)\n  - [Docker Images](#docker-images)\n    - [Using pre-built images](#using-pre-built-images)\n- [Getting Started](#getting-started)\n- [Documentation](#documentation)\n  - [Additional Materials](#additional-materials)\n- [Examples](#examples)\n  - [Tutorials](#tutorials)\n  - [Reproducible Training Examples](#reproducible-training-examples)\n- [Communication](#communication)\n  - [User feedback](#user-feedback)\n- [Contributing](#contributing)\n- [Projects using Ignite](#projects-using-ignite)\n- [Citing Ignite](#citing-ignite)\n- [About the team \u0026 Disclaimer](#about-the-team--disclaimer)\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Why Ignite?\n\nIgnite is a **library** that provides three high-level features:\n\n- Extremely simple engine and event system\n- Out-of-the-box metrics to easily evaluate models\n- Built-in handlers to compose training pipeline, save artifacts and log parameters and metrics\n\n## Simplified training and validation loop\n\nNo more coding `for/while` loops on epochs and iterations. Users instantiate engines and run them.\n\n\u003cdetails\u003e\n\u003csummary\u003e\nExample\n\u003c/summary\u003e\n\n```python\nfrom ignite.engine import Engine, Events, create_supervised_evaluator\nfrom ignite.metrics import Accuracy\n\n\n# Setup training engine:\ndef train_step(engine, batch):\n    # Users can do whatever they need on a single iteration\n    # Eg. 
forward/backward pass for any number of models, optimizers, etc\n    # ...\n\ntrainer = Engine(train_step)\n\n# Setup single model evaluation engine\nevaluator = create_supervised_evaluator(model, metrics={\"accuracy\": Accuracy()})\n\ndef validation():\n    state = evaluator.run(validation_data_loader)\n    # print computed metrics\n    print(trainer.state.epoch, state.metrics)\n\n# Run the model's validation at the end of each epoch\ntrainer.add_event_handler(Events.EPOCH_COMPLETED, validation)\n\n# Start the training\ntrainer.run(training_data_loader, max_epochs=100)\n```\n\n\u003c/details\u003e\n\n## Power of Events \u0026 Handlers\n\nThe cool thing with handlers is that they offer unparalleled flexibility (compared to, for example, callbacks). Handlers can be any function: e.g. lambda, simple function, class method, etc. Thus, there is no need to inherit from an interface and override its abstract methods, which would unnecessarily bulk up your code and add complexity.\n\n### Execute any number of functions whenever you wish\n\n\u003cdetails\u003e\n\u003csummary\u003e\nExamples\n\u003c/summary\u003e\n\n```python\ntrainer.add_event_handler(Events.STARTED, lambda _: print(\"Start training\"))\n\n# attach handler with args, kwargs\nmydata = [1, 2, 3, 4]\nlogger = ...\n\ndef on_training_ended(data):\n    print(f\"Training is ended. 
mydata={data}\")\n    # Users can use variables from another scope\n    logger.info(\"Training is ended\")\n\n\ntrainer.add_event_handler(Events.COMPLETED, on_training_ended, mydata)\n# call any number of functions on a single event\ntrainer.add_event_handler(Events.COMPLETED, lambda engine: print(engine.state.times))\n\n@trainer.on(Events.ITERATION_COMPLETED)\ndef log_something(engine):\n    print(engine.state.output)\n```\n\n\u003c/details\u003e\n\n### Built-in events filtering\n\n\u003cdetails\u003e\n\u003csummary\u003e\nExamples\n\u003c/summary\u003e\n\n```python\n# run the validation every 5 epochs\n@trainer.on(Events.EPOCH_COMPLETED(every=5))\ndef run_validation():\n    # run validation\n\n# change some training variable once, on the 20th epoch\n@trainer.on(Events.EPOCH_STARTED(once=20))\ndef change_training_variable():\n    # ...\n\n# Trigger a handler with a custom frequency\n@trainer.on(Events.ITERATION_COMPLETED(event_filter=first_x_iters))\ndef log_gradients():\n    # ...\n```\n\n\u003c/details\u003e\n\n### Stack events to share some actions\n\n\u003cdetails\u003e\n\u003csummary\u003e\nExamples\n\u003c/summary\u003e\n\nEvents can be stacked together to enable multiple calls:\n\n```python\n@trainer.on(Events.COMPLETED | Events.EPOCH_COMPLETED(every=10))\ndef run_validation():\n    # ...\n```\n\n\u003c/details\u003e\n\n### Custom events to go beyond standard events\n\n\u003cdetails\u003e\n\u003csummary\u003e\nExamples\n\u003c/summary\u003e\n\nCustom events related to backward and optimizer step calls:\n\n```python\nfrom ignite.engine import EventEnum\n\n\nclass BackpropEvents(EventEnum):\n    BACKWARD_STARTED = 'backward_started'\n    BACKWARD_COMPLETED = 'backward_completed'\n    OPTIM_STEP_COMPLETED = 'optim_step_completed'\n\ndef update(engine, batch):\n    # ...\n    loss = criterion(y_pred, y)\n    engine.fire_event(BackpropEvents.BACKWARD_STARTED)\n    loss.backward()\n    engine.fire_event(BackpropEvents.BACKWARD_COMPLETED)\n    optimizer.step()\n   
 engine.fire_event(BackpropEvents.OPTIM_STEP_COMPLETED)\n    # ...\n\ntrainer = Engine(update)\ntrainer.register_events(*BackpropEvents)\n\n@trainer.on(BackpropEvents.BACKWARD_STARTED)\ndef function_before_backprop(engine):\n    # ...\n```\n\n- The complete snippet can be found [here](https://pytorch.org/ignite/faq.html#creating-custom-events-based-on-forward-backward-pass).\n- Another use-case of custom events: [trainer for Truncated Backprop Through Time](https://pytorch.org/ignite/contrib/engines.html#ignite.contrib.engines.create_supervised_tbptt_trainer).\n\n\u003c/details\u003e\n\n## Out-of-the-box metrics\n\n- [Metrics](https://pytorch.org/ignite/metrics.html#complete-list-of-metrics) for various tasks:\n  Precision, Recall, Accuracy, Confusion Matrix, IoU etc, and ~20 [regression metrics](https://pytorch.org/ignite/metrics.html#complete-list-of-metrics).\n\n- Users can also [compose their metrics](https://pytorch.org/ignite/metrics.html#metric-arithmetics) with ease from\n  existing ones using arithmetic operations or torch methods.\n\n\u003cdetails\u003e\n\u003csummary\u003e\nExample\n\u003c/summary\u003e\n\n```python\nprecision = Precision(average=False)\nrecall = Recall(average=False)\nF1_per_class = (precision * recall * 2 / (precision + recall))\nF1_mean = F1_per_class.mean()  # torch mean method\nF1_mean.attach(engine, \"F1\")\n```\n\n\u003c/details\u003e\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Installation\n\nFrom [pip](https://pypi.org/project/pytorch-ignite/):\n\n```bash\npip install pytorch-ignite\n```\n\nFrom [conda](https://anaconda.org/pytorch/ignite):\n\n```bash\nconda install ignite -c pytorch\n```\n\nFrom source:\n\n```bash\npip install git+https://github.com/pytorch/ignite\n```\n\n## Nightly releases\n\nFrom pip:\n\n```bash\npip install --pre pytorch-ignite\n```\n\nFrom conda (this will install the [pytorch nightly 
release](https://anaconda.org/pytorch-nightly/pytorch) instead of the stable\nversion as a dependency):\n\n```bash\nconda install ignite -c pytorch-nightly\n```\n\n## Docker Images\n\n### Using pre-built images\n\nPull a pre-built docker image from [our Docker Hub](https://hub.docker.com/u/pytorchignite) and run it with docker v19.03+.\n\n```bash\ndocker run --gpus all -it -v $PWD:/workspace/project --network=host --shm-size 16G pytorchignite/base:latest /bin/bash\n```\n\n\u003cdetails\u003e\n\n\u003csummary\u003e\nList of available pre-built images\n\u003c/summary\u003e\n\nBase:\n\n- `pytorchignite/base:latest`\n- `pytorchignite/apex:latest`\n- `pytorchignite/hvd-base:latest`\n- `pytorchignite/hvd-apex:latest`\n- `pytorchignite/msdp-apex:latest`\n\nVision:\n\n- `pytorchignite/vision:latest`\n- `pytorchignite/hvd-vision:latest`\n- `pytorchignite/apex-vision:latest`\n- `pytorchignite/hvd-apex-vision:latest`\n- `pytorchignite/msdp-apex-vision:latest`\n\nNLP:\n\n- `pytorchignite/nlp:latest`\n- `pytorchignite/hvd-nlp:latest`\n- `pytorchignite/apex-nlp:latest`\n- `pytorchignite/hvd-apex-nlp:latest`\n- `pytorchignite/msdp-apex-nlp:latest`\n\n\u003c/details\u003e\n\nFor more details, see [here](docker).\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Getting Started\n\nA few pointers to get you started:\n\n- [Quick Start Guide: Essentials of getting a project up and running](https://pytorch-ignite.ai/tutorials/beginner/01-getting-started/)\n- [Concepts of the library: Engine, Events \u0026 Handlers, State, Metrics](https://pytorch-ignite.ai/concepts/)\n- Full-featured template examples (coming soon)\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Documentation\n\n- Stable API documentation and an overview of the library: https://pytorch.org/ignite/\n- Development version API documentation: 
https://pytorch.org/ignite/master/\n- [FAQ](https://pytorch.org/ignite/faq.html),\n  [\"Questions on Github\"](https://github.com/pytorch/ignite/issues?q=is%3Aissue+label%3Aquestion+) and\n  [\"Questions on Discuss.PyTorch\"](https://discuss.pytorch.org/c/ignite).\n- [Project's Roadmap](https://github.com/pytorch/ignite/wiki/Roadmap)\n\n## Additional Materials\n\n- [Distributed Training Made Easy with PyTorch-Ignite](https://labs.quansight.org/blog/2021/06/distributed-made-easy-with-ignite/)\n- [PyTorch Ecosystem Day 2021 Breakout session presentation](https://colab.research.google.com/drive/1qhUgWQ0N2U71IVShLpocyeY4AhlDCPRd)\n- [Tutorial blog post about PyTorch-Ignite](https://labs.quansight.org/blog/2020/09/pytorch-ignite/)\n- [8 Creators and Core Contributors Talk About Their Model Training Libraries From PyTorch Ecosystem](https://neptune.ai/blog/model-training-libraries-pytorch-ecosystem?utm_source=reddit\u0026utm_medium=post\u0026utm_campaign=blog-model-training-libraries-pytorch-ecosystem)\n- Ignite Posters from Pytorch Developer Conferences:\n  - [2021](https://drive.google.com/file/d/1YXrkJIepPk_KltSG1ZfWRtA5IRgPFz_U)\n  - [2019](https://drive.google.com/open?id=1bqIl-EM6GCCCoSixFZxhIbuF25F2qTZg)\n  - [2018](https://drive.google.com/open?id=1_2vzBJ0KeCjGv1srojMHiJRvceSVbVR5)\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Examples\n\n## Tutorials\n\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/TextCNN.ipynb) [Text Classification using Convolutional Neural\n  Networks](https://github.com/pytorch/ignite/blob/master/examples/notebooks/TextCNN.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/VAE.ipynb) [Variational Auto\n  
Encoders](https://github.com/pytorch/ignite/blob/master/examples/notebooks/VAE.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/FashionMNIST.ipynb) [Convolutional Neural Networks for Classifying Fashion-MNIST\n  Dataset](https://github.com/pytorch/ignite/blob/master/examples/notebooks/FashionMNIST.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/CycleGAN_with_nvidia_apex.ipynb) [Training Cycle-GAN on Horses to\n  Zebras with Nvidia/Apex](https://github.com/pytorch/ignite/blob/master/examples/notebooks/CycleGAN_with_nvidia_apex.ipynb) - [ logs on W\u0026B](https://app.wandb.ai/vfdev-5/ignite-cyclegan-apex)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/CycleGAN_with_torch_cuda_amp.ipynb) [Another training Cycle-GAN on Horses to\n  Zebras with Native Torch CUDA AMP](https://github.com/pytorch/ignite/blob/master/examples/notebooks/CycleGAN_with_torch_cuda_amp.ipynb) - [logs on W\u0026B](https://app.wandb.ai/vfdev-5/ignite-cyclegan-torch-amp)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/EfficientNet_Cifar100_finetuning.ipynb) [Finetuning EfficientNet-B0 on\n  CIFAR100](https://github.com/pytorch/ignite/blob/master/examples/notebooks/EfficientNet_Cifar100_finetuning.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/Cifar10_Ax_hyperparam_tuning.ipynb) [Hyperparameters tuning with\n  
Ax](https://github.com/pytorch/ignite/blob/master/examples/notebooks/Cifar10_Ax_hyperparam_tuning.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/FastaiLRFinder_MNIST.ipynb) [Basic example of LR finder on\n  MNIST](https://github.com/pytorch/ignite/blob/master/examples/notebooks/FastaiLRFinder_MNIST.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/Cifar100_bench_amp.ipynb) [Benchmark mixed precision training on Cifar100:\n  torch.cuda.amp vs nvidia/apex](https://github.com/pytorch/ignite/blob/master/examples/notebooks/Cifar100_bench_amp.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/MNIST_on_TPU.ipynb) [MNIST training on a single\n  TPU](https://github.com/pytorch/ignite/blob/master/examples/notebooks/MNIST_on_TPU.ipynb)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1E9zJrptnLJ_PKhmaP5Vhb6DTVRvyrKHx) [CIFAR10 Training on multiple TPUs](https://github.com/pytorch/ignite/tree/master/examples/cifar10)\n- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pytorch/ignite/blob/master/examples/notebooks/HandlersTimeProfiler_MNIST.ipynb) [Basic example of handlers\n  time profiling on MNIST training example](https://github.com/pytorch/ignite/blob/master/examples/notebooks/HandlersTimeProfiler_MNIST.ipynb)\n\n## Reproducible Training Examples\n\nInspired by [torchvision/references](https://github.com/pytorch/vision/tree/master/references),\nwe provide several reproducible baselines for vision tasks:\n\n- [ImageNet](examples/references/classification/imagenet) - 
logs on Ignite Trains server coming soon ...\n- [Pascal VOC2012](examples/references/segmentation/pascal_voc2012) - logs on Ignite Trains server coming soon ...\n\nFeatures:\n\n- Distributed training: native or horovod and using [PyTorch native AMP](https://pytorch.org/docs/stable/notes/amp_examples.html)\n\n## Code-Generator application\n\nThe easiest way to create your training scripts with PyTorch-Ignite:\n\n- https://code-generator.pytorch-ignite.ai/\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Communication\n\n- [GitHub issues](https://github.com/pytorch/ignite/issues): questions, bug reports, feature requests, etc.\n\n- [Discuss.PyTorch](https://discuss.pytorch.org/c/ignite), category \"Ignite\".\n\n- [PyTorch-Ignite Discord Server](https://discord.gg/djZtm3EmKj): to chat with the community\n\n- [GitHub Discussions](https://github.com/pytorch/ignite/discussions): general library-related discussions, ideas, Q\u0026A, etc.\n\n## User feedback\n\nWe have created a form for [\"user feedback\"](https://github.com/pytorch/ignite/issues/new/choose). 
We\nappreciate any type of feedback, and this is how we would like to see our\ncommunity:\n\n- If you like the project and want to say thanks, this is the right\n  place.\n- If you do not like something, please share it with us, and we can\n  see how to improve it.\n\nThank you!\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Contributing\n\nPlease see the [contribution guidelines](https://github.com/pytorch/ignite/blob/master/CONTRIBUTING.md) for more information.\n\nAs always, PRs are welcome :)\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Projects using Ignite\n\n\u003cdetails\u003e\n\n\u003csummary\u003e\nResearch papers\n\u003c/summary\u003e\n\n- [BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning](https://github.com/BlackHC/BatchBALD)\n- [A Model to Search for Synthesizable Molecules](https://github.com/john-bradshaw/molecule-chef)\n- [Localised Generative Flows](https://github.com/jrmcornish/lgf)\n- [Extracting T Cell Function and Differentiation Characteristics from the Biomedical Literature](https://github.com/hammerlab/t-cell-relation-extraction)\n- [Variational Information Distillation for Knowledge Transfer](https://github.com/amzn/xfer/tree/master/var_info_distil)\n- [XPersona: Evaluating Multilingual Personalized Chatbot](https://github.com/HLTCHKUST/Xpersona)\n- [CNN-CASS: CNN for Classification of Coronary Artery Stenosis Score in MPR Images](https://github.com/ucuapps/CoronaryArteryStenosisScoreClassification)\n- [Bridging Text and Video: A Universal Multimodal Transformer for Video-Audio Scene-Aware Dialog](https://github.com/ictnlp/DSTC8-AVSD)\n- [Adversarial Decomposition of Text Representation](https://github.com/text-machine-lab/adversarial_decomposition)\n- [Uncertainty Estimation Using a Single Deep Deterministic Neural 
Network](https://github.com/y0ast/deterministic-uncertainty-quantification)\n- [DeepSphere: a graph-based spherical CNN](https://github.com/deepsphere/deepsphere-pytorch)\n- [Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment](https://github.com/lidq92/LinearityIQA)\n- [Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training](https://github.com/lidq92/MDTVSFA)\n- [Deep Signature Transforms](https://github.com/patrick-kidger/Deep-Signature-Transforms)\n- [Neural CDEs for Long Time-Series via the Log-ODE Method](https://github.com/jambo6/neuralCDEs-via-logODEs)\n- [Volumetric Grasping Network](https://github.com/ethz-asl/vgn)\n- [Mood Classification using Listening Data](https://github.com/fdlm/listening-moods)\n- [Deterministic Uncertainty Estimation (DUE)](https://github.com/y0ast/DUE)\n- [PyTorch-Hebbian: facilitating local learning in a deep learning framework](https://github.com/Joxis/pytorch-hebbian)\n- [Stochastic Weight Matrix-Based Regularization Methods for Deep Neural Networks](https://github.com/rpatrik96/lod-wmm-2019)\n- [Learning explanations that are hard to vary](https://github.com/gibipara92/learning-explanations-hard-to-vary)\n- [The role of disentanglement in generalisation](https://github.com/mmrl/disent-and-gen)\n- [A Probabilistic Programming Approach to Protein Structure Superposition](https://github.com/LysSanzMoreta/Theseus-PP)\n- [PadChest: A large chest x-ray image dataset with multi-label annotated reports](https://github.com/auriml/Rx-thorax-automatic-captioning)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\n\u003csummary\u003e\nBlog articles, tutorials, books\n\u003c/summary\u003e\n\n- [State-of-the-Art Conversational AI with Transfer Learning](https://github.com/huggingface/transfer-learning-conv-ai)\n- [Tutorial on Transfer Learning in NLP held at NAACL 2019](https://github.com/huggingface/naacl_transfer_learning_tutorial)\n- 
[Deep-Reinforcement-Learning-Hands-On-Second-Edition, published by Packt](https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On-Second-Edition)\n- [Once Upon a Repository: How to Write Readable, Maintainable Code with PyTorch](https://towardsdatascience.com/once-upon-a-repository-how-to-write-readable-maintainable-code-with-pytorch-951f03f6a829)\n- [The Hero Rises: Build Your Own SSD](https://allegro.ai/blog/the-hero-rises-build-your-own-ssd/)\n- [Using Optuna to Optimize PyTorch Ignite Hyperparameters](https://medium.com/pytorch/using-optuna-to-optimize-pytorch-ignite-hyperparameters-626ffe6d4783)\n- [PyTorch Ignite - Classifying Tiny ImageNet with EfficientNet](https://towardsdatascience.com/pytorch-ignite-classifying-tiny-imagenet-with-efficientnet-e5b1768e5e8f)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\n\u003csummary\u003e\nToolkits\n\u003c/summary\u003e\n\n- [Project MONAI - AI Toolkit for Healthcare Imaging](https://github.com/Project-MONAI/MONAI)\n- [DeepSeismic - Deep Learning for Seismic Imaging and Interpretation](https://github.com/microsoft/seismic-deeplearning)\n- [Nussl - a flexible, object-oriented Python audio source separation library](https://github.com/nussl/nussl)\n- [PyTorch Adapt - A fully featured and modular domain adaptation library](https://github.com/KevinMusgrave/pytorch-adapt)\n- [gnina-torch: PyTorch implementation of GNINA scoring function](https://github.com/RMeli/gnina-torch)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\n\u003csummary\u003e\nOthers\n\u003c/summary\u003e\n\n- [Implementation of \"Attention is All You Need\" paper](https://github.com/akurniawan/pytorch-transformer)\n- [Implementation of DropBlock: A regularization method for convolutional networks in PyTorch](https://github.com/miguelvr/dropblock)\n- [Kaggle Kuzushiji Recognition: 2nd place solution](https://github.com/lopuhin/kaggle-kuzushiji-2019)\n- [Unsupervised Data Augmentation experiments in 
PyTorch](https://github.com/vfdev-5/UDA-pytorch)\n- [Hyperparameters tuning with Optuna](https://github.com/optuna/optuna-examples/blob/main/pytorch/pytorch_ignite_simple.py)\n- [Logging with ChainerUI](https://chainerui.readthedocs.io/en/latest/reference/module.html#external-library-support)\n- [FixMatch experiments in PyTorch and Ignite (CTA dataaug policy)](https://github.com/vfdev-5/FixMatch-pytorch)\n- [Kaggle Birdcall Identification Competition: 1st place solution](https://github.com/ryanwongsa/kaggle-birdsong-recognition)\n- [Logging with Aim - An open-source experiment tracker](https://aimstack.readthedocs.io/en/latest/quick_start/integrations.html#integration-with-pytorch-ignite)\n\n\u003c/details\u003e\n\nSee other projects at [\"Used by\"](https://github.com/pytorch/ignite/network/dependents?package_id=UGFja2FnZS02NzI5ODEwNA%3D%3D)\n\nIf your project uses Ignite and implements a paper, represents a use-case not\ncovered in our official tutorials, contains Kaggle competition code, or simply\npresents interesting results, we would like to\nadd it to this list. Please send a PR with a brief\ndescription of the project.\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# Citing Ignite\n\nIf you use PyTorch-Ignite in a scientific publication, we would appreciate citations to our project.\n\n```\n@misc{pytorch-ignite,\n  author = {V. Fomin and J. Anmol and S. Desroziers and J. Kriss and A. 
Tejani},\n  title = {High-level library to help with training neural networks in PyTorch},\n  year = {2020},\n  publisher = {GitHub},\n  journal = {GitHub repository},\n  howpublished = {\\url{https://github.com/pytorch/ignite}},\n}\n```\n\n\u003c!-- ############################################################################################################### --\u003e\n\n# About the team \u0026 Disclaimer\n\nPyTorch-Ignite is a [NumFOCUS Affiliated Project](https://www.numfocus.org/), operated and maintained by volunteers in the PyTorch community in their capacities as individuals\n(and not as representatives of their employers). See the [\"About us\"](https://pytorch-ignite.ai/about/community/#about-us)\npage for a list of core contributors. For usage questions and issues, please see the various channels\n[here](#communication). For all other questions and inquiries, please send an email\nto contact@pytorch-ignite.ai.\n","funding_links":["https://github.com/sponsors/vfdev-5","https://opencollective.com/pytorch-ignite"],"categories":["The Data Science Toolbox","Neural Networks (NN) and Deep Neural Networks (DNN)","Python","Deep Learning","Deep Learning Framework","Pytorch \u0026 related libraries","General","深度学习工具","Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","其他_机器学习与深度学习","Model Training and Orchestration","机器学习框架","Deep Learning Tools","Reinforcement Learning","Repos","🤖 Machine Learning \u0026 AI"],"sub_categories":["Deep Learning Packages","NN/DNN Software Frameworks","PyTorch","High-Level DL APIs","Other libraries:","Other libraries｜其他库:","Inverse Reinforcement Learning","Tools"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpytorch%2Fignite","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpytorch%2Fignite","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpytorch%2Fignite/lists"}