{"id":40977888,"url":"https://github.com/koba-jon/pytorch_cpp","last_synced_at":"2026-01-22T07:05:52.714Z","repository":{"id":43101472,"uuid":"249339575","full_name":"koba-jon/pytorch_cpp","owner":"koba-jon","description":"Deep Learning sample programs using PyTorch in C++","archived":false,"fork":false,"pushed_at":"2025-12-24T03:58:50.000Z","size":263016,"stargazers_count":306,"open_issues_count":5,"forks_count":58,"subscribers_count":13,"default_branch":"master","last_synced_at":"2025-12-24T21:19:16.499Z","etag":null,"topics":["anomaly-detection","autoencoder","convolutional-autoencoder","cpp","dagmm","dcgan","deep-learning","dimensionality-reduction","generative-modeling","image-to-image-translation","libtorch","linux","multiclass-classification","object-detection","pix2pix","pytorch","semantic-segmentation","u-net","vae","yolo"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/koba-jon.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2020-03-23T05:00:08.000Z","updated_at":"2025-12-15T03:12:24.000Z","dependencies_parsed_at":"2025-09-04T10:15:53.984Z","dependency_job_id":"bde64d71-9b01-49e4-b716-5cbdea3d429e","html_url":"https://github.com/koba-jon/pytorch_cpp","commit_stats":null,"previous_names":[],"tags_count":28,"template":false,"template_full_name":null,"purl":"pkg:github/koba-jon/pytorch_cpp","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/koba-jon%2Fpytorch_cpp","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/koba-jon%2Fpytorch_cpp/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/koba-jon%2Fpytorch_cpp/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/koba-jon%2Fpytorch_cpp/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/koba-jon","download_url":"https://codeload.github.com/koba-jon/pytorch_cpp/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/koba-jon%2Fpytorch_cpp/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28657629,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-22T01:17:37.254Z","status":"online","status_checked_at":"2026-01-22T02:00:07.137Z","response_time":144,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["anomaly-detection","autoencoder","convolutional-autoencoder","cpp","dagmm","dcgan","deep-learni
ng","dimensionality-reduction","generative-modeling","image-to-image-translation","libtorch","linux","multiclass-classification","object-detection","pix2pix","pytorch","semantic-segmentation","u-net","vae","yolo"],"created_at":"2026-01-22T07:05:52.092Z","updated_at":"2026-01-22T07:05:52.709Z","avatar_url":"https://github.com/koba-jon.png","language":"C++","readme":"\u003cdiv align=\"center\"\u003e\n  \n# 🔥 PyTorch C++ Samples 🔥\n  \n[![Language](https://img.shields.io/badge/Language-C++-blue)]()\n[![LibTorch](https://img.shields.io/badge/LibTorch-2.10.0-orange)]()\n[![OS](https://img.shields.io/badge/OS-Ubuntu-yellow)]()\n[![OS](https://img.shields.io/badge/License-MIT-green)]()\n![sample1](sample1.png)\n![sample2](sample2.gif)\n\n\u003c/div\u003e\n\n\n\n## 🚀 Quick Start (Details: \u003ca href=\"#-requirement-library\"\u003eLibrary\u003c/a\u003e, \u003ca href=\"#-preparation-run\"\u003eRun\u003c/a\u003e)\nRequirements: `LibTorch`, `OpenCV`, `OpenMP`, `Boost`, `Gnuplot`, `libpng/png++/zlib` \u003cbr\u003e\n\n### 1. Git Clone\n\n~~~\n$ git clone https://github.com/koba-jon/pytorch_cpp.git\n$ cd pytorch_cpp\n$ sudo apt install g++-8\n~~~\n\n### 2. Run\n\n**(1) Change Directory** (Model: \u003ca href=\"Dimensionality_Reduction/AE1d\"\u003eAE1d\u003c/a\u003e)\n~~~\n$ cd Dimensionality_Reduction/AE1d\n~~~\n\n**(2) Build**\n~~~\n$ mkdir build\n$ cd build\n$ cmake ..\n$ make -j4\n$ cd ..\n~~~\n\n**(3) Dataset Setting** (Dataset: \u003ca href=\"https://huggingface.co/datasets/koba-jon/normal_distribution_dataset\"\u003eNormal Distribution Dataset\u003c/a\u003e)\n~~~\n$ cd datasets\n$ git clone https://huggingface.co/datasets/koba-jon/normal_distribution_dataset\n$ ln -s normal_distribution_dataset/NormalDistribution ./NormalDistribution\n$ cd ..\n~~~\n\n**(4) Training**\n~~~\n$ sh scripts/train.sh\n~~~\n\n**(5) Test**\n~~~\n$ sh scripts/test.sh\n~~~\n\n## 🔄 Updates (MM/DD/YYYY)\n\n01/22/2026: Release of `v2.10.0` \u003cbr\u003e\n12/22/2025: Implementation of `AdaIN` \u003cbr\u003e\n12/20/2025: Implementation of `NST` \u003cbr\u003e\n12/06/2025: Release of `v2.9.1.4` \u003cbr\u003e\n12/01/2025: Release of `v2.9.1.3` \u003cbr\u003e\n12/01/2025: Implementation of `PatchCore` \u003cbr\u003e\n11/29/2025: Release of `v2.9.1.2` \u003cbr\u003e\n11/29/2025: Implementation of `PaDiM` \u003cbr\u003e\n11/27/2025: Implementation of `WideResNet` \u003cbr\u003e\n11/27/2025: Release of `v2.9.1.1` \u003cbr\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eSee more...\u003c/summary\u003e\n  \n11/24/2025: Implementation of `ESRGAN` \u003cbr\u003e\n11/21/2025: Implementation of `SRGAN` \u003cbr\u003e\n11/19/2025: Implementation of `DiT` \u003cbr\u003e\n11/14/2025: Release of `v2.9.1` \u003cbr\u003e\n11/01/2025: Implementation of `NeRF` and `3DGS` \u003cbr\u003e\n10/16/2025: Release of `v2.9.0` \u003cbr\u003e\n10/16/2025: Implementation of `PixelSNAIL-Gray` and `PixelSNAIL-RGB` \u003cbr\u003e\n10/14/2025: Implementation of `YOLOv8` \u003cbr\u003e\n10/13/2025: Implementation of `YOLOv5` \u003cbr\u003e\n10/09/2025: Implementation of `RF2d` \u003cbr\u003e\n10/08/2025: Implementation of `FM2d` \u003cbr\u003e\n10/08/2025: Implementation of `LDM` \u003cbr\u003e\n10/04/2025: Implementation of `Glow` \u003cbr\u003e\n10/01/2025: Implementation of `Real-NVP2d` \u003cbr\u003e\n09/28/2025: Implementation of `Planar-Flow2d` and `Radial-Flow2d` \u003cbr\u003e\n09/25/2025: Release of `v2.8.0.2` \u003cbr\u003e\n09/22/2025: Implementation of `PixelCNN-Gray` and `PixelCNN-RGB` \u003cbr\u003e\n09/18/2025: Implementation of `VQ-VAE-2` 
## 🔄 Updates (MM/DD/YYYY)

01/22/2026: Release of `v2.10.0` <br>
12/22/2025: Implementation of `AdaIN` <br>
12/20/2025: Implementation of `NST` <br>
12/06/2025: Release of `v2.9.1.4` <br>
12/01/2025: Release of `v2.9.1.3` <br>
12/01/2025: Implementation of `PatchCore` <br>
11/29/2025: Release of `v2.9.1.2` <br>
11/29/2025: Implementation of `PaDiM` <br>
11/27/2025: Implementation of `WideResNet` <br>
11/27/2025: Release of `v2.9.1.1` <br>

<details>
<summary>See more...</summary>

11/24/2025: Implementation of `ESRGAN` <br>
11/21/2025: Implementation of `SRGAN` <br>
11/19/2025: Implementation of `DiT` <br>
11/14/2025: Release of `v2.9.1` <br>
11/01/2025: Implementation of `NeRF` and `3DGS` <br>
10/16/2025: Release of `v2.9.0` <br>
10/16/2025: Implementation of `PixelSNAIL-Gray` and `PixelSNAIL-RGB` <br>
10/14/2025: Implementation of `YOLOv8` <br>
10/13/2025: Implementation of `YOLOv5` <br>
10/09/2025: Implementation of `RF2d` <br>
10/08/2025: Implementation of `FM2d` <br>
10/08/2025: Implementation of `LDM` <br>
10/04/2025: Implementation of `Glow` <br>
10/01/2025: Implementation of `Real-NVP2d` <br>
09/28/2025: Implementation of `Planar-Flow2d` and `Radial-Flow2d` <br>
09/25/2025: Release of `v2.8.0.2` <br>
09/22/2025: Implementation of `PixelCNN-Gray` and `PixelCNN-RGB` <br>
09/18/2025: Implementation of `VQ-VAE-2` <br>
09/16/2025: Implementation of `VQ-VAE` <br>
09/14/2025: Implementation of `PNDM2d` <br>
09/14/2025: Release of `v2.8.0.1` <br>
09/12/2025: Implementation of `SimCLR` <br>
09/11/2025: Implementation of `MAE` <br>
09/10/2025: Implementation of EMA for `DDPM2d` and `DDIM2d` <br>
09/08/2025: Implementation of `EfficientNet` <br>
09/07/2025: Implementation of `CycleGAN` <br>
09/05/2025: Implementation of `ViT` <br>
09/04/2025: Release of `v2.8.0` <br>
09/04/2025: Implementation of `DDIM2d` <br>
09/04/2025: Implementation of `DDPM2d` <br>
06/27/2023: Release of `v2.0.1` <br>
06/27/2023: Created the heatmap for Anomaly Detection <br>
05/07/2023: Release of `v2.0.0` <br>
03/01/2023: Release of `v1.13.1` <br>
09/12/2022: Release of `v1.12.1` <br>
08/04/2022: Release of `v1.12.0` <br>
03/18/2022: Release of `v1.11.0` <br>
02/10/2022: Release of `v1.10.2` <br>
02/09/2022: Implementation of `YOLOv3` <br>
01/09/2022: Release of `v1.10.1` <br>
01/09/2022: Fixed execution error in test on CPU package <br>
11/12/2021: Release of `v1.10.0` <br>
09/27/2021: Release of `v1.9.1` <br>
09/27/2021: Support for using different devices between training and test <br>
09/06/2021: Improved accuracy of time measurement using GPU <br>
06/19/2021: Release of `v1.9.0` <br>
03/29/2021: Release of `v1.8.1` <br>
03/18/2021: Implementation of `Discriminator` from DCGAN <br>
03/17/2021: Implementation of `AE1d` <br>
03/16/2021: Release of `v1.8.0` <br>
03/15/2021: Implementation of `YOLOv2` <br>
02/11/2021: Implementation of `YOLOv1` <br>
01/21/2021: Release of `v1.7.1` <br>
10/30/2020: Release of `v1.7.0` <br>
10/04/2020: Implementation of `Skip-GANomaly2d` <br>
10/03/2020: Implementation of `GANomaly2d` <br>
09/29/2020: Implementation of `EGBAD2d` <br>
09/28/2020: Implementation of `AnoGAN2d` <br>
09/27/2020: Implementation of `SegNet` <br>
09/26/2020: Implementation of `DAE2d` <br>
09/13/2020: Implementation of `ResNet` <br>
09/07/2020: Implementation of `VGGNet` <br>
09/05/2020: Implementation of `AlexNet` <br>
09/02/2020: Implementation of `WAE2d GAN` and `WAE2d MMD` <br>
08/30/2020: Release of `v1.6.0` <br>
06/26/2020: Implementation of `DAGMM2d` <br>
06/26/2020: Release of `v1.5.1` <br>
06/26/2020: Implementation of `VAE2d` and `DCGAN` <br>
06/01/2020: Implementation of `Pix2Pix` <br>
05/29/2020: Implementation of `U-Net Classification` <br>
05/26/2020: Implementation of `U-Net Regression` <br>
04/24/2020: Release of `v1.5.0` <br>
03/23/2020: Implementation of `AE2d` <br>

</details>
## 🏗️ Implementation

### 📊 Multiclass Classification

<table>
  <tr>
    <th>Category</th>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td rowspan="6">CNNs</td>
    <td>AlexNet</td>
    <td><a href="http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networ">A. Krizhevsky et al.</a></td>
    <td>NeurIPS 2012</td>
    <td><a href="Multiclass_Classification/AlexNet">AlexNet</a></td>
  </tr>
  <tr>
    <td>VGGNet</td>
    <td><a href="https://arxiv.org/abs/1409.1556">K. Simonyan et al.</a></td>
    <td>ICLR 2015</td>
    <td><a href="Multiclass_Classification/VGGNet">VGGNet</a></td>
  </tr>
  <tr>
    <td>ResNet</td>
    <td><a href="https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html">K. He et al.</a></td>
    <td>CVPR 2016</td>
    <td><a href="Multiclass_Classification/ResNet">ResNet</a></td>
  </tr>
  <tr>
    <td>WideResNet</td>
    <td><a href="https://arxiv.org/abs/1605.07146">S. Zagoruyko et al.</a></td>
    <td>arXiv 2016</td>
    <td><a href="Multiclass_Classification/WideResNet">WideResNet</a></td>
  </tr>
  <tr>
    <td>Discriminator</td>
    <td><a href="https://arxiv.org/abs/1511.06434">A. Radford et al.</a></td>
    <td>ICLR 2016</td>
    <td><a href="Multiclass_Classification/Discriminator">Discriminator</a></td>
  </tr>
  <tr>
    <td>EfficientNet</td>
    <td><a href="https://proceedings.mlr.press/v97/tan19a.html">M. Tan et al.</a></td>
    <td>ICML 2019</td>
    <td><a href="Multiclass_Classification/EfficientNet">EfficientNet</a></td>
  </tr>
  <tr>
    <td rowspan="1">Transformers</td>
    <td>Vision Transformer</td>
    <td><a href="https://arxiv.org/abs/2010.11929">A. Dosovitskiy et al.</a></td>
    <td>ICLR 2021</td>
    <td><a href="Multiclass_Classification/ViT">ViT</a></td>
  </tr>
</table>

### 🔽 Dimensionality Reduction

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td rowspan="2">Autoencoder</td>
    <td rowspan="2"><a href="https://science.sciencemag.org/content/313/5786/504.abstract">G. E. Hinton et al.</a></td>
    <td rowspan="2">Science 2006</td>
    <td><a href="Dimensionality_Reduction/AE1d">AE1d</a></td>
  </tr>
  <tr>
    <td><a href="Dimensionality_Reduction/AE2d">AE2d</a></td>
  </tr>
  <tr>
    <td>Denoising Autoencoder</td>
    <td><a href="https://dl.acm.org/doi/abs/10.1145/1390156.1390294">P. Vincent et al.</a></td>
    <td>ICML 2008</td>
    <td><a href="Dimensionality_Reduction/DAE2d">DAE2d</a></td>
  </tr>
</table>
### 🎨 Generative Modeling

<table>
  <tr>
    <th>Category</th>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td rowspan="5">VAEs</td>
    <td>Variational Autoencoder</td>
    <td><a href="https://arxiv.org/abs/1312.6114">D. P. Kingma et al.</a></td>
    <td>ICLR 2014</td>
    <td><a href="Generative_Modeling/VAE2d">VAE2d</a></td>
  </tr>
  <tr>
    <td rowspan="2">Wasserstein Autoencoder</td>
    <td rowspan="2"><a href="https://openreview.net/forum?id=HkL7n1-0b">I. Tolstikhin et al.</a></td>
    <td rowspan="2">ICLR 2018</td>
    <td><a href="Generative_Modeling/WAE2d_GAN">WAE2d GAN</a></td>
  </tr>
  <tr>
    <td><a href="Generative_Modeling/WAE2d_MMD">WAE2d MMD</a></td>
  </tr>
  <tr>
    <td>VQ-VAE</td>
    <td><a href="https://proceedings.neurips.cc/paper/2017/hash/7a98af17e63a0ac09ce2e96d03992fbc-Abstract.html">A. v. d. Oord et al.</a></td>
    <td>NeurIPS 2017</td>
    <td><a href="Generative_Modeling/VQ-VAE">VQ-VAE</a></td>
  </tr>
  <tr>
    <td>VQ-VAE-2</td>
    <td><a href="https://proceedings.neurips.cc/paper/2019/hash/5f8e2fa1718d1bbcadf1cd9c7a54fb8c-Abstract.html">A. Razavi et al.</a></td>
    <td>NeurIPS 2019</td>
    <td><a href="Generative_Modeling/VQ-VAE-2">VQ-VAE-2</a></td>
  </tr>
  <tr>
    <td rowspan="1">GANs</td>
    <td>DCGAN</td>
    <td><a href="https://arxiv.org/abs/1511.06434">A. Radford et al.</a></td>
    <td>ICLR 2016</td>
    <td><a href="Generative_Modeling/DCGAN">DCGAN</a></td>
  </tr>
  <tr>
    <td rowspan="4">Flows</td>
    <td>Planar Flow</td>
    <td><a href="https://proceedings.mlr.press/v37/rezende15">D. Rezende et al.</a></td>
    <td>ICML 2015</td>
    <td><a href="Generative_Modeling/Planar-Flow2d">Planar-Flow2d</a></td>
  </tr>
  <tr>
    <td>Radial Flow</td>
    <td><a href="https://proceedings.mlr.press/v37/rezende15">D. Rezende et al.</a></td>
    <td>ICML 2015</td>
    <td><a href="Generative_Modeling/Radial-Flow2d">Radial-Flow2d</a></td>
  </tr>
  <tr>
    <td>Real NVP</td>
    <td><a href="https://arxiv.org/abs/1605.08803">L. Dinh et al.</a></td>
    <td>ICLR 2017</td>
    <td><a href="Generative_Modeling/Real-NVP2d">Real-NVP2d</a></td>
  </tr>
  <tr>
    <td>Glow</td>
    <td><a href="https://arxiv.org/abs/1807.03039">D. P. Kingma et al.</a></td>
    <td>NeurIPS 2018</td>
    <td><a href="Generative_Modeling/Glow">Glow</a></td>
  </tr>
  <tr>
    <td rowspan="5">Diffusion Models</td>
    <td>DDPM</td>
    <td><a href="https://arxiv.org/abs/2006.11239">J. Ho et al.</a></td>
    <td>NeurIPS 2020</td>
    <td><a href="Generative_Modeling/DDPM2d">DDPM2d</a></td>
  </tr>
  <tr>
    <td>DDIM</td>
    <td><a href="https://arxiv.org/abs/2010.02502">J. Song et al.</a></td>
    <td>ICLR 2021</td>
    <td><a href="Generative_Modeling/DDIM2d">DDIM2d</a></td>
  </tr>
  <tr>
    <td>PNDM</td>
    <td><a href="https://arxiv.org/abs/2202.09778">L. Liu et al.</a></td>
    <td>ICLR 2022</td>
    <td><a href="Generative_Modeling/PNDM2d">PNDM2d</a></td>
  </tr>
  <tr>
    <td>LDM</td>
    <td><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper">R. Rombach et al.</a></td>
    <td>CVPR 2022</td>
    <td><a href="Generative_Modeling/LDM">LDM</a></td>
  </tr>
  <tr>
    <td>Diffusion Transformer</td>
    <td><a href="https://openaccess.thecvf.com/content/ICCV2023/html/Peebles_Scalable_Diffusion_Models_with_Transformers_ICCV_2023_paper.html">W. Peebles et al.</a></td>
    <td>ICCV 2023</td>
    <td><a href="Generative_Modeling/DiT">DiT</a></td>
  </tr>
  <tr>
    <td rowspan="2">Flow Matching</td>
    <td>Flow Matching</td>
    <td><a href="https://openreview.net/forum?id=PqvMRDCJT9t">Y. Lipman et al.</a></td>
    <td>ICLR 2023</td>
    <td><a href="Generative_Modeling/FM2d">FM2d</a></td>
  </tr>
  <tr>
    <td>Rectified Flow</td>
    <td><a href="https://openreview.net/forum?id=XVjTT1nw5z">X. Liu et al.</a></td>
    <td>ICLR 2023</td>
    <td><a href="Generative_Modeling/RF2d">RF2d</a></td>
  </tr>
  <tr>
    <td rowspan="4">Autoregressive Models</td>
    <td rowspan="2">PixelCNN</td>
    <td rowspan="2"><a href="https://proceedings.mlr.press/v48/oord16.html">A. v. d. Oord et al.</a></td>
    <td rowspan="2">ICML 2016</td>
    <td><a href="Generative_Modeling/PixelCNN-Gray">PixelCNN-Gray</a></td>
  </tr>
  <tr>
    <td><a href="Generative_Modeling/PixelCNN-RGB">PixelCNN-RGB</a></td>
  </tr>
  <tr>
    <td rowspan="2">PixelSNAIL</td>
    <td rowspan="2"><a href="https://proceedings.mlr.press/v80/chen18h.html">X. Chen et al.</a></td>
    <td rowspan="2">ICML 2018</td>
    <td><a href="Generative_Modeling/PixelSNAIL-Gray">PixelSNAIL-Gray</a></td>
  </tr>
  <tr>
    <td><a href="Generative_Modeling/PixelSNAIL-RGB">PixelSNAIL-RGB</a></td>
  </tr>
</table>
### 🖼️ Image-to-Image Translation

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td>U-Net</td>
    <td><a href="https://arxiv.org/abs/1505.04597">O. Ronneberger et al.</a></td>
    <td>MICCAI 2015</td>
    <td><a href="Image-to-Image_Translation/U-Net_Regression">U-Net Regression</a></td>
  </tr>
  <tr>
    <td>Pix2Pix</td>
    <td><a href="https://openaccess.thecvf.com/content_cvpr_2017/html/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.html">P. Isola et al.</a></td>
    <td>CVPR 2017</td>
    <td><a href="Image-to-Image_Translation/Pix2Pix">Pix2Pix</a></td>
  </tr>
  <tr>
    <td>CycleGAN</td>
    <td><a href="https://openaccess.thecvf.com/content_iccv_2017/html/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.html">J.-Y. Zhu et al.</a></td>
    <td>ICCV 2017</td>
    <td><a href="Image-to-Image_Translation/CycleGAN">CycleGAN</a></td>
  </tr>
</table>

### 🔍 Super Resolution

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td>SRGAN</td>
    <td><a href="https://openaccess.thecvf.com/content_cvpr_2017/html/Ledig_Photo-Realistic_Single_Image_CVPR_2017_paper.html">C. Ledig et al.</a></td>
    <td>CVPR 2017</td>
    <td><a href="Super_Resolution/SRGAN">SRGAN</a></td>
  </tr>
  <tr>
    <td>ESRGAN</td>
    <td><a href="https://openaccess.thecvf.com/content_eccv_2018_workshops/w25/html/Wang_ESRGAN_Enhanced_Super-Resolution_Generative_Adversarial_Networks_ECCVW_2018_paper.html">X. Wang et al.</a></td>
    <td>ECCV Workshops 2018</td>
    <td><a href="Super_Resolution/ESRGAN">ESRGAN</a></td>
  </tr>
</table>

### 🖌️ Style Transfer

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td>Neural Style Transfer</td>
    <td><a href="https://openaccess.thecvf.com/content_cvpr_2016/html/Gatys_Image_Style_Transfer_CVPR_2016_paper.html">L. A. Gatys et al.</a></td>
    <td>CVPR 2016</td>
    <td><a href="Style_Transfer/NST">NST</a></td>
  </tr>
  <tr>
    <td>Adaptive Instance Normalization</td>
    <td><a href="https://openaccess.thecvf.com/content_iccv_2017/html/Huang_Arbitrary_Style_Transfer_ICCV_2017_paper.html">X. Huang et al.</a></td>
    <td>ICCV 2017</td>
    <td><a href="Style_Transfer/AdaIN">AdaIN</a></td>
  </tr>
</table>
### 🧩 Semantic Segmentation

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td>SegNet</td>
    <td><a href="https://arxiv.org/abs/1511.00561">V. Badrinarayanan et al.</a></td>
    <td>arXiv 2015</td>
    <td><a href="Semantic_Segmentation/SegNet">SegNet</a></td>
  </tr>
  <tr>
    <td>U-Net</td>
    <td><a href="https://arxiv.org/abs/1505.04597">O. Ronneberger et al.</a></td>
    <td>MICCAI 2015</td>
    <td><a href="Semantic_Segmentation/U-Net_Classification">U-Net Classification</a></td>
  </tr>
</table>

### 🎯 Object Detection

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td>YOLOv1</td>
    <td><a href="https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Redmon_You_Only_Look_CVPR_2016_paper.html">J. Redmon et al.</a></td>
    <td>CVPR 2016</td>
    <td><a href="Object_Detection/YOLOv1">YOLOv1</a></td>
  </tr>
  <tr>
    <td>YOLOv2</td>
    <td><a href="https://openaccess.thecvf.com/content_cvpr_2017/html/Redmon_YOLO9000_Better_Faster_CVPR_2017_paper.html">J. Redmon et al.</a></td>
    <td>CVPR 2017</td>
    <td><a href="Object_Detection/YOLOv2">YOLOv2</a></td>
  </tr>
  <tr>
    <td>YOLOv3</td>
    <td><a href="https://arxiv.org/abs/1804.02767">J. Redmon et al.</a></td>
    <td>arXiv 2018</td>
    <td><a href="Object_Detection/YOLOv3">YOLOv3</a></td>
  </tr>
  <tr>
    <td>YOLOv5</td>
    <td><a href="https://github.com/ultralytics/yolov5">Ultralytics</a></td>
    <td>-</td>
    <td><a href="Object_Detection/YOLOv5">YOLOv5</a></td>
  </tr>
  <tr>
    <td>YOLOv8</td>
    <td><a href="https://github.com/ultralytics/ultralytics">Ultralytics</a></td>
    <td>-</td>
    <td><a href="Object_Detection/YOLOv8">YOLOv8</a></td>
  </tr>
</table>
### 🧠 Representation Learning

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td>SimCLR</td>
    <td><a href="https://proceedings.mlr.press/v119/chen20j.html">T. Chen et al.</a></td>
    <td>ICML 2020</td>
    <td><a href="Representation_Learning/SimCLR">SimCLR</a></td>
  </tr>
  <tr>
    <td>Masked Autoencoder</td>
    <td><a href="https://openaccess.thecvf.com/content/CVPR2022/html/He_Masked_Autoencoders_Are_Scalable_Vision_Learners_CVPR_2022_paper">K. He et al.</a></td>
    <td>CVPR 2022</td>
    <td><a href="Representation_Learning/MAE">MAE</a></td>
  </tr>
</table>

### 🌐 View Synthesis

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td>Neural Radiance Field</td>
    <td><a href="https://arxiv.org/abs/2003.08934">B. Mildenhall et al.</a></td>
    <td>ECCV 2020</td>
    <td><a href="View_Synthesis/NeRF">NeRF</a></td>
  </tr>
  <tr>
    <td>3D Gaussian Splatting</td>
    <td><a href="https://arxiv.org/abs/2308.04079">B. Kerbl et al.</a></td>
    <td>SIGGRAPH 2023</td>
    <td><a href="View_Synthesis/3DGS">3DGS</a></td>
  </tr>
</table>

### 🚨 Anomaly Detection

<table>
  <tr>
    <th>Model</th>
    <th>Paper</th>
    <th>Conference/Journal</th>
    <th>Code</th>
  </tr>
  <tr>
    <td>AnoGAN</td>
    <td><a href="https://arxiv.org/abs/1703.05921">T. Schlegl et al.</a></td>
    <td>IPMI 2017</td>
    <td><a href="Anomaly_Detection/AnoGAN2d">AnoGAN2d</a></td>
  </tr>
  <tr>
    <td>DAGMM</td>
    <td><a href="https://openreview.net/forum?id=BJJLHbb0-">B. Zong et al.</a></td>
    <td>ICLR 2018</td>
    <td><a href="Anomaly_Detection/DAGMM2d">DAGMM2d</a></td>
  </tr>
  <tr>
    <td>EGBAD</td>
    <td><a href="https://arxiv.org/abs/1802.06222">H. Zenati et al.</a></td>
    <td>ICLR Workshop 2018</td>
    <td><a href="Anomaly_Detection/EGBAD2d">EGBAD2d</a></td>
  </tr>
  <tr>
    <td>GANomaly</td>
    <td><a href="https://arxiv.org/abs/1805.06725">S. Akçay et al.</a></td>
    <td>ACCV 2018</td>
    <td><a href="Anomaly_Detection/GANomaly2d">GANomaly2d</a></td>
  </tr>
  <tr>
    <td>Skip-GANomaly</td>
    <td><a href="https://arxiv.org/abs/1901.08954">S. Akçay et al.</a></td>
    <td>IJCNN 2019</td>
    <td><a href="Anomaly_Detection/Skip-GANomaly2d">Skip-GANomaly2d</a></td>
  </tr>
  <tr>
    <td>PaDiM</td>
    <td><a href="https://arxiv.org/abs/2011.08785">T. Defard et al.</a></td>
    <td>ICPR Workshops 2020</td>
    <td><a href="Anomaly_Detection/PaDiM">PaDiM</a></td>
  </tr>
  <tr>
    <td>PatchCore</td>
    <td><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Roth_Towards_Total_Recall_in_Industrial_Anomaly_Detection_CVPR_2022_paper.html">K. Roth et al.</a></td>
    <td>CVPR 2022</td>
    <td><a href="Anomaly_Detection/PatchCore">PatchCore</a></td>
  </tr>
</table>
## 📦 Requirement (Library)

<details>
<summary>Details</summary>

### 1. PyTorch C++
Please select your environment on the official PyTorch site as follows. <br>
PyTorch official : https://pytorch.org/ <br>
***
PyTorch Build : Stable (2.10.0) <br>
Your OS : Linux <br>
Package : LibTorch <br>
Language : C++ / Java <br>
Run this Command : Download here (cxx11 ABI) <br>
CUDA 12.6 : https://download.pytorch.org/libtorch/cu126/libtorch-shared-with-deps-2.10.0%2Bcu126.zip <br>
CUDA 12.8 : https://download.pytorch.org/libtorch/cu128/libtorch-shared-with-deps-2.10.0%2Bcu128.zip <br>
CUDA 13.0 : https://download.pytorch.org/libtorch/cu130/libtorch-shared-with-deps-2.10.0%2Bcu130.zip <br>
CPU : https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-2.10.0%2Bcpu.zip <br>
***

### 2. OpenCV
Version : 3.0.0 or later <br>
This is used for pre-processing and post-processing. <br>
Please refer to other sites for detailed installation instructions.

### 3. OpenMP
This is used to load data in parallel. <br>
(It is usually preinstalled on standard Linux distributions.)
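For illustration only, the parallel-loading idea looks roughly like the following minimal sketch. This is not the repository's loader; `load_one` is a hypothetical stand-in for the per-file decoding work.

~~~
// omp_load_sketch.cpp : hypothetical sketch of OpenMP-parallel data loading
// (illustrative only; not the repository's loader).
// Compile with: g++ -fopenmp omp_load_sketch.cpp
#include <vector>
#include <string>
#include <cstdio>

// Hypothetical stand-in for decoding one file (e.g. reading an image).
static std::vector<float> load_one(const std::string &path) {
    return std::vector<float>(3 * 64 * 64, 0.0f);  // dummy payload
}

int main() {
    std::vector<std::string> paths(16, "dummy.png");
    std::vector<std::vector<float>> batch(paths.size());

    // Each iteration is independent, so the files can be decoded in parallel.
    #pragma omp parallel for
    for (long i = 0; i < (long)paths.size(); i++) {
        batch[i] = load_one(paths[i]);
    }

    std::printf("loaded %zu samples\n", batch.size());
    return 0;
}
~~~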
### 4. Boost
This is used for command line arguments, etc. <br>
~~~
$ sudo apt install libboost-dev libboost-all-dev
~~~

### 5. Gnuplot
This is used to display the loss graph. <br>
~~~
$ sudo apt install gnuplot
~~~

### 6. libpng/png++/zlib
These are used to load and save index-color images in semantic segmentation. <br>
~~~
$ sudo apt install libpng-dev libpng++-dev zlib1g-dev
~~~

</details>

## 🏃 Preparation (Run)

<details>
<summary>Details</summary>

### 1. Git Clone
~~~
$ git clone https://github.com/koba-jon/pytorch_cpp.git
$ cd pytorch_cpp
~~~

### 2. Path Setting
~~~
$ vi utils/CMakeLists.txt
~~~
Please change the 4th line of "CMakeLists.txt" according to the path of the directory "libtorch". <br>
The following is an example where the directory "libtorch" is located directly under the directory "HOME".
~~~
3: # LibTorch
4: set(LIBTORCH_DIR $ENV{HOME}/libtorch)
5: list(APPEND CMAKE_PREFIX_PATH ${LIBTORCH_DIR})
~~~

### 3. Compiler Install
If you don't have g++ version 8 or above, install it.
~~~
$ sudo apt install g++-8
~~~

### 4. Execution
Please move to the directory of each model and refer to its "README.md".

</details>

## 🛠️ Utility

<details>
<summary>Details</summary>

### 1. Making Original Dataset
Please create a link to the original dataset.<br>
The following is an example of "AE2d" using the "celebA" dataset.
~~~
$ cd Dimensionality_Reduction/AE2d/datasets
$ ln -s <dataset_path> ./celebA_org
~~~
Substitute the path of your dataset for `<dataset_path>`.<br>
Please make sure the training or test data is directly under `<dataset_path>`.
~~~
$ vi ../../../scripts/hold_out.sh
~~~
Please edit this file for your original dataset.
~~~
#!/bin/bash

SCRIPT_DIR=$(cd $(dirname $0); pwd)

python3 ${SCRIPT_DIR}/hold_out.py \
    --input_dir "celebA_org" \
    --output_dir "celebA" \
    --train_rate 9 \
    --valid_rate 1
~~~
By running this file, you can split the data into training and validation sets.
~~~
$ sudo apt install python3 python3-pip
$ pip3 install natsort
$ sh ../../../scripts/hold_out.sh
$ cd ../../..
~~~
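The split itself is performed by `scripts/hold_out.py`. Purely for illustration, a hypothetical C++17 equivalent of the same 9:1 hold-out idea might look like the sketch below; the cyclic assignment is an assumption for the example, not necessarily the script's exact algorithm.

~~~
// hold_out_sketch.cpp : hypothetical C++17 sketch of a 9:1 hold-out split
// (illustrative only; the repository uses scripts/hold_out.py instead).
#include <filesystem>
#include <iostream>
#include <vector>
#include <algorithm>

namespace fs = std::filesystem;

int main() {
    const fs::path input_dir = "celebA_org";  // files assumed directly under input_dir
    const fs::path output_dir = "celebA";
    const int train_rate = 9, valid_rate = 1;

    std::vector<fs::path> files;
    for (const auto &entry : fs::directory_iterator(input_dir)) {
        if (entry.is_regular_file()) files.push_back(entry.path());
    }
    // Plain lexicographic order here; the Python script uses natsort instead.
    std::sort(files.begin(), files.end());

    fs::create_directories(output_dir / "train");
    fs::create_directories(output_dir / "valid");

    // Send train_rate files to "train" for every valid_rate files sent to "valid".
    const long cycle = train_rate + valid_rate;
    for (size_t i = 0; i < files.size(); i++) {
        const char *split = ((long)(i % cycle) < train_rate) ? "train" : "valid";
        fs::copy_file(files[i], output_dir / split / files[i].filename(),
                      fs::copy_options::overwrite_existing);
    }
    std::cout << "split " << files.size() << " files into train/valid" << std::endl;
    return 0;
}
~~~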
### 2. Data Input System
This repository provides its own transform, dataset, and dataloader classes for data input.<br>
They correspond to the following source files, to which new functionality can be added; a generic sketch of the same flow follows the list.
- transforms.cpp
- transforms.hpp
- datasets.cpp
- datasets.hpp
- dataloader.cpp
- dataloader.hpp
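The classes above are the repository's own implementation. Purely as a point of reference, the same transform → dataset → dataloader flow expressed with LibTorch's built-in `torch::data` API looks roughly like this (a sketch under that assumption, not the repository's code):

~~~
// data_input_sketch.cpp : the transform -> dataset -> dataloader flow expressed with
// LibTorch's built-in torch::data API, for comparison only (the repository uses its
// own transforms.cpp / datasets.cpp / dataloader.cpp instead).
#include <torch/torch.h>
#include <iostream>

// A minimal custom dataset returning random "images" with dummy labels.
struct RandomImages : torch::data::datasets::Dataset<RandomImages> {
    torch::data::Example<> get(size_t index) override {
        return {torch::randn({3, 64, 64}),
                torch::tensor(static_cast<int64_t>(index % 10))};
    }
    torch::optional<size_t> size() const override { return 100; }
};

int main() {
    // Transforms: normalize each example, then stack examples into batched tensors.
    auto dataset = RandomImages()
        .map(torch::data::transforms::Normalize<>(0.5, 0.5))
        .map(torch::data::transforms::Stack<>());

    // Dataloader: mini-batches of 16 drawn by the default random sampler.
    auto loader = torch::data::make_data_loader(
        std::move(dataset), torch::data::DataLoaderOptions().batch_size(16));

    for (auto &batch : *loader) {
        std::cout << batch.data.sizes() << " " << batch.target.sizes() << std::endl;
        break;  // one batch is enough for the illustration
    }
    return 0;
}
~~~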
### 3. Check Progress
This repository has a feature to check training progress.<br>
We can watch the epoch number, loss, elapsed time, and speed during training.<br>
![util1](https://user-images.githubusercontent.com/56967584/88464264-3f720300-cef4-11ea-85fd-360cb3a424d1.png)<br>
It corresponds to the following source files; a minimal timing sketch follows the list.
- progress.cpp
- progress.hpp
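As a rough illustration of the underlying idea only (the repository's `progress.cpp` is more elaborate; this sketch is not it), per-iteration speed can be measured with `std::chrono`:

~~~
// progress_sketch.cpp : minimal illustration of measuring training speed with
// std::chrono (not the repository's progress.cpp).
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const int iterations = 10;
    auto start = clock::now();

    for (int i = 1; i <= iterations; i++) {
        // Stand-in for one training step.
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        double sec = std::chrono::duration<double>(clock::now() - start).count();
        std::printf("iter %d/%d  elapsed %.2fs  speed %.2f it/s\n",
                    i, iterations, sec, i / sec);
    }
    return 0;
}
~~~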
### 4. Monitoring System
This repository has a monitoring system for training.<br>
We can watch the output images and the loss graph.<br>
The output images are saved in the "samples" subdirectory of the "checkpoints" directory created during training.<br>
The loss graph is saved in the "graph" subdirectory of the same "checkpoints" directory.<br>
![util2](https://user-images.githubusercontent.com/56967584/88464268-40a33000-cef4-11ea-8a3c-da42d4c803b6.png)<br>
It corresponds to the following source files.
- visualizer.cpp
- visualizer.hpp

</details>

## ⚖️ License

<details>
<summary>Details</summary>

Feel free to use all source code in this repository.<br>
(Click [here](LICENSE) for details.)<br>

However, be careful when you make use of the external libraries (e.g. when redistributing them).<br>
At a minimum, the license notices at the following URLs are required.<br>
In addition, third-party copyrights belong to their respective owners.<br>

- PyTorch <br>
Official : https://pytorch.org/ <br>
License : https://github.com/pytorch/pytorch/blob/master/LICENSE <br>

- OpenCV <br>
Official : https://opencv.org/ <br>
License : https://opencv.org/license/ <br>

- OpenMP <br>
Official : https://www.openmp.org/ <br>
License : https://gcc.gnu.org/onlinedocs/ <br>

- Boost <br>
Official : https://www.boost.org/ <br>
License : https://www.boost.org/users/license.html <br>

- Gnuplot <br>
Official : http://www.gnuplot.info/ <br>
License : https://sourceforge.net/p/gnuplot/gnuplot-main/ci/master/tree/Copyright <br>

- libpng/png++/zlib <br>
Official (libpng) : http://www.libpng.org/pub/png/libpng.html <br>
License (libpng) : http://www.libpng.org/pub/png/src/libpng-LICENSE.txt <br>
Official (png++) : https://www.nongnu.org/pngpp/ <br>
License (png++) : https://www.nongnu.org/pngpp/license.html <br>
Official (zlib) : https://zlib.net/ <br>
License (zlib) : https://zlib.net/zlib_license.html <br>

</details>

## 🎉 Conclusion
PyTorch is one of the most popular deep learning frameworks.<br>
Python sample code for it is abundant on the web, so it is easy to write deep learning programs in Python.<br>
However, very little such source code is written in C++, a compiled language.<br>
I hope this repository helps many programmers by providing PyTorch sample programs written in C++.<br>
If you have any problems with the source code of this repository, please feel free to open an issue.<br>
Let's have a good development and research life!