# 🔥 PyTorch C++ Samples 🔥

[![Language](https://img.shields.io/badge/Language-C++-blue)]()
[![LibTorch](https://img.shields.io/badge/LibTorch-2.10.0-orange)]()
[![OS](https://img.shields.io/badge/OS-Ubuntu-yellow)]()
[![License](https://img.shields.io/badge/License-MIT-green)]()
![sample1](sample1.png)
![sample2](sample2.gif)

## 🚀 Quick Start (Details: Library, Run)
Requirements: `LibTorch`, `OpenCV`, `OpenMP`, `Boost`, `Gnuplot`, `libpng/png++/zlib`

### 1. Git Clone

~~~
$ git clone https://github.com/koba-jon/pytorch_cpp.git
$ cd pytorch_cpp
$ sudo apt install g++-8
~~~

### 2. Run

**(1) Change Directory** (Model: AE1d)
~~~
$ cd Dimensionality_Reduction/AE1d
~~~

**(2) Build**
~~~
$ mkdir build
$ cd build
$ cmake ..
$ make -j4
$ cd ..
~~~

**(3) Dataset Setting** (Dataset: Normal Distribution Dataset)
~~~
$ cd datasets
$ git clone https://huggingface.co/datasets/koba-jon/normal_distribution_dataset
$ ln -s normal_distribution_dataset/NormalDistribution ./NormalDistribution
$ cd ..
~~~

**(4) Training**
~~~
$ sh scripts/train.sh
~~~

**(5) Test**
~~~
$ sh scripts/test.sh
~~~

## 🔄 Updates (MM/DD/YYYY)

01/22/2026: Release of `v2.10.0`

12/22/2025: Implementation of `AdaIN`

12/20/2025: Implementation of `NST`

12/06/2025: Release of `v2.9.1.4`

12/01/2025: Release of `v2.9.1.3`

12/01/2025: Implementation of `PatchCore`

11/29/2025: Release of `v2.9.1.2`

11/29/2025: Implementation of `PaDiM`

11/27/2025: Implementation of `WideResNet`

11/27/2025: Release of `v2.9.1.1`

See more...

11/24/2025: Implementation of `ESRGAN`

11/21/2025: Implementation of `SRGAN`

11/19/2025: Implementation of `DiT`

11/14/2025: Release of `v2.9.1`

11/01/2025: Implementation of `NeRF` and `3DGS`

10/16/2025: Release of `v2.9.0`

10/16/2025: Implementation of `PixelSNAIL-Gray` and `PixelSNAIL-RGB`

10/14/2025: Implementation of `YOLOv8`

10/13/2025: Implementation of `YOLOv5`

10/09/2025: Implementation of `RF2d`

10/08/2025: Implementation of `FM2d`

10/08/2025: Implementation of `LDM`

10/04/2025: Implementation of `Glow`

10/01/2025: Implementation of `Real-NVP2d`

09/28/2025: Implementation of `Planar-Flow2d` and `Radial-Flow2d`

09/25/2025: Release of `v2.8.0.2`

09/22/2025: Implementation of `PixelCNN-Gray` and `PixelCNN-RGB`

09/18/2025: Implementation of `VQ-VAE-2`

09/16/2025: Implementation of `VQ-VAE`

09/14/2025: Implementation of `PNDM2d`

09/14/2025: Release of `v2.8.0.1`

09/12/2025: Implementation of `SimCLR`

09/11/2025: Implementation of `MAE`

09/10/2025: Implementation of EMA for `DDPM2d` and `DDIM2d`

09/08/2025: Implementation of `EfficientNet`

09/07/2025: Implementation of `CycleGAN`

09/05/2025: Implementation of `ViT`

09/04/2025: Release of `v2.8.0`

09/04/2025: Implementation of `DDIM2d`

09/04/2025: Implementation of `DDPM2d`

06/27/2023: Release of `v2.0.1`

06/27/2023: Create the heatmap for Anomaly Detection

05/07/2023: Release of `v2.0.0`

03/01/2023: Release of `v1.13.1`

09/12/2022: Release of `v1.12.1`

08/04/2022: Release of `v1.12.0`

03/18/2022: Release of `v1.11.0`

02/10/2022: Release of `v1.10.2`

02/09/2022: Implementation of `YOLOv3`

01/09/2022: Release of `v1.10.1`

01/09/2022: Fixed execution error in test on CPU package

11/12/2021: Release of `v1.10.0`

09/27/2021: Release of `v1.9.1`

09/27/2021: Support for using different devices between training and test

09/06/2021: Improved accuracy of time measurement using GPU

06/19/2021: Release of `v1.9.0`

03/29/2021: Release of `v1.8.1`

03/18/2021: Implementation of `Discriminator` from DCGAN

03/17/2021: Implementation of `AE1d`

03/16/2021: Release of `v1.8.0`

03/15/2021: Implementation of `YOLOv2`

02/11/2021: Implementation of `YOLOv1`

01/21/2021: Release of `v1.7.1`

10/30/2020: Release of `v1.7.0`

10/04/2020: Implementation of `Skip-GANomaly2d`

10/03/2020: Implementation of `GANomaly2d`

09/29/2020: Implementation of `EGBAD2d`

09/28/2020: Implementation of `AnoGAN2d`

09/27/2020: Implementation of `SegNet`

09/26/2020: Implementation of `DAE2d`

09/13/2020: Implementation of `ResNet`

09/07/2020: Implementation of `VGGNet`

09/05/2020: Implementation of `AlexNet`

09/02/2020: Implementation of `WAE2d GAN` and `WAE2d MMD`

08/30/2020: Release of `v1.6.0`

06/26/2020: Implementation of `DAGMM2d`

06/26/2020: Release of `v1.5.1`

06/26/2020: Implementation of `VAE2d` and `DCGAN`

06/01/2020: Implementation of `Pix2Pix`

05/29/2020: Implementation of `U-Net Classification`

05/26/2020: Implementation of `U-Net Regression`

04/24/2020: Release of `v1.5.0`

03/23/2020: Implementation of `AE2d`

## 🏗️ Implementation

### 📊 Multiclass Classification

| Category | Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- | --- |
| CNNs | AlexNet | A. Krizhevsky et al. | NeurIPS 2012 | AlexNet |
| | VGGNet | K. Simonyan et al. | ICLR 2015 | VGGNet |
| | ResNet | K. He et al. | CVPR 2016 | ResNet |
| | WideResNet | S. Zagoruyko et al. | arXiv 2016 | WideResNet |
| | Discriminator | A. Radford et al. | ICLR 2016 | Discriminator |
| | EfficientNet | M. Tan et al. | ICML 2019 | EfficientNet |
| Transformers | Vision Transformer | A. Dosovitskiy et al. | ICLR 2021 | ViT |

### 🔽 Dimensionality Reduction

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| Autoencoder | G. E. Hinton et al. | Science 2006 | AE1d |
| | | | AE2d |
| Denoising Autoencoder | P. Vincent et al. | ICML 2008 | DAE2d |

### 🎨 Generative Modeling

| Category | Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- | --- |
| VAEs | Variational Autoencoder | D. P. Kingma et al. | ICLR 2014 | VAE2d |
| | Wasserstein Autoencoder | I. Tolstikhin et al. | ICLR 2018 | WAE2d GAN |
| | | | | WAE2d MMD |
| | VQ-VAE | A. v. d. Oord et al. | NeurIPS 2017 | VQ-VAE |
| | VQ-VAE-2 | A. Razavi et al. | NeurIPS 2019 | VQ-VAE-2 |
| GANs | DCGAN | A. Radford et al. | ICLR 2016 | DCGAN |
| Flows | Planar Flow | D. Rezende et al. | ICML 2015 | Planar-Flow2d |
| | Radial Flow | D. Rezende et al. | ICML 2015 | Radial-Flow2d |
| | Real NVP | L. Dinh et al. | ICLR 2017 | Real-NVP2d |
| | Glow | D. P. Kingma et al. | NeurIPS 2018 | Glow |
| Diffusion Models | DDPM | J. Ho et al. | NeurIPS 2020 | DDPM2d |
| | DDIM | J. Song et al. | ICLR 2021 | DDIM2d |
| | PNDM | L. Liu et al. | ICLR 2022 | PNDM2d |
| | LDM | R. Rombach et al. | CVPR 2022 | LDM |
| | Diffusion Transformer | W. Peebles et al. | ICCV 2023 | DiT |
| Flow Matching | Flow Matching | Y. Lipman et al. | ICLR 2023 | FM2d |
| | Rectified Flow | X. Liu et al. | ICLR 2023 | RF2d |
| Autoregressive Models | PixelCNN | A. v. d. Oord et al. | ICML 2016 | PixelCNN-Gray |
| | | | | PixelCNN-RGB |
| | PixelSNAIL | X. Chen et al. | ICML 2018 | PixelSNAIL-Gray |
| | | | | PixelSNAIL-RGB |

### 🖼️ Image-to-Image Translation

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| U-Net | O. Ronneberger et al. | MICCAI 2015 | U-Net Regression |
| Pix2Pix | P. Isola et al. | CVPR 2017 | Pix2Pix |
| CycleGAN | J.-Y. Zhu et al. | ICCV 2017 | CycleGAN |

### 🔍 Super Resolution

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| SRGAN | C. Ledig et al. | CVPR 2017 | SRGAN |
| ESRGAN | X. Wang et al. | ECCV 2018 | ESRGAN |

### 🖌️ Style Transfer

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| Neural Style Transfer | L. A. Gatys et al. | CVPR 2016 | NST |
| Adaptive Instance Normalization | X. Huang et al. | ICCV 2017 | AdaIN |

### 🧩 Semantic Segmentation

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| SegNet | V. Badrinarayanan et al. | CVPR 2015 | SegNet |
| U-Net | O. Ronneberger et al. | MICCAI 2015 | U-Net Classification |

### 🎯 Object Detection

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| YOLOv1 | J. Redmon et al. | CVPR 2016 | YOLOv1 |
| YOLOv2 | J. Redmon et al. | CVPR 2017 | YOLOv2 |
| YOLOv3 | J. Redmon et al. | arXiv 2018 | YOLOv3 |
| YOLOv5 | Ultralytics | - | YOLOv5 |
| YOLOv8 | Ultralytics | - | YOLOv8 |

### 🧠 Representation Learning

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| SimCLR | T. Chen et al. | ICML 2020 | SimCLR |
| Masked Autoencoder | K. He et al. | CVPR 2022 | MAE |

### 🌐 View Synthesis

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| Neural Radiance Field | B. Mildenhall et al. | ECCV 2020 | NeRF |
| 3D Gaussian Splatting | B. Kerbl et al. | SIGGRAPH 2023 | 3DGS |

### 🚨 Anomaly Detection

| Model | Paper | Conference/Journal | Code |
| --- | --- | --- | --- |
| AnoGAN | T. Schlegl et al. | IPMI 2017 | AnoGAN2d |
| DAGMM | B. Zong et al. | ICLR 2018 | DAGMM2d |
| EGBAD | H. Zenati et al. | ICLR Workshop 2018 | EGBAD2d |
| GANomaly | S. Akçay et al. | ACCV 2018 | GANomaly2d |
| Skip-GANomaly | S. Akçay et al. | IJCNN 2019 | Skip-GANomaly2d |
| PaDiM | T. Defard et al. | ICPR Workshops 2020 | PaDiM |
| PatchCore | K. Roth et al. | CVPR 2022 | PatchCore |


## 📦 Requirement (Library)

Details

### 1. PyTorch C++
Please select the environment to use on the PyTorch official site as follows.

PyTorch official : https://pytorch.org/

***
PyTorch Build : Stable (2.10.0)

Your OS : Linux

Package : LibTorch

Language : C++ / Java

Run this Command : Download here (cxx11 ABI)

CUDA 12.6 : https://download.pytorch.org/libtorch/cu126/libtorch-shared-with-deps-2.10.0%2Bcu126.zip

CUDA 12.8 : https://download.pytorch.org/libtorch/cu128/libtorch-shared-with-deps-2.10.0%2Bcu128.zip

CUDA 13.0 : https://download.pytorch.org/libtorch/cu130/libtorch-shared-with-deps-2.10.0%2Bcu130.zip

CPU : https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-2.10.0%2Bcpu.zip

***
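
After extracting LibTorch, a minimal program such as the following can confirm that the library links and whether CUDA is visible. This is only a sanity-check sketch, not part of this repository.

~~~
// minimal_check.cpp : sanity check for a LibTorch installation (illustrative only).
#include <iostream>
#include <torch/torch.h>

int main() {
    // Create a random tensor and print it to verify that LibTorch links and runs.
    torch::Tensor x = torch::randn({2, 3});
    std::cout << x << std::endl;

    // Report whether this LibTorch build can see a CUDA device.
    std::cout << "CUDA available: " << std::boolalpha
              << torch::cuda::is_available() << std::endl;
    return 0;
}
~~~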

### 2. OpenCV
version : 3.0.0 or later

This is used for pre-processing and post-processing.

Please refer to other sites for a more detailed installation method.
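
As a rough illustration of such pre-processing (a sketch only, not code from this repository; "input.png" below is a hypothetical file name), an image loaded with OpenCV is typically converted to a float tensor before being fed to a network:

~~~
// Illustrative sketch: convert an OpenCV image into a LibTorch tensor.
#include <iostream>
#include <stdexcept>
#include <string>
#include <opencv2/opencv.hpp>
#include <torch/torch.h>

torch::Tensor load_image_as_tensor(const std::string &path) {
    cv::Mat image = cv::imread(path, cv::IMREAD_COLOR);  // 8-bit BGR
    if (image.empty()) throw std::runtime_error("cannot read " + path);
    cv::cvtColor(image, image, cv::COLOR_BGR2RGB);        // convert to RGB
    image.convertTo(image, CV_32FC3, 1.0 / 255.0);        // scale to [0, 1]

    // Wrap the pixel buffer as {H, W, C}, then reorder to {C, H, W} and copy.
    torch::Tensor tensor = torch::from_blob(
        image.data, {image.rows, image.cols, 3}, torch::kFloat32);
    return tensor.permute({2, 0, 1}).clone();
}

int main() {
    torch::Tensor input = load_image_as_tensor("input.png");  // hypothetical file
    std::cout << input.sizes() << std::endl;                   // e.g. [3, H, W]
    return 0;
}
~~~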

### 3. OpenMP
This is used to load data in parallel.

(It may already be installed on a standard Linux OS.)
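
As a tiny illustration of the technique (not this repository's loader code), an independent loop can be parallelized with a single OpenMP pragma:

~~~
// Illustrative OpenMP sketch (compile with: g++ -fopenmp example.cpp).
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1000000, 2.0), result(data.size());

    // The iterations are independent, so OpenMP can split them across threads.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(data.size()); ++i) {
        result[i] = std::sqrt(data[i]);
    }

    std::printf("result[0] = %f\n", result[0]);
    return 0;
}
~~~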

### 4. Boost
This is used for command line arguments, etc.

~~~
$ sudo apt install libboost-dev libboost-all-dev
~~~
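
For reference, the general pattern for declaring command-line arguments with Boost.Program_options looks roughly like this (an illustrative sketch; the option names are hypothetical and do not necessarily match the arguments used by the models in this repository):

~~~
// Illustrative Boost.Program_options sketch; the option names are hypothetical.
#include <cstddef>
#include <iostream>
#include <string>
#include <boost/program_options.hpp>

namespace po = boost::program_options;

int main(int argc, char *argv[]) {
    po::options_description desc("options");
    desc.add_options()
        ("help", "print this message")
        ("dataset", po::value<std::string>()->default_value("celebA"), "dataset name")
        ("epochs", po::value<size_t>()->default_value(300), "number of training epochs");

    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    po::notify(vm);

    if (vm.count("help")) {
        std::cout << desc << std::endl;
        return 0;
    }
    std::cout << "dataset = " << vm["dataset"].as<std::string>()
              << ", epochs = " << vm["epochs"].as<size_t>() << std::endl;
    return 0;
}
~~~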

### 5. Gnuplot
This is used to display the loss graph.

~~~
$ sudo apt install gnuplot
~~~
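
Conceptually, a loss curve can be drawn from C++ simply by piping commands to the gnuplot process, as in the sketch below (illustrative only, not this repository's visualizer; "loss.dat" is a hypothetical two-column file of epoch and loss):

~~~
// Illustrative sketch: plot a two-column loss file by piping commands to gnuplot.
#include <cstdio>

int main() {
    FILE *gp = popen("gnuplot -persist", "w");
    if (gp == nullptr) return 1;
    std::fprintf(gp, "set xlabel 'epoch'\n");
    std::fprintf(gp, "set ylabel 'loss'\n");
    std::fprintf(gp, "plot 'loss.dat' using 1:2 with lines title 'train'\n");  // hypothetical file
    pclose(gp);
    return 0;
}
~~~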

### 6. libpng/png++/zlib
This is used to load and save index-color images for semantic segmentation.

~~~
$ sudo apt install libpng-dev libpng++-dev zlib1g-dev
~~~
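
The sketch below shows the minimal png++ pattern for reading an index-color (palette) PNG such as a segmentation label image (illustrative only; "label.png" is a hypothetical file name and must already be stored in indexed-color format):

~~~
// Illustrative png++ sketch: read an index-color (palette) PNG and copy it.
#include <iostream>
#include <png++/png.hpp>

int main() {
    png::image<png::index_pixel> label("label.png");  // hypothetical palette PNG
    std::cout << label.get_width() << " x " << label.get_height() << std::endl;

    // The class index of the top-left pixel (an index_pixel converts to a byte).
    std::cout << "index(0, 0) = "
              << static_cast<int>(label.get_pixel(0, 0)) << std::endl;

    label.write("label_copy.png");  // the palette is preserved on write
    return 0;
}
~~~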

## 🏃 Preparation (Run)

Details

### 1. Git Clone
~~~
$ git clone https://github.com/koba-jon/pytorch_cpp.git
$ cd pytorch_cpp
~~~

### 2. Path Setting
~~~
$ vi utils/CMakeLists.txt
~~~
Please change the 4th line of "CMakeLists.txt" to match the path of your "libtorch" directory.

The following is an example where the directory "libtorch" is located directly under the directory "HOME".
~~~
3: # LibTorch
4: set(LIBTORCH_DIR $ENV{HOME}/libtorch)
5: list(APPEND CMAKE_PREFIX_PATH ${LIBTORCH_DIR})
~~~

### 3. Compiler Install
If you don't have g++ version 8 or above, install it.
~~~
$ sudo apt install g++-8
~~~

### 4. Execution
Please move to the directory of each model and refer to "README.md".


## 🛠️ Utility

Details

### 1. Making Original Dataset
Please create a link for the original dataset.

The following is an example for "AE2d" using the "celebA" dataset.
~~~
$ cd Dimensionality_Reduction/AE2d/datasets
$ ln -s <dataset_path> ./celebA_org
~~~
You should substitute the path of your dataset for "<dataset_path>".

Please make sure you have training or test data directly under "<dataset_path>".
~~~
$ vi ../../../scripts/hold_out.sh
~~~
Please edit the file to match your original dataset.
~~~
#!/bin/bash

SCRIPT_DIR=$(cd $(dirname $0); pwd)

python3 ${SCRIPT_DIR}/hold_out.py \
--input_dir "celebA_org" \
--output_dir "celebA" \
--train_rate 9 \
--valid_rate 1
~~~
By running this script, you can split the dataset into training and validation data.
~~~
$ sudo apt install python3 python3-pip
$ pip3 install natsort
$ sh ../../../scripts/hold_out.sh
$ cd ../../..
~~~

### 2. Data Input System
This repository provides transforms, datasets, and dataloaders for data input.

They correspond to the following source files in each model's directory, and new functionality can be added to them; a conceptual usage sketch follows the list.
- transforms.cpp
- transforms.hpp
- datasets.cpp
- datasets.hpp
- dataloader.cpp
- dataloader.hpp
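
The files above define this repository's own interface. Purely as a conceptual sketch of the same three-stage pipeline, the example below shows how a dataset, a transform, and a dataloader fit together using LibTorch's built-in `torch::data` API (which is not the interface used in this repository):

~~~
// Conceptual sketch of a dataset -> transform -> dataloader pipeline using
// LibTorch's built-in torch::data API; this repository uses its own classes
// (transforms.cpp / datasets.cpp / dataloader.cpp) instead.
#include <torch/torch.h>

// A toy dataset that returns random "images" with dummy labels.
struct RandomDataset : torch::data::datasets::Dataset<RandomDataset> {
    torch::data::Example<> get(size_t /*index*/) override {
        return {torch::randn({3, 64, 64}), torch::zeros({1})};
    }
    torch::optional<size_t> size() const override { return 256; }
};

int main() {
    // Transform: stack single examples into batched tensors.
    auto dataset = RandomDataset().map(torch::data::transforms::Stack<>());

    // Dataloader: iterate over shuffled mini-batches of 16 samples.
    auto loader = torch::data::make_data_loader(
        std::move(dataset), torch::data::DataLoaderOptions().batch_size(16));

    for (auto &batch : *loader) {
        // batch.data is {16, 3, 64, 64} and batch.target is {16, 1}.
    }
    return 0;
}
~~~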

### 3. Check Progress
This repository provides a feature to check training progress.

We can watch the number of epochs, the loss, the elapsed time, and the speed during training.

![util1](https://user-images.githubusercontent.com/56967584/88464264-3f720300-cef4-11ea-85fd-360cb3a424d1.png)

It corresponds to the following source files in each model's directory; a minimal timing sketch follows the list.
- progress.cpp
- progress.hpp
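
As a minimal illustration of how elapsed time and speed can be measured per iteration (a sketch only, not the implementation in progress.cpp):

~~~
// Minimal timing sketch; not the implementation in progress.cpp / progress.hpp.
#include <chrono>
#include <cstddef>
#include <iostream>

int main() {
    const size_t total_iters = 1000;
    const auto start = std::chrono::steady_clock::now();

    for (size_t iter = 1; iter <= total_iters; ++iter) {
        // ... one training iteration would run here ...

        const auto now = std::chrono::steady_clock::now();
        const double elapsed = std::chrono::duration<double>(now - start).count();
        const double speed = (elapsed > 0.0) ? iter / elapsed : 0.0;  // iterations per second
        std::cout << "\riter " << iter << "/" << total_iters
                  << "  elapsed " << elapsed << " s"
                  << "  speed " << speed << " it/s" << std::flush;
    }
    std::cout << std::endl;
    return 0;
}
~~~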

### 4. Monitoring System
This repository provides a monitoring system for training.

We can watch output images and the loss graph.

Output images are saved in the "samples" folder inside the "checkpoints" directory created during training.

The loss graph is saved in the "graph" folder inside the "checkpoints" directory created during training.

![util2](https://user-images.githubusercontent.com/56967584/88464268-40a33000-cef4-11ea-8a3c-da42d4c803b6.png)

It corresponds to the following source files in each model's directory.
- visualizer.cpp
- visualizer.hpp

## ⚖️ License

Details

Feel free to use all source code in this repository.

(Click [here](LICENSE) for details.)

However, if you make use of the external libraries (e.g. for redistribution), please be careful.

At a minimum, the license notices at the following URLs are required.

In addition, third party copyrights belong to their respective owners.

- PyTorch

Official : https://pytorch.org/

License : https://github.com/pytorch/pytorch/blob/master/LICENSE

- OpenCV

Official : https://opencv.org/

License : https://opencv.org/license/

- OpenMP

Official : https://www.openmp.org/

License : https://gcc.gnu.org/onlinedocs/

- Boost

Official : https://www.boost.org/

License : https://www.boost.org/users/license.html

- Gnuplot

Official : http://www.gnuplot.info/

License : https://sourceforge.net/p/gnuplot/gnuplot-main/ci/master/tree/Copyright

- libpng/png++/zlib

Official (libpng) : http://www.libpng.org/pub/png/libpng.html

License (libpng) : http://www.libpng.org/pub/png/src/libpng-LICENSE.txt

Official (png++) : https://www.nongnu.org/pngpp/

License (png++) : https://www.nongnu.org/pngpp/license.html

Official (zlib) : https://zlib.net/

License (zlib) : https://zlib.net/zlib_license.html


## 🎉 Conclusion
PyTorch is one of the most famous deep learning frameworks.

Python source code for it is abundant on the Web, so it is easy to write deep learning programs in Python.

However, there is very little source code written in C++, a compiled language.

I hope this repository will help many programmers by providing PyTorch sample programs written in C++.

If you have any problems with the source code of this repository, please feel free to open an issue.

Let's have a good development and research life!