Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/luc99hen/fta
An adapter for using a PyTorch model directly from a Fortran program.
- Host: GitHub
- URL: https://github.com/luc99hen/fta
- Owner: luc99hen
- Created: 2021-09-12T13:36:53.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2024-07-10T05:52:25.000Z (7 months ago)
- Last Synced: 2024-10-12T12:41:58.556Z (4 months ago)
- Topics: fortran, libtorch, pytorch
- Language: Shell
- Homepage:
- Size: 40.7 MB
- Stars: 16
- Watchers: 4
- Forks: 3
- Open Issues: 1
- Metadata Files:
  - Readme: README.md
README
FTA (Fortran-Torch-Adapter) integrates deep learning models into a Fortran environment. It has been used to replace parameterization schemes in WRF, as described in a [published paper](https://www.frontiersin.org/journals/earth-science/articles/10.3389/feart.2023.1149566/full).
```Fortran
! Use the PyTorch model resnet.pt from a Fortran program via the FTA library torch_wrapper
program main
    use, intrinsic :: iso_c_binding, only: C_FLOAT, C_INT
    use torch_wrapper
    implicit none

    CHARACTER(100), TARGET :: model_loc
    TYPE(ftorchmodel) :: model
    INTEGER :: res
    REAL(C_FLOAT) :: input(1, 3, 224, 224) = 1.0
    REAL :: output(1, 1000)
    INTEGER(C_INT) :: use_gpu = 1   ! should match the device your scripted module was exported for

    model_loc = "/home/dl/luc/FTB/demo/lib/resnet.pt"//CHAR(0)   ! CHAR(0) is necessary for C string termination

    print *, "Torch Start"
    model = resnet18_new(model_loc, use_gpu)       ! initialize the model
    res = resnet18_forward(model, input, output)   ! run inference with the model
    call resnet18_delete(model)                    ! delete the model

    print *, output(1, 1)
    print *, "Torch End"
end program main
```

# Dependencies
- CMake (>=3.0)
- gcc or icc compiler
- [libtorch](https://pytorch.org/)
- Docker & nvidia-container-toolkit (OPTIONAL)
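A quick way to sanity-check these prerequisites from a terminal is sketched below; the use of `gcc`/`gfortran` and the libtorch path are assumptions about a typical setup.

```sh
# Quick prerequisite check (gcc/gfortran assumed; use icc/ifort if that is your toolchain)
cmake --version          # should report >= 3.0
gcc --version
gfortran --version
nvidia-smi               # confirms the NVIDIA driver is visible (needed for GPU inference)
ls /path/to/libtorch     # the unpacked libtorch distribution (path is an assumption)
```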
# How to Use

> Note: we assume you already have a machine with the NVIDIA driver installed properly.
## Manual setup
1. download libtorch
   1. match your local CUDA version
   2. pick the C++11 ABI build ([choose this version if your compiler is `icc`](https://stackoverflow.com/questions/66192285/libtorch-works-with-g-but-fails-with-intel-compiler))
2. define your model in the [configuration file](./src/configure.conf)
   - supported data types: float, int, double
   - data size format: comma-separated dimensions, e.g. `a, b, c`
   - model name: used to generate the interface names (e.g. the `resnet18_*` routines in the example above)
3. run [`./build.sh`](./src/build.sh)
4. use the `torch_wrapper` library in your Fortran program (a possible link step is sketched below)
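The link step in item 4 might look roughly like the sketch below; the include/library paths and the `torch_wrapper` library name are assumptions about what `./build.sh` produces, so adapt them to your local layout (the libtorch libraries `torch`, `torch_cpu`, and `c10` are standard).

```sh
# Hypothetical link step for a program using the torch_wrapper module.
# Paths and the torch_wrapper library name are assumptions; adjust to your build output.
gfortran -o demo main.f90 \
    -I/path/to/FTA/build -L/path/to/FTA/build -ltorch_wrapper \
    -L/path/to/libtorch/lib -ltorch -ltorch_cpu -lc10 \
    -Wl,-rpath,/path/to/libtorch/lib
```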
## Setup with Docker

1. install [nvidia-container-toolkit & Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#getting-started)
2. get the image
   1. build it locally: `docker build -t fortran-torch-adapter .`
   2. or pull it from Docker Hub: `docker pull 1813927768/fortran-torch-adapter:latest`
3. start the container: `docker run -it --rm --gpus all fortran-torch-adapter /bin/bash`
4. run `./build.sh` inside the container (the full sequence is sketched below)
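Putting the Docker steps together, a typical session with the locally built image looks like this:

```sh
# Build the image, start a GPU-enabled container, then build FTA inside it
docker build -t fortran-torch-adapter .
docker run -it --rm --gpus all fortran-torch-adapter /bin/bash

# inside the container
./build.sh
```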