https://github.com/rozukke/rust-gan
Small demo of PyTorch and CUDA run from a Rust wrapper with Nix deployment.
- Host: GitHub
- URL: https://github.com/rozukke/rust-gan
- Owner: rozukke
- License: mit
- Created: 2024-07-26T04:57:46.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-13T12:30:19.000Z (about 1 year ago)
- Last Synced: 2025-01-10T04:27:59.505Z (9 months ago)
- Language: Python
- Size: 300 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# rust-gan
[MIT License](https://github.com/rozukke/rust-gan/blob/main/LICENSE) · [rozukke](https://github.com/rozukke) · [Rust](https://www.rust-lang.org)

This is a repository meant to demonstrate running a PyTorch script from a Rust binary, with versioning and deployment handled by Nix.
## Installation
Tested and working on WSL with `Ubuntu-24.04 LTS`. Please ensure that you have the latest Nvidia driver installed on your
system (tested with driver version `560.70`). Starting a shell with `nix develop` should print "CUDA found!" to the console.
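A minimal sketch of that check (the `nix develop` command is from this README; any console output other than "CUDA found!" may differ):

```sh
# Enter the development shell defined by the flake; its shell hook checks for a usable CUDA setup.
nix develop
# Expected on a correctly configured system:
#   CUDA found!
```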
## Usage
For development purposes, use `nix develop` to start a dev shell with all required packages. Place an appropriate model file
into the `model` directory. One such model can be downloaded [here](https://drive.google.com/file/d/1fCKufxu-a0vewCP1Y_7DP_JmrPxBXYCF/view?usp=sharing).
It is recommended to use this model, as a specific architecture is required.
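For example, after downloading the weights, place them where the binary can find them (the filename `model.pth` below is only illustrative, not the actual name of the linked download):

```sh
# Create the model directory and move the downloaded weights into it.
mkdir -p model
mv ~/Downloads/model.pth model/
```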
### Inference with shell
- The executable is invoked as `rust-gan [OPTIONS] <DEVICE>`, where the device is either `cpu` or `gpu` (see the example after this list). If CUDA is not
  accessible, the device will be overridden to CPU with a warning.
- Provide a path to your image using `--input` and a path to save to with `--output`. Alternatively, run without these arguments
  to use the default input provided (found in `input/lowres.png`, with output saved to `input/highres.png`).
- Provide a path to the model using the `--model` flag. Automatic model detection is not very compatible with Nix packaging,
  but might be added later.
- Use half precision with the `--half-precision` flag. This will be slower on CPU than full precision.
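Putting the flags together, a typical invocation might look like the following (the model filename `model/model.pth` is a placeholder):

```sh
# Run inference on the GPU with explicit input, output, and model paths.
# Falls back to CPU with a warning if CUDA is not accessible.
rust-gan gpu --input input/lowres.png --output input/highres.png --model model/model.pth

# Use the default sample input (input/lowres.png -> input/highres.png) with half precision.
rust-gan gpu --model model/model.pth --half-precision
```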
### Using Nix package
- It should be possible to run the Nix package using `nix run ./ -- gpu --model /path/to/model`. This is currently tested with a local clone
  (a remote flake may not work as expected).
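As a sketch, assuming the same flags apply when the binary is run through the package (the paths here are placeholders):

```sh
# Build and run the packaged binary from a local clone of the repository.
nix run ./ -- gpu --model /path/to/model --input my_image.png --output my_image_upscaled.png
```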
> [!IMPORTANT]
> There seems to be a bug in some recent nightly Nix releases regarding local paths. If the package fails to build with an error like
> `cannot fetch input 'path:./pysrc...`, the workaround appears to be to force input fetching.