https://github.com/mutablelogic/docker-cudaml
Dockerfiles for ML inference
- Host: GitHub
- URL: https://github.com/mutablelogic/docker-cudaml
- Owner: mutablelogic
- License: apache-2.0
- Created: 2024-07-25T06:38:00.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-08-11T08:37:35.000Z (10 months ago)
- Last Synced: 2025-02-05T00:43:13.574Z (4 months ago)
- Language: Go
- Homepage:
- Size: 42 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# docker-cudaml
Base images for running CUDA and cuDNN on Intel and ARM architectures.
## CUDA Images
If you want to use an NVIDIA GPU, you can use the following two images as the basis for your own images:
* `ghcr.io/mutablelogic/cuda-dev:1.0.2` - Based on Ubuntu 22.04; includes the CUDA 12.6 toolkit and compiler build tools.
* `ghcr.io/mutablelogic/cuda-rt:1.0.2` - Based on Ubuntu 22.04; includes the CUDA 12.6 runtime libraries.

Before running a runtime container, install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html). Then you can run the container with the following command:
```bash
docker run \
  --rm --name cuda-rt \
  --runtime nvidia --gpus all \
  ghcr.io/mutablelogic/cuda-rt:1.0.2
```
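Since the images are intended as a basis for your own builds, one way to use them is a multi-stage Dockerfile: compile against the dev image, then copy the binary into the smaller runtime image. This is only a sketch; the source file, output path, and application name are hypothetical and not part of this repository:

```dockerfile
# Illustrative multi-stage build. main.cu and myapp are
# hypothetical examples, not part of docker-cudaml.
FROM ghcr.io/mutablelogic/cuda-dev:1.0.2 AS build
WORKDIR /src
COPY main.cu .
# Compile with the CUDA toolkit shipped in the dev image
RUN nvcc -O2 -o /usr/local/bin/myapp main.cu

# Final stage: runtime libraries only, no compiler
FROM ghcr.io/mutablelogic/cuda-rt:1.0.2
COPY --from=build /usr/local/bin/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

The resulting image carries only the runtime libraries, which keeps it considerably smaller than shipping the full toolkit.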