# DockerFiles

Some Dockerfiles to install OpenCV, FFmpeg and deep learning frameworks. I also use them as a reminder for complicated framework installations.

## Requirements

Most of these Dockerfiles use the [Nvidia][1] runtime for [Docker][2].

[1]: https://github.com/NVIDIA/nvidia-docker
[2]: https://www.docker.com/

To use the Nvidia runtime as the default runtime, add this to `/etc/docker/daemon.json`:
```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```
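
After editing `daemon.json`, the Docker daemon has to be restarted before the new default runtime is used. A minimal check, assuming a systemd-based host:

```bash
# Restart the Docker daemon so the new default runtime is picked up
sudo systemctl restart docker

# "Default Runtime: nvidia" should now appear in the daemon info
docker info | grep -i runtime
```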

## Building images

### With the Docker CLI

```bash
sudo docker build --runtime=nvidia -t image_name -f dockerfile_name .
```

or (if nvidia is the default runtime)

```bash
sudo docker build -t image_name -f dockerfile_name .
```
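
As an illustrative, filled-in example (the folder and Dockerfile names below are hypothetical; adapt them to the actual layout of this repository):

```bash
# Hypothetical paths: build an opencv_gpu image from a Dockerfile in the opencv folder
sudo docker build -t opencv_gpu -f opencv/Dockerfile.gpu .
```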

### With Make

I made a `Makefile` to automate the build process.

```bash
make IMAGE_NAME
```
The image name is the concatenation of the library name and the tag (e.g. `opencv` + `_gpu` is created by `make opencv_gpu`).

*Note1:* `make` accepts a `NOCACHE=ON` argument to force a rebuild of all images.

*Note2:* As images depend on each other, `make` will automatically build image dependencies (e.g. building the `opencv_cpu` image with `make opencv_cpu` also creates `pythonlib_cpu` and `ffmpeg_cpu`). A couple of example invocations are shown below.
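
For example, combining the targets listed in the table below with the `NOCACHE` option:

```bash
# Rebuild the opencv gpu image (and its dependencies) while ignoring the Docker cache
make opencv_gpu NOCACHE=ON

# Build every cpu image
make all_cpu
```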

#### List of all available images
| Library | Tag | Description |
|:-- |:-- |:-- |
| `all` | <br>`_cpu`<br>`_gpu`<br>`_alpine` | all images<br>all cpu images<br>all gpu images<br>all alpine images |
| `pythonlib` | `_cpu`<br>`_gpu` | my standard configuration with all the libraries I use |
| `ffmpeg` | `_cpu`<br>`_gpu` | with [ffmpeg](https://ffmpeg.org/) compiled from source with x264, h265 and nvencode on gpu images |
| `opencv` | `_cpu`<br>`_gpu` | with [opencv](http://opencv.org/) compiled from source |
| `redis` | `_cpu`<br>`_gpu` | with [redis](https://redis.io/) compiled from source |
| `mxnet` | `_cpu`<br>`_gpu` | with [mxnet](http://mxnet.io/) compiled from source and [mkl](https://software.intel.com/en-us/mkl)<br>with [mxnet](http://mxnet.io/) compiled from source and gpu support |
| `nnvm` | `_cpu`<br>`_gpu_opencl`<br>`_cpu_opencl` | with [nnvm](https://github.com/dmlc/nnvm), [tvm](https://github.com/dmlc/tvm) compiled from source<br>with [nnvm](https://github.com/dmlc/nnvm), [tvm](https://github.com/dmlc/tvm) and [opencl](https://fr.wikipedia.org/wiki/OpenCL) compiled from source and gpu support<br>with [nnvm](https://github.com/dmlc/nnvm), [tvm](https://github.com/dmlc/tvm) and [opencl](https://fr.wikipedia.org/wiki/OpenCL) compiled from source |
| `tensorflow` | `_cpu`<br>`_gpu` | with [tensorflow](https://www.tensorflow.org/) |
| `pytorch` | `_cpu`<br>`_gpu` | with [pytorch](http://pytorch.org/) and [pytorch/vision](https://github.com/pytorch/vision) |
| `numba` | `_cpu`<br>`_gpu` | with [numba](http://numba.pydata.org/) |
| `jupyter` | `_cpu`<br>`_gpu` | a [jupyter](http://jupyter.org/) server with `pass` as password |
| `vcpkg` | `_cpu` | with [vcpkg](https://github.com/microsoft/vcpkg) installed |
| `alpine` | `_redis`<br>`_pythonlib`<br>`_node`<br>`_dotnet`<br>`_vcpkg`<br>`_rust`\* | some useful images based on [alpine](https://alpinelinux.org/) to keep a small memory footprint |

\* : The Rust `proc-macro` crates don't work on musl. If you need them, use the following [image](https://github.com/emk/rust-musl-builder) to cross-compile.

## Create a container (CPU only)

```bash
docker run -it --name container_name -p 0.0.0.0:6000:7000 -p 0.0.0.0:8000:9000 -v shared/path/on/host:/shared/path/in/container image_name:latest /bin/bash
```

##### Unfold

```bash
sudo docker run -it # the -it option allows interaction with the container
--name container_name # Name of the created container
-p 0.0.0.0:6000:7000 # Port redirection (redirect host port 6000 to container port 7000)
-p 0.0.0.0:8000:9000 # Port redirection (redirect host port 8000 to container port 9000)
-v shared/path/on/host:/shared/path/in/container # Configure a shared directory between host and container
image_name:latest # Image name to use for container creation
/bin/bash # Command to execute
```
***Note***: Don't specify ports if you don't use them, as two containers can't listen on the same host port (cf. "Alias to create a Jupyter server" for random port assignment).
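
For instance, a filled-in version of the command above, using one of the images from the table (the container name is only a placeholder):

```bash
# Illustrative example: a pythonlib_cpu container sharing the current directory, no ports published
docker run -it --name pythonlib_dev -v $PWD:/home/dev/host pythonlib_cpu:latest /bin/bash
```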

## Create a container (with GPU support)

```bash
NV_GPU='0' docker run -it --runtime=nvidia --name container_name -p 0.0.0.0:6000:7000 -p 0.0.0.0:8000:9000 -v shared/path/on/host:/shared/path/in/container image_name:latest /bin/bash
```

##### Unfold

```bash
NV_GPU='0' # GPU id given by nvidia-smi ('0', '1' or '0,1' for GPU0, GPU1 or both)
sudo docker run -it # the -it option allows interaction with the container
--runtime=nvidia # Run docker with the nvidia runtime to enable GPU support
--name container_name # Name of the created container
-p 0.0.0.0:6000:7000 # Port redirection (redirect host port 6000 to container port 7000)
-p 0.0.0.0:8000:9000 # Port redirection (redirect host port 8000 to container port 9000)
-v shared/path/on/host:/shared/path/in/container # Configure a shared directory between host and container
image_name:latest # Image name to use for container creation
/bin/bash # Command to execute
```
***Note***: Don't specify ports if you don't use them, as two containers can't listen on the same host port (cf. "Alias to create a Jupyter server" for random port assignment in a range).
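
A filled-in version pinned to GPU 0, using one of the GPU images from the table (the container name is only a placeholder):

```bash
# Illustrative example: a pytorch_gpu container restricted to GPU 0;
# running nvidia-smi inside the container shows which GPUs are visible to it
NV_GPU='0' docker run -it --runtime=nvidia --name pytorch_dev -v $PWD:/home/dev/host pytorch_gpu:latest /bin/bash
```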

## Advanced use

### Open new terminal in running container

```bash
docker exec -it container_name /bin/bash
```

### Alias to create a Jupyter server
#### CPU version

```bash
alias jupserver='docker run -it -d -p 0.0.0.0:5000-5010:8888 -v $PWD:/home/dev/host jupyter_cpu:latest'
```

***Note***: If the host port is a range of ports and the container port a single one, Docker will choose a random free port in the specified range.
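
Since Docker picks the host port from the 5000-5010 range, it can be retrieved after startup, for example with `docker port`:

```bash
jupserver                          # start a detached Jupyter server container
docker ps --latest                 # show the container that was just created
docker port <container_name> 8888  # print the host port mapped to the container's port 8888
```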

#### GPU version

```bash
alias jupserver='docker run -it -d -p 0.0.0.0:5000-5010:8888 -v $PWD:/home/dev/host jupyter_gpu:latest'
```

***Note***: If the host port is a range of ports and the container port a single one, Docker will choose a random free port in the specified range.

### Alias to create an isolated devbox
```bash
alias devbox='docker run -it --rm -v $PWD:/home/dev/host mxnet:latest'
```
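
Typical usage (the project directory below is just an example): run `devbox` from a project directory; it is mounted inside the container and, thanks to `--rm`, the container is removed on exit.

```bash
cd ~/my_project   # hypothetical project directory
devbox            # starts a disposable mxnet:latest container with ~/my_project mounted at /home/dev/host
```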

## Fixed versions

Sometimes an update in a library can break compatibility with other modules.
In certain Dockerfiles, versions are pinned to keep an older release.
Other tools are downloaded at their latest version, so I need to change those pinned versions manually at each update.
Most of the time, I try to keep the latest version for all tools.
In some cases a newer release may have fixed the bug that made me pin the version in the first place, without my knowing it.

| Tool | Version | Docker image | Description |
| -- | -- | -- | -- |
| cuda | 11.1 | all gpu images | -- |
| cudnn | 8 | all gpu images | -- |
| opencv | 4.5.4 | opencv | -- |
| ffmpeg | 4.3.2 | ffmpeg | API break; should be fixed in opencv soon |
| pytorch | 1.9.1 | pytorch | -- |

## Script

The `generate.py` script available in the `script` folder allows three things:
* `generate.py amalgamation`: generates, for each final image, a Dockerfile without dependencies, i.e. a Dockerfile with all dependencies expanded inline.
* `generate.py makefile`: updates the Makefile with all images found in the folders. Useful after amalgamation generation.
* `generate.py concatenate`: concatenates Dockerfiles. For example, if you want to add Jupyter support to the pytorch images, `generate.py concatenate --filename ../super/pytorch/Dockerfile.jupyter --base pytorch_cpu -- jupyter_cpu` will generate a new Dockerfile that depends on `pytorch_cpu` and adds the `jupyter_cpu` installation. This image will be available, after a Makefile update, via `make pytorch_jupyter`.

### Example:
```bash
./generate.py concatenate --filename ../super/jupyter/Dockerfile.mxnet --base mxnet_cpu_mkl -- jupyter_cpu
./generate.py concatenate --filename ../super/jupyter/Dockerfile.opencv --base opencv_cpu -- jupyter_cpu
./generate.py concatenate --filename ../super/jupyter/Dockerfile.pythonlib --base pythonlib_cpu -- jupyter_cpu
```
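
After generating these Dockerfiles, the Makefile should be refreshed so that the new images get build targets:

```bash
./generate.py makefile   # update the Makefile with the newly generated images
```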
