Running Ollama with Docker Compose
- Host: GitHub
- URL: https://github.com/ricsanfre/ollama
- Owner: ricsanfre
- Created: 2025-07-25T08:32:49.000Z (6 months ago)
- Default Branch: master
- Last Pushed: 2025-07-25T09:48:44.000Z (6 months ago)
- Last Synced: 2025-09-25T19:28:55.717Z (4 months ago)
- Homepage:
- Size: 1.95 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
Run Ollama and Open WebUI with Docker Compose.
## Pre-requisites
- Docker & Docker Compose (or Docker Desktop).
- NVIDIA GPU (optional): needed for GPU-accelerated inference; otherwise the models run on the laptop's CPU.
- Python 3.
- Disk space: the model files need at least 10 GB of free space, and you should also keep about 20% of your total disk space available; otherwise you may run into problems starting Ollama even if there is enough room for the model files themselves.
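A quick way to check these prerequisites before continuing (the `nvidia-smi` line only applies if you plan to use the GPU):

```bash
docker --version          # Docker Engine is installed
docker compose version    # Compose v2 plugin is available
python3 --version         # Python 3 interpreter is present
df -h .                   # free disk space on the current filesystem
nvidia-smi                # NVIDIA driver works (GPU setups only)
```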
### Install NVIDIA Container Toolkit
This step is only required if you plan to use the GPU for inferencing. Follow the instructions in the [NVIDIA Container Toolkit: Installation Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
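On Ubuntu/Debian hosts the installation roughly amounts to the steps below (a condensed sketch of the apt-based instructions in the linked guide; always check the guide for your distribution and the current repository URLs):

```bash
# Add NVIDIA's apt repository for the container toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit, register it with Docker and restart the daemon
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```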
## Instructions
1. **Clone this repository** (if you haven't already):
```bash
git clone https://github.com/ricsanfre/ollama.git
cd ollama
```
2. **Start the services**:
```bash
docker compose up -d
```
This will start both the Ollama backend and the Open WebUI frontend (see the verification commands after this list).
3. **Access the Web UI**:
- Open your browser and go to: [http://localhost:3000](http://localhost:3000)
4. **Stopping the services**:
```bash
docker compose down
```
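After `docker compose up -d`, the usual Compose commands can confirm that both services came up correctly (the backend service is named `ollama`, as used throughout this README):

```bash
docker compose ps               # both containers should be listed as running
docker compose logs -f ollama   # follow the Ollama backend logs
```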
### Using GPU for Inferencing
If you want to use your laptop's GPU for inferencing, update the `docker-compose.yaml` file, adding the following `deploy` section to the `ollama` service:
```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
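Then recreate the containers and verify that the GPU is visible inside the Ollama container (assuming the service is named `ollama`, as above):

```bash
docker compose up -d --force-recreate ollama
docker compose exec ollama nvidia-smi
```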
### Notes
- Ollama models are stored in the `./models` directory.
- Open WebUI data is stored in `./backend/data`.
- Both services are connected via the `genai-network` Docker network.
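For reference, a minimal `docker-compose.yaml` consistent with these notes could look roughly like the sketch below. The image tags, container-side paths, and the `3000:8080` port mapping are assumptions based on the official Ollama and Open WebUI images, not copied from this repository:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ./models:/root/.ollama              # model files kept on the host
    networks:
      - genai-network

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434 # Open WebUI talks to the Ollama service
    ports:
      - "3000:8080"                         # UI available at http://localhost:3000
    volumes:
      - ./backend/data:/app/backend/data
    depends_on:
      - ollama
    networks:
      - genai-network

networks:
  genai-network:
    driver: bridge
```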
## Executing Ollama commands
- List installed Ollama models
```shell
docker compose exec -it ollama ollama list
```
- Install (pull) a new model
```shell
docker compose exec -it ollama ollama pull ${MODEL_NAME}
```
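For example, to download a model and chat with it interactively (`llama3.2` is just an illustrative model name; pick any model from the Ollama library):

```shell
docker compose exec -it ollama ollama pull llama3.2
docker compose exec -it ollama ollama run llama3.2
```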
## References
- https://medium.com/@srpillai/how-to-run-ollama-locally-on-gpu-with-docker-a1ebabe451e0