https://github.com/pratheeshrussell/flowise-ollama-demo
docker files to run flowise and ollama
- Host: GitHub
- URL: https://github.com/pratheeshrussell/flowise-ollama-demo
- Owner: pratheeshrussell
- Created: 2024-01-19T09:11:23.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-02-01T09:33:26.000Z (4 months ago)
- Last Synced: 2025-02-01T10:25:19.469Z (4 months ago)
- Language: Dockerfile
- Size: 9.77 KB
- Stars: 0
- Watchers: 1
- Forks: 2
- Open Issues: 0
# Flowise-Ollama demo setup
A demo setup for running Flowise with Ollama.

## Dockerfiles
### ollama
Modify the Dockerfile in the `ollama` folder to change which models are bundled. It currently pulls the `gemma:2b` and `all-minilm` models at build time.
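The Dockerfile itself is not reproduced in this README; a minimal sketch of pulling models at build time (a common pattern, not necessarily the exact file in the `ollama` folder) might look like:

```dockerfile
# Hypothetical sketch - check the ollama/ folder for the actual Dockerfile.
FROM ollama/ollama

# Start the server briefly so the models can be pulled and baked into the image.
RUN ollama serve & sleep 5 && \
    ollama pull gemma:2b && \
    ollama pull all-minilm
```

Swapping `gemma:2b` for another model is just a matter of editing the `ollama pull` lines and rebuilding.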
Note that the models are downloaded during the image build itself, so the first build can take a while.

### flowise
Image pulled directly from Docker Hub. The Dockerfile in the `flowise` folder is not used; you can switch to it if you need to make any customizations.

### Qdrant
Image pulled directly from Docker Hub. Comment it out in `docker-compose.yml` if not needed.

## Start
```
docker compose up -d
```

The Flowise UI will be available at http://localhost:4505/
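The repository's `docker-compose.yml` is not shown in this README; a rough sketch of the three services and the mappings described below might look like the following (service names, the Flowise internal port `3000`, and image tags are assumptions inferred from the text):

```yaml
# Hypothetical sketch - see the repository's docker-compose.yml for the real file.
services:
  flowise:
    image: flowiseai/flowise
    ports:
      - "4505:3000"          # only Flowise is exposed outside the network
    volumes:
      - ./custom_data:/var/custom_data
  ollama:
    build: ./ollama          # pulls gemma:2b and all-minilm at build time
    volumes:
      - ./ollama_models:/root/custom_models
  qdrant:
    image: qdrant/qdrant     # comment this service out if not needed
    ports:
      - "6333:6333"          # dashboard at http://localhost:6333/dashboard
```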
## Port and host mapping
Only the Flowise port is accessible from outside the Docker network; it is currently set to 4505. To access Ollama from the Flowise container, use the hostname **ollama**, i.e. *http://ollama:11434*.
Similarly, the Qdrant container can be reached with the hostname **qdrant**, for example *qdrant:6333*.
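You can verify the in-network hostnames by curling the services from inside the Flowise container (the service name `flowise` is an assumption here, and this relies on `curl` being present in the image; run `docker compose ps` to see the actual names):

```shell
# List running services and their names
docker compose ps

# From inside the Flowise container, check that Ollama answers...
docker compose exec flowise curl -s http://ollama:11434/api/tags

# ...and that Qdrant answers
docker compose exec flowise curl -s http://qdrant:6333/collections
```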
The Qdrant dashboard is available at **http://localhost:6333/dashboard#**.

## Adding documents with data
Add your data to the **custom_data** folder; it is mapped to **/var/custom_data** in the Flowise container. You can load it with the **Folders with Files** document loader in Flowise by specifying **/var/custom_data** or a subfolder inside it.

## Ollama Model files
The folder **ollama_models** is mapped to **/root/custom_models** in the Ollama container. You can add your models and model files there; however, for now you have to exec into the container to load them into Ollama.
```
sudo docker exec -it ollama /bin/bash
```
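Once inside the container, a session for loading a custom model from a Modelfile might look like this (the model name `my-model` and the `FROM gemma:2b` base line are placeholders, not anything defined by this repo):

```shell
cd /root/custom_models

# A minimal Modelfile building on the already-pulled gemma:2b model
cat > Modelfile <<'EOF'
FROM gemma:2b
SYSTEM You are a concise assistant.
EOF

# Register the model with Ollama under a name of your choice
ollama create my-model -f Modelfile

# Verify it shows up, then try it out
ollama list
ollama run my-model
```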
Read more about loading a model from a Modelfile [here](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md).