https://github.com/ldcmleo/webui-boilerplate
A straightforward Docker project designed to simplify the setup of a Web User Interface (WebUI) alongside an Ollama Service. Using Docker Compose, this project streamlines the process of launching both services, ensuring seamless integration and providing a flexible, scalable environment suitable for various development and testing scenarios.
- Host: GitHub
- URL: https://github.com/ldcmleo/webui-boilerplate
- Owner: ldcmleo
- License: MIT
- Created: 2024-11-04T04:56:34.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-11-06T18:43:31.000Z (about 1 year ago)
- Last Synced: 2025-01-22T15:43:37.388Z (10 months ago)
- Topics: docker
- Homepage:
- Size: 2.93 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# WebUI-Boilerplate
A simple Docker project with Docker Compose to set up a WebUI and Ollama Service environment.
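For orientation, a Compose file for this kind of WebUI + Ollama pairing typically looks roughly like the sketch below. The images, service names, ports, and volume names here are assumptions, not this project's actual configuration; the authoritative definition is the `docker-compose.yml` shipped in the repository.
```bash
# Illustrative sketch only: writes an example compose file to show the
# typical shape of a WebUI + Ollama setup. The real definitions live in
# the repository's docker-compose.yml.
cat > docker-compose.example.yml <<'EOF'
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    volumes:
      - ollama:/root/.ollama

  webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
EOF
```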
## Requirements
- Docker Engine or Docker Desktop installed
- Docker Compose plugin
## Usage
Clone this repository into a folder of your choice:
```bash
git clone git@github.com:ldcmleo/WebUI-Boilerplate.git
```
This will create a folder called WebUI-Boilerplate. Navigate into this folder and start the services:
```bash
cd WebUI-Boilerplate
docker compose up -d
```
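Once the containers are up, you can verify that both services started correctly. The host port for the WebUI is an assumption here (3000 is a common mapping); check the repository's `docker-compose.yml` for the actual value.
```bash
# List the services defined by the compose file and their current status
docker compose ps

# Follow the combined logs of the WebUI and Ollama services
docker compose logs -f

# Then open the WebUI in a browser, e.g. http://localhost:3000
# (the published port is an assumption; see docker-compose.yml)
```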
## Running a Model with Ollama
This project uses Ollama to serve language models locally, providing an environment similar to ChatGPT. To pull and run a model with your Ollama service, follow these steps:
### Using the Ollama container
To access the Ollama container, run the following command:
```bash
docker exec -it ollama bash
```
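Once inside, you can list the models that have already been downloaded:
```bash
# Shows models already present in the container's model store
ollama list
```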
Then pull and run a model with the Ollama CLI, for example:
```bash
ollama run llama3.2
```
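`ollama run` downloads the model on first use and then opens an interactive prompt. If you only want to fetch a model, or prefer not to open a shell in the container, you can invoke the Ollama CLI through `docker exec` instead (assuming the container is named `ollama`, as above):
```bash
# Pull the model without starting an interactive chat session
docker exec ollama ollama pull llama3.2

# Confirm the model is available
docker exec ollama ollama list
```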
You can find available models in the Ollama model library: https://ollama.com/library
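As a final sanity check, you can query the model over Ollama's HTTP API. This assumes the Compose file publishes Ollama's default port 11434 on the host, which may not be the case in this project.
```bash
# Only works if port 11434 is mapped to the host in docker-compose.yml.
# /api/generate is Ollama's standard text-generation endpoint.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello in one short sentence.",
  "stream": false
}'
```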
