https://github.com/docker/hello-genai
Very simple GenAI application to try the Docker Model Runner
- Host: GitHub
- URL: https://github.com/docker/hello-genai
- Owner: docker
- License: apache-2.0
- Created: 2025-03-28T09:49:19.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2025-03-28T10:49:51.000Z (10 months ago)
- Last Synced: 2025-04-06T17:51:38.069Z (9 months ago)
- Language: HTML
- Size: 15.6 KB
- Stars: 8
- Watchers: 4
- Forks: 5
- Open Issues: 1
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
README
# hello-genai
A simple chatbot web application built in Go, Python and Node.js that connects to a local LLM service (llama.cpp) to provide AI-powered responses.
## Environment Variables
The application uses the following environment variables defined in the `.env` file:
- `LLM_BASE_URL`: The base URL of the LLM API
- `LLM_MODEL_NAME`: The model name to use
To change these settings, simply edit the `.env` file in the root directory of the project.
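For illustration, a `.env` pointed at Docker Model Runner might look like the sketch below. The base URL and model name are example values, not the project's defaults, so substitute whatever your local LLM server actually exposes:
```bash
# Illustrative .env: example values only; adjust to your own LLM server
LLM_BASE_URL=http://localhost:12434/engines/v1
LLM_MODEL_NAME=ai/llama3.2
```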
## Quick Start
1. Clone the repository:
```bash
git clone https://github.com/docker/hello-genai
cd hello-genai
```
2. Run the application using the script:
```bash
./run.sh
```
3. Open your browser and visit the following links (a quick command-line check is sketched after this list):
   - http://localhost:8080 for the GenAI Application in Go
   - http://localhost:8081 for the GenAI Application in Python
   - http://localhost:8082 for the GenAI Application in Node.js
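If you prefer to check from the terminal first, the one-liner below is a convenience sketch rather than part of the project's scripts:
```bash
# Confirm the Go variant on port 8080 is serving its page; swap the port to check the other variants
curl -sSf http://localhost:8080 > /dev/null && echo "hello-genai (Go) is up"
```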
## Requirements
- macOS (recent version)
- Either:
  - Docker and Docker Compose (preferred)
  - Go 1.21 or later
- Local LLM server
If you're using a different LLM server configuration, you may need to modify the `.env` file.
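To sanity-check that your LLM server is reachable before editing `.env`, you can query it directly. The sketch below assumes an OpenAI-compatible chat completions endpoint (which llama.cpp's server and Docker Model Runner both provide); the base URL and model name are example values:
```bash
# Example request to an OpenAI-compatible endpoint; adjust the URL and model to match your setup
curl -s http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ai/llama3.2", "messages": [{"role": "user", "content": "Say hello"}]}'
```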