# How to use Dockerized Anything LLM
Use the Dockerized version of AnythingLLM for a much faster and more complete startup of AnythingLLM.
### Minimum Requirements
> [!TIP]
> Running AnythingLLM on AWS/GCP/Azure?
> You should aim for at least 2GB of RAM. Disk storage is proportional to however much data
> you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.

- `docker` installed on your machine
- `yarn` and `node` on your machine
- access to an LLM running locally or remotely

\*AnythingLLM by default uses a built-in vector database powered by [LanceDB](https://github.com/lancedb/lancedb)

\*AnythingLLM by default embeds text privately on the instance. [Learn More](../server/storage/models/README.md)
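If you want to sanity-check these prerequisites first, a quick sketch (it only confirms the tools are on your `PATH`; this repo does not pin specific versions):

```bash
# Verify the prerequisites above are installed and reachable
docker --version
node --version
yarn --version
```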
## Recommended way to run dockerized AnythingLLM!
> [!IMPORTANT]
> If you are running another service on localhost, like Chroma, LocalAI, or LMStudio,
> you will need to use http://host.docker.internal:xxxx to access the service from within
> the Docker container when using AnythingLLM, as `localhost:xxxx` will not resolve to the host system.
>
> **Requires** Docker v18.03+ on Win/Mac and 20.10+ on Linux/Ubuntu for host.docker.internal to resolve!
>
> _Linux_: add `--add-host=host.docker.internal:host-gateway` to your `docker run` command for this to resolve.
>
> e.g., a Chroma host URL running on localhost:8000 on the host machine needs to be http://host.docker.internal:8000
> when used in AnythingLLM.

> [!TIP]
> It is best to mount the container's storage volume to a folder on your host machine
> so that you can pull in future updates without deleting your existing data!

Pull in the latest image from Docker. It supports both `amd64` and `arm64` CPU architectures.
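For reference, a minimal sketch of pulling and running the upstream image directly. The image name `mintplexlabs/anythingllm`, port `3001`, and storage paths come from the upstream AnythingLLM project rather than this repo, and the `--add-host` flag is the Linux workaround from the note above; adjust to your setup:

```bash
# Pull the multi-arch image (amd64/arm64)
docker pull mintplexlabs/anythingllm

# Persist storage on the host and, on Linux, map host.docker.internal
# so services running on the host (Chroma, Ollama, ...) stay reachable.
export STORAGE_LOCATION="$HOME/anythingllm"
mkdir -p "$STORAGE_LOCATION" && touch "$STORAGE_LOCATION/.env"
docker run -d -p 3001:3001 \
  --add-host=host.docker.internal:host-gateway \
  -v "$STORAGE_LOCATION":/app/server/storage \
  -v "$STORAGE_LOCATION/.env":/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```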
## Use Docker Compose to run AnythingLLM
```bash
make docker-up-app-detach
```

Go to `http://localhost:13031` and you are now using AnythingLLM! All your data and progress will persist between
container rebuilds or pulls from Docker Hub.
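The make target wraps Docker Compose; a hand-rolled equivalent might look like the following (a sketch only: the compose file path and `docker/` layout are assumptions, so check the Makefile for the exact invocation):

```bash
# Roughly what `make docker-up-app-detach` is expected to do
docker compose -f docker/docker-compose.yaml up --detach

# Tail logs, or stop the stack, when needed
docker compose -f docker/docker-compose.yaml logs -f
docker compose -f docker/docker-compose.yaml down
```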
## How to use the user interface

- To access the full application, visit `http://localhost:13031` in your browser.
## About UID and GID in the ENV
- The UID and GID are set to 1000 by default. This is the default user in the Docker container and on most host operating systems. If there is a mismatch between your host user's UID/GID and what is set in the `.env` file, you may experience permission issues; you can check for a mismatch as sketched below.
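A minimal way to check for such a mismatch (assuming `UID`/`GID` keys live in `./docker/.env` as described above):

```bash
# Compare your host IDs against the values in ./docker/.env
id -u    # host UID, e.g. 1000
id -g    # host GID, e.g. 1000
grep -E '^(UID|GID)=' ./docker/.env
```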
## ⚠️ Vector DB support ⚠️
Out of the box, all vector databases are supported. Any vector databases requiring special configuration are listed below.
### Using local ChromaDB with Dockerized AnythingLLM
- Ensure that your `./docker/.env` file contains:
```
#./docker/.env
...other configs

VECTOR_DB="chroma"
CHROMA_ENDPOINT='http://host.docker.internal:8000' # Allow docker to look on host port, not container.
# CHROMA_API_HEADER="X-Api-Key" # If you have an Auth middleware on your instance.
# CHROMA_API_KEY="sk-123abc"

...other configs
```
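If you do not have Chroma running yet, a hypothetical way to start one on the host (the official `chromadb/chroma` image serves on port 8000 by default; the heartbeat path may vary across Chroma versions):

```bash
# Run ChromaDB on the host so the endpoint above resolves
docker run -d --name chroma -p 8000:8000 chromadb/chroma

# Verify it responds from the host before pointing AnythingLLM at it
curl http://localhost:8000/api/v1/heartbeat
```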
### Having issues with Ollama?
If you are getting errors like `llama:streaming - could not stream chat. Error: connect ECONNREFUSED 172.17.0.1:11434` then visit the README below.
[Fix common issues with Ollama](../server/utils/AiProviders/ollama/README.md)
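One common workaround, offered as an assumption rather than this repo's documented fix: the `ECONNREFUSED` error usually means Ollama is only listening on `127.0.0.1`, so bind it to all interfaces and use the `host.docker.internal` URL from inside the container:

```bash
# Make Ollama reachable from inside the container
OLLAMA_HOST=0.0.0.0 ollama serve
# Then set AnythingLLM's Ollama base URL to http://host.docker.internal:11434
```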
### Still not working?
[Ask for help on Discord](https://discord.gg/6UyHPeGZAC)