Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jake83741/vnc-lm
Add LLMs to Discord. Add DeepSeek R-1, Llama 3.3, Gemini, and other models.
deepseek discord discord-bot gemini language-model litellm llama llm ollama
- Host: GitHub
- URL: https://github.com/jake83741/vnc-lm
- Owner: jake83741
- License: mpl-2.0
- Created: 2024-08-31T04:15:14.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-02-05T02:01:50.000Z (15 days ago)
- Last Synced: 2025-02-05T03:17:55.338Z (15 days ago)
- Topics: deepseek, discord, discord-bot, gemini, language-model, litellm, llama, llm, ollama
- Language: TypeScript
- Homepage:
- Size: 206 KB
- Stars: 69
- Watchers: 3
- Forks: 4
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-Ollama - vnc-lm
README
# vnc-lm
### A Discord bot for large language models. Add DeepSeek R-1, Llama 3.3, Gemini, and other models.
Easily change models. Branch conversations to see new paths. Edit prompt messages to improve responses.

[Supported API providers](https://docs.litellm.ai/docs/providers)

### Features
#### Model Management
Load models using the `/model` command. Configure model behavior by adjusting the `system_prompt` (base instructions), `temperature` (response randomness), and `num_ctx` (context length) parameters. The bot is integrated with [ollama](https://github.com/ollama/ollama), which allows users to manage local models right from Discord.
```shell
# model management examples
# loading a model without configuring it
/model model:gemini-exp-1206
# loading a model with system prompt and temperature
/model model:gemini-exp-1206 system_prompt: You are a helpful assistant. temperature: 0.4
# loading an ollama model with num_ctx
/model model:deepseek-r1:8b-llama-distill-fp16 num_ctx:32000
# downloading an ollama model by sending a model tag link
https://ollama.com/library/deepseek-r1:8b-llama-distill-fp16
# removing an ollama model
/model model:deepseek-r1:8b-llama-distill-fp16 remove:True
```

A thread will be created once the model loads. To switch models within a thread, use `+` followed by any distinctive part of the model name.
```shell
# model switching examples
# switch to deepseek-r1
+ deepseek, +r1
# switch to gemini-exp-1206
+ gemini, + exp, + 1206
# switch to claude-sonnet-3.5
+ claude, + sonnet, + 3.5
```

The bot is integrated with [LiteLLM](https://www.litellm.ai/), which provides a unified interface to access leading large language model APIs. This integration also supports OpenAI-compatible APIs, enabling support for open-source LLM projects. Add new models by editing `litellm_config.yaml` in the `vnc-lm/` directory. While LiteLLM starts automatically with the Docker container, the bot can also run with ollama alone if preferred.
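A minimal TypeScript sketch of how this `+` fragment matching could work (a hypothetical helper; the bot's actual resolver may differ):

```typescript
// Hypothetical sketch: resolve a "+ <fragment>" switch message to a loaded model.
// Assumes matching is a case-insensitive substring test, one plausible reading
// of "any distinctive part of the model name".
function resolveModelSwitch(input: string, models: string[]): string | null {
  if (!input.startsWith("+")) return null;
  const fragment = input.slice(1).trim().toLowerCase();
  if (fragment.length === 0) return null;
  const matches = models.filter((m) => m.toLowerCase().includes(fragment));
  // Only switch when the fragment is distinctive enough to be unambiguous.
  return matches.length === 1 ? matches[0] : null;
}

const loaded = [
  "deepseek-r1:8b-llama-distill-fp16",
  "gemini-exp-1206",
  "claude-sonnet-3.5",
];
console.log(resolveModelSwitch("+ gemini", loaded)); // "gemini-exp-1206"
```

An ambiguous fragment (one that matches several loaded models) resolves to `null`, so the bot can stay on the current model rather than guess.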
#### Message Handling
Messages are automatically paginated and support text files, links, and images (via multi-modal models or OCR based on `.env` settings). Edit prompts to refine responses, with conversations persisting across container restarts in `bot_cache.json`. Create conversation branches by replying `branch` to any message. Hop between different conversation paths while maintaining separate histories.
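The pagination step could be sketched as follows (a simplified stand-in for the bot's formatter, assuming Discord's 2000-character message limit):

```typescript
// Hypothetical sketch of message pagination. Splits a long response into
// chunks that fit Discord's 2000-character message limit, preferring to
// break at a newline so lines stay intact.
function paginate(text: string, limit = 2000): string[] {
  const pages: string[] = [];
  let rest = text;
  while (rest.length > limit) {
    // Break at the last newline before the limit when one exists.
    let cut = rest.lastIndexOf("\n", limit);
    if (cut <= 0) cut = limit;
    pages.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  if (rest.length > 0) pages.push(rest);
  return pages;
}
```

Each returned page can then be sent as its own Discord message, or as pages of an embedded response.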
### Requirements
[Docker](https://www.docker.com/): Docker is a platform designed to help developers build, share, and run containerized applications.

### Environment Configuration
```shell
# clone the repository or download a recent release
git clone https://github.com/jake83741/vnc-lm.git

# enter the directory
cd vnc-lm

# rename the env file
mv .env.example .env
```

```shell
# configure the .env fields below
# Discord bot token
TOKEN=
# administrator Discord user id (necessary for model downloading / removal privileges)
ADMIN=
# require bot mention (default: false)
REQUIRE_MENTION=
# turn vision on or off. turning vision off will turn ocr on. (default: false)
USE_VISION=
# leave blank to not use ollama
OLLAMAURL=http://host.docker.internal:11434
# example provider api keys
OPENAI_API_KEY=sk-...8YIH
ANTHROPIC_API_KEY=sk-...2HZF
```
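For illustration, these fields might be read into a typed config object like this (a hypothetical sketch; only the field names come from the `.env` example above):

```typescript
// Hypothetical sketch of loading the .env fields above into a typed config.
// Assumes dotenv (a project dependency) has already populated the environment.
interface BotConfig {
  token: string;
  adminId: string;
  requireMention: boolean; // defaults to false, as documented
  useVision: boolean;      // false means the OCR path is used instead
  ollamaUrl: string | null; // blank disables ollama
}

function loadConfig(env: Record<string, string | undefined>): BotConfig {
  const flag = (v: string | undefined) => (v ?? "").toLowerCase() === "true";
  return {
    token: env.TOKEN ?? "",
    adminId: env.ADMIN ?? "",
    requireMention: flag(env.REQUIRE_MENTION),
    useVision: flag(env.USE_VISION),
    ollamaUrl: env.OLLAMAURL || null,
  };
}
```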
[Generating a bot token](https://discordjs.guide/preparations/setting-up-a-bot-application.html)
[Inviting the bot to a server](https://discordjs.guide/preparations/adding-your-bot-to-servers.html)

### LiteLLM Configuration
```shell
# add models to the litellm_config.yaml
# it is not necessary to include ollama models here
model_list:
  - model_name: gpt-3.5-turbo-instruct
    litellm_params:
      model: openai/gpt-3.5-turbo-instruct
      api_key: os.environ/OPENAI_API_KEY
  - model_name:
    litellm_params:
      model:
      api_key:
```
[Additional parameters may be required](https://github.com/jake83741/vnc-lm/blob/a902b22c616e6ae2958a54ca230725c358068722/litellm_config.yaml)

### Docker Installation
```shell
# build the container with Docker
docker compose up --build --no-color
```
> [!NOTE]
> Send `/help` for instructions on how to use the bot.

### Tree Diagram
```shell
.
├── api-connections/
│   ├── base-client.ts      # Abstract base class defining common client interface and methods
│   ├── factory.ts          # Factory class for instantiating appropriate model clients
│   └── provider/
│       ├── litellm/
│       │   └── client.ts   # Client implementation for LiteLLM API integration
│       └── ollama/
│           └── client.ts   # Client implementation for Ollama API integration
├── bot.ts                  # Main bot initialization and event handling setup
├── commands/
│   ├── base.ts             # Base command class with shared command functionality
│   ├── handlers.ts         # Implementation of individual bot commands
│   └── registry.ts         # Command registration and slash command setup
├── managers/
│   ├── cache/
│   │   ├── entrypoint.sh   # Cache initialization script
│   │   ├── manager.ts      # Cache management implementation
│   │   └── store.ts        # Cache storage and persistence
│   └── generation/
│       ├── core.ts         # Core message generation logic
│       ├── formatter.ts    # Message formatting and pagination
│       └── generator.ts    # Stream-based response generation
└── utilities/
    ├── error-handler.ts    # Global error handling
    ├── index.ts            # Central export point for utilities
    └── settings.ts         # Global settings and configuration
```

### Dependencies
```shell
{
"dependencies": {
"@mozilla/readability": "^0.5.0", # Library for extracting readable content from web pages
"axios": "^1.7.2", # HTTP client for making API requests
"discord.js": "^14.15.3", # Discord API wrapper for building Discord bots
"dotenv": "^16.4.5", # Loads environment variables from .env files
"jsdom": "^24.1.3", # DOM implementation for parsing HTML in Node.js
"keyword-extractor": "^0.0.27", # Extracts keywords from text for generating thread names
"sharp": "^0.33.5", # Image processing library for resizing/optimizing images
"tesseract.js": "^5.1.0" # Optical Character Recognition (OCR) for extracting text from images
},
"devDependencies": {
"@types/axios": "^0.14.0",
"@types/dotenv": "^8.2.0",
"@types/jsdom": "^21.1.7",
"@types/node": "^18.15.25",
"typescript": "^5.1.3"
}
}
```

### Troubleshooting
#### Context Window Issues
When sending text files to a local model, be sure to set a proportional `num_ctx` value with `/model`.

#### Discord API Issues
Occasionally the Discord API will surface errors in the console.

```shell
# Discord api error examples
DiscordAPIError[10062]: Unknown interaction
DiscordAPIError[40060]: Interaction has already been acknowledged
```

These errors usually stem from clicking through pages of an embedded response. They are not critical and should not cause the bot to crash.
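One way to tolerate these errors is to treat the two codes as benign (a hedged sketch; the `safeAcknowledge` helper is hypothetical, not part of the bot, and discord.js's real interaction types are richer than this):

```typescript
// Hypothetical sketch: swallow the two benign Discord API error codes above
// while rethrowing anything unexpected. 10062 = "Unknown interaction",
// 40060 = "Interaction has already been acknowledged".
const BENIGN_CODES = new Set([10062, 40060]);

async function safeAcknowledge(ack: () => Promise<void>): Promise<boolean> {
  try {
    await ack();
    return true; // interaction acknowledged normally
  } catch (err: any) {
    // Pagination races produce these codes; they are safe to ignore.
    if (err && BENIGN_CODES.has(err.code)) return false;
    throw err;
  }
}
```

Wrapping each acknowledgement this way keeps a stray pagination race from bubbling up as an unhandled rejection.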
#### OpenAI-Compatible API Issues
When adding a model to the `litellm_config.yaml` from a service that uses a local API ([text-generation-webui](https://github.com/oobabooga/text-generation-webui) for example), follow this pattern:

```shell
# add the openai/ prefix to route the model as an OpenAI-compatible provider
# set api_base to http://host.docker.internal:{port}/v1
# api_key is sent with each request; use a placeholder if the service doesn't require keys
model_list:
  - model_name: my-model
    litellm_params:
      model: openai/
      api_base:
      api_key: api-key
```
#### LiteLLM Issues
If LiteLLM exits in the console log when running `docker compose up --build --no-color`, open `docker-compose.yaml`, revise the following line, and run `docker compose up --build --no-color` again to see more descriptive logs.

```shell
# original
command: -c "exec litellm --config /app/config.yaml >/dev/null 2>&1"
# revised
command: -c "exec litellm --config /app/config.yaml"
```

Most issues will be related to the `litellm_config.yaml` file. Double-check your `model_list` against the examples shown in the [LiteLLM docs](https://docs.litellm.ai/docs/providers). Some providers require [additional litellm_params](https://github.com/jake83741/vnc-lm/blob/a902b22c616e6ae2958a54ca230725c358068722/litellm_config.yaml).
#### Cache Issues
Cache issues are rare and difficult to reproduce, but if one occurs, deleting `bot_cache.json` and rebuilding the bot should correct it.

### License
This project is licensed under the MPL-2.0 license.