Discord Bot with Deepseek R1 (Runs locally or with API)
- Host: GitHub
- URL: https://github.com/fzn0x/discord-deepseek-r1-bot
- Owner: fzn0x
- License: MIT
- Created: 2025-01-24T15:04:44.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-01-25T09:38:27.000Z (3 months ago)
- Last Synced: 2025-01-25T10:18:47.909Z (3 months ago)
- Topics: bot, deepseek, deepseek-chat, deepseek-r1, discord, discord-bot, typescript
- Language: TypeScript
- Homepage:
- Size: 22.5 KB
- Stars: 7
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# 🤖 Discord Bot for Deepseek-R1 (Local and API run) 🐋
This is a Discord bot for Deepseek R1 that automatically falls back to local inference once the API usage limit is reached. 🐋
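A minimal sketch of what that fallback could look like in TypeScript. The helper names, error handling, and the `deepseek-reasoner` model id are illustrative assumptions based on Deepseek's and Ollama's public APIs, not this repo's actual code:

```ts
// Hypothetical sketch: try the hosted Deepseek API first,
// fall back to local Ollama if the call fails (e.g. rate limited).

async function callDeepseekAPI(prompt: string): Promise<string> {
  const res = await fetch("https://api.deepseek.com/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
    },
    body: JSON.stringify({
      model: "deepseek-reasoner", // Deepseek R1 on the hosted API
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Deepseek API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

async function callLocalOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1:1.5b",
      prompt,
      stream: false, // one JSON object instead of an NDJSON stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama error: ${res.status}`);
  const data = await res.json();
  return data.response;
}

// Fallback: use the API while it works, switch to local Ollama when it doesn't.
export async function generate(prompt: string): Promise<string> {
  try {
    return await callDeepseekAPI(prompt);
  } catch {
    return await callLocalOllama(prompt);
  }
}
```

`stream: false` keeps the sketch simple; see the streaming variant further down for how Ollama responds by default.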
## Requirements
- Brain
- Docker
- Discord.js
- TypeScript
- Ollama (just use Docker; why are you so in love with direct binary execution?)
- You can also install it via `flake.nix` with Nix
- Deepseek R1 API

## Local Serving for Deepseek-R1 (since you don't know) 🍽️
### Install Ollama
```sh
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# or
git clone https://github.com/fzn0x/discord-deepseek-r1-bot
cd discord-deepseek-r1-bot
nix develop # Ensure flakes are enabled in /etc/nix/nix.conf
# If you are using WSL:
# export LD_LIBRARY_PATH="/usr/lib/wsl/lib:$LD_LIBRARY_PATH"
ollama pull deepseek-r1:1.5b
ollama serve & # Ensure CUDA is installed on your machine
```

### Run Deepseek R1 Model with Ollama (For Docker Installation)
I'm using 1.5b; you can choose other models here: https://ollama.com/library/deepseek-r1:1.5b
```sh
docker exec -it ollama ollama run deepseek-r1:1.5b
```

### cURL your local API
```sh
curl http://localhost:11434/api/generate -d '{
"model": "deepseek-r1:1.5b",
"prompt": "Why is the sky blue?"
}'
```

**You can use these steps on your VPS. If you want a cheap server, try something like Contabo (I'm not promoting them).**
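If you'd rather hit that endpoint from the bot's TypeScript side, here is a rough sketch of consuming Ollama's default streaming (NDJSON) response, equivalent to the curl call above. The function name and error handling are illustrative, not the repo's code:

```ts
// Hypothetical sketch: Ollama streams NDJSON by default; each line carries
// a "response" fragment and the final line has done: true. Node 18+ fetch.
async function streamGenerate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "deepseek-r1:1.5b", prompt }),
  });
  if (!res.ok || !res.body) throw new Error(`Ollama error: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  let buffered = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split("\n");
    buffered = lines.pop() ?? ""; // keep any partial trailing line
    for (const line of lines) {
      if (!line.trim()) continue;
      full += JSON.parse(line).response ?? ""; // append each fragment
    }
  }
  return full;
}
```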
Optional task: there is a `clean.py` file for when you accidentally run out of memory while running models with _vLLM_.
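For completeness, a hedged sketch of how the Discord.js side might route a command to the `generate` fallback helper sketched earlier. The `!ask` command, the `./generate` module path, and the intent choices are assumptions, not this repo's actual layout:

```ts
import { Client, Events, GatewayIntentBits } from "discord.js";
import { generate } from "./generate"; // hypothetical module from the earlier sketch

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent, // privileged: enable it in the Discord dev portal
  ],
});

client.on(Events.MessageCreate, async (message) => {
  if (message.author.bot) return; // ignore other bots (and ourselves)
  if (!message.content.startsWith("!ask ")) return;

  const prompt = message.content.slice("!ask ".length);
  const answer = await generate(prompt);

  // Discord caps a single message at 2000 characters.
  await message.reply(answer.slice(0, 2000));
});

client.login(process.env.DISCORD_TOKEN);
```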
## Credits
- God
- Deepseek
- Me
- Internet
- Founder of electricity
- Github
- Your Mom
- etc.

## License
This project is licensed under the [MIT License](./LICENSE).
## Pro Tips 💡
Add smart contract development and there you go, another shitcoin project.