Send a total of zero network requests or API calls to external services to use AI
- Host: GitHub
- URL: https://github.com/avestura/my-local-ai-setup
- Owner: avestura
- Created: 2025-04-12T20:35:22.000Z (6 months ago)
- Default Branch: master
- Last Pushed: 2025-04-12T20:39:23.000Z (6 months ago)
- Last Synced: 2025-04-13T15:16:30.669Z (6 months ago)
- Topics: ai, kokoro, localhost, ollama, open-webui, self-hosted, stable-diffusion, stable-diffusion-webui, whisper
- Language: PowerShell
- Homepage:
- Size: 4.63 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: readme.md
README
# My Local AI Setup
**Everything you see in the video is running locally on an RTX 3060 Laptop GPU:**
https://github.com/user-attachments/assets/ec261500-7c0e-4c56-af99-a6aed08421d4
### Models to run:
- UI Dashboard: Open WebUI, talking to Ollama on the host machine
  - 🐋 Runs via Docker container (see the sketch after this list)
- Speech-to-Text: openai/whisper
  - 🐋 Runs via Docker container (see the sketch after this list)
- Text-to-Speech: hexgrad/Kokoro-82M
  - 🐋 Runs via Docker container (configure it in Open WebUI)
- Ollama models: gemma3:4b (multimodal text and vision), qwen2.5-coder (text only, code-specific)
  - Use `install-ollama.ps1` and `install-ollama-models.ps1` (a sketch of the model pulls follows the list)
- Image generation: stabilityai/stable-diffusion-2-1
  - Follow the install guide: [AUTOMATIC1111/stable-diffusion-webui :: Installation on Windows 10/11 with NVidia-GPUs using release package](https://github.com/AUTOMATIC1111/stable-diffusion-webui#installation-on-windows-1011-with-nvidia-gpus-using-release-package)
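The exact `docker run` commands are not shown in this README, so here is a minimal sketch of how the Open WebUI container is typically wired to an Ollama server running on the host. The image name `ghcr.io/open-webui/open-webui:main`, the `OLLAMA_BASE_URL` variable, and the `host.docker.internal` mapping follow the upstream Open WebUI documentation; the host port and volume name are illustrative assumptions, not taken from this repo:

```powershell
# Start Open WebUI in Docker and point it at Ollama running on the host machine.
# Host port 3000 and the volume name are illustrative choices, not from this repo.
docker run -d `
  --name open-webui `
  -p 3000:8080 `
  --add-host=host.docker.internal:host-gateway `
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 `
  -v open-webui:/app/backend/data `
  --restart always `
  ghcr.io/open-webui/open-webui:main
```

Once up, Open WebUI lists whatever models the host's Ollama instance has pulled.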
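For the speech containers, a similar sketch is possible; the image names (`onerahmet/openai-whisper-asr-webservice` for Whisper, `ghcr.io/remsky/kokoro-fastapi-gpu` for Kokoro-82M), tags, and ports are assumptions based on commonly used community images and are not confirmed by this repo:

```powershell
# Whisper speech-to-text as an ASR web service (image, tag, and port are assumptions).
docker run -d --gpus all --name whisper `
  -p 9000:9000 `
  -e ASR_MODEL=base `
  onerahmet/openai-whisper-asr-webservice:latest-gpu

# Kokoro-82M text-to-speech via Kokoro-FastAPI (image, tag, and port are assumptions).
docker run -d --gpus all --name kokoro `
  -p 8880:8880 `
  ghcr.io/remsky/kokoro-fastapi-gpu:latest
```

The resulting endpoints are then entered in Open WebUI's Audio settings (STT and TTS respectively), which is what "configure it in Open WebUI" above refers to.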
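The two PowerShell scripts referenced above live in the repo and are not reproduced here; a hypothetical minimal equivalent, assuming Ollama is installed via winget (the `Ollama.Ollama` package ID is an assumption) and pulling the two models named in the list:

```powershell
# install-ollama.ps1 (hypothetical equivalent): install Ollama on Windows.
winget install --id Ollama.Ollama -e

# install-ollama-models.ps1 (hypothetical equivalent): pull the two models used above.
ollama pull gemma3:4b        # multimodal: text and vision
ollama pull qwen2.5-coder    # text only, code-specific
```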