https://github.com/chanulee/coreollama
Locally own and run your LLM - easy, simple, lightweight
- Host: GitHub
- URL: https://github.com/chanulee/coreollama
- Owner: chanulee
- License: MIT
- Created: 2024-11-17T23:48:07.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2025-02-28T02:55:58.000Z (7 months ago)
- Last Synced: 2025-03-25T15:53:59.477Z (7 months ago)
- Topics: ollama, ollama-client, ollama-gui
- Language: JavaScript
- Homepage:
- Size: 34.2 KB
- Stars: 49
- Watchers: 1
- Forks: 6
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# coreOllama
The most easy-to-use, simple, and lightweight interface to run your local LLM, for everyone.

## Features
- Generate (a request sketch follows this list)
  - Model selection
  - Temperature
  - Image input for [llama3.2-vision:latest](https://ollama.com/library/llama3.2-vision)
- Model Management
  - View and delete models
  - Pull new models
- Local server status
- Dark mode
- Include Context: Full or selection
- Clear chat history
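
For orientation, here is a minimal sketch of the kind of requests such a GUI sends to a local Ollama server: listing installed models (for model selection) and generating with a chosen temperature and optional image input. It assumes the default endpoint http://localhost:11434 and Ollama's `/api/tags` and `/api/generate` routes; the actual code in `web/chat` may be organized differently, and the model names below are only examples.

```javascript
// Sketch only: talk to a local Ollama server from the browser.
const OLLAMA = "http://localhost:11434";

// List installed models, e.g. for a model-selection dropdown.
async function listModels() {
  const res = await fetch(`${OLLAMA}/api/tags`);
  const { models } = await res.json();
  return models.map((m) => m.name); // e.g. ["llama3.2-vision:latest", ...]
}

// Generate a completion with a chosen model, temperature, and optional
// base64-encoded images (used by llama3.2-vision).
async function generate(model, prompt, temperature = 0.7, imagesBase64 = []) {
  const res = await fetch(`${OLLAMA}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt,
      stream: false,            // one JSON response instead of a token stream
      options: { temperature },
      images: imagesBase64,
    }),
  });
  const data = await res.json();
  return data.response;
}

// Example (model name is illustrative):
// generate("llama3.2:latest", "Say hi", 0.2).then(console.log);
```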

## Versions
- 0-basic: Basic proof of concept of ollama-gui
- chat: main project
#### Advanced Apps
- [persona-studio](https://github.com/chanulee/persona-studio): Build and manage your own personas
- [everychat](https://github.com/chanulee/everychat): your AI chat multiverse

## Beginner's guide
1. Ollama setup: install the Ollama app for macOS (you can download a model now, or just proceed and pull one later from the GUI).
2. Quit the app (check your menu bar).
3. Open terminal and enter `ollama serve`. Keep that terminal window open.
4. Check http://localhost:11434/; it should say "Ollama is running" (see the snippet after this list for a programmatic check).
5. Download the repo and open `web/chat/index.html` in your browser.
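
If you prefer to verify step 4 from code rather than the browser, a tiny check (assuming the default port) looks like this:

```javascript
// Sketch: confirm the local Ollama server is reachable before opening the GUI.
async function checkOllama() {
  try {
    const res = await fetch("http://localhost:11434/");
    const text = await res.text(); // expected: "Ollama is running"
    console.log(res.ok ? text : `Unexpected status: ${res.status}`);
  } catch {
    console.log("Ollama is not reachable; did you run `ollama serve`?");
  }
}

checkOllama();
```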

## After your first time
You can use this simple macOS Shortcut, which (1) quits the running Ollama app and (2) starts the Ollama server:
https://www.icloud.com/shortcuts/effd307fdf9b4edba55f16337d1419f2