Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/chanulee/coreollama
Locally run your own LLM - easy, simple, lightweight
- Host: GitHub
- URL: https://github.com/chanulee/coreollama
- Owner: chanulee
- License: mit
- Created: 2024-11-17T23:48:07.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2024-11-22T22:50:11.000Z (30 days ago)
- Last Synced: 2024-12-03T17:15:02.274Z (19 days ago)
- Topics: ollama, ollama-gui
- Language: JavaScript
- Homepage:
- Size: 28.3 KB
- Stars: 36
- Watchers: 1
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# coreOllama
The easiest, simplest, and most lightweight interface to run your local LLM, for everyone.

## Features
- Generate (see the API sketch after this list)
  - Model selection
  - Temperature
  - Image input for [llama3.2-vision:latest](https://ollama.com/library/llama3.2-vision)
- Model management
  - View and delete models
  - Pull new models
- Local server status
- Dark mode
- Include context: full text or selection
- Clear chat history
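
The generation and model-management features above map onto Ollama's standard REST API, which the GUI talks to at `http://localhost:11434`. The sketch below is a minimal illustration, not code from this repo: the endpoints (`/api/generate`, `/api/tags`, `/api/pull`, `/api/delete`) and their fields are Ollama's documented API, but the function names and the default model/temperature values are hypothetical.

```javascript
const OLLAMA = "http://localhost:11434";

// Generate a completion with an explicit model and temperature.
// `images` (base64-encoded strings) is only honored by vision models
// such as llama3.2-vision.
async function generate(prompt, model = "llama3.2", temperature = 0.7, images = []) {
  const res = await fetch(`${OLLAMA}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt,
      images,          // leave empty for text-only models
      stream: false,   // ask for a single JSON response instead of a stream
      options: { temperature },
    }),
  });
  const data = await res.json();
  return data.response;
}

// List locally available models (the "view models" feature).
async function listModels() {
  const res = await fetch(`${OLLAMA}/api/tags`);
  const data = await res.json();
  return data.models.map((m) => m.name);
}

// Pull a new model from the Ollama library.
async function pullModel(name) {
  await fetch(`${OLLAMA}/api/pull`, {
    method: "POST",
    body: JSON.stringify({ name, stream: false }),
  });
}

// Delete a local model.
async function deleteModel(name) {
  await fetch(`${OLLAMA}/api/delete`, {
    method: "DELETE",
    body: JSON.stringify({ name }),
  });
}
```

For example, `await generate("Why is the sky blue?")` returns the model's full reply as a string once the server finishes, since `stream: false` collapses the response into a single JSON object.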

## Versions

- 0-basic: a basic proof of concept of the ollama-gui
- chat: the main project
#### Advanced Apps
- [persona-studio](https://github.com/chanulee/persona-studio): Build and manage your own personas
- [everychat](https://github.com/chanulee/everychat): your AI chat multiverse

## Beginner's guide
1. Ollama setup: install the Ollama app for macOS (you can download a model now, or just proceed and pull one later from the GUI).
2. Quit the app (check your menu bar to make sure it is no longer running).
3. Open a terminal and enter `ollama serve`. Keep that terminal window open.
4. Check http://localhost:11434/; it should say "Ollama is running" (see the status-check sketch after these steps).
5. Download the repo and open `web/chat/index.html` in your browser.
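
To verify step 4 programmatically, something like the following works from a browser console or a Node 18+ script. It is a sketch that assumes only Ollama's documented behavior: a `GET` on the server root returns the plain-text string "Ollama is running"; the helper name is hypothetical.

```javascript
// Ping the local Ollama server; resolves true when it is up.
async function ollamaIsRunning() {
  try {
    const res = await fetch("http://localhost:11434/");
    return (await res.text()).trim() === "Ollama is running";
  } catch {
    return false; // connection refused: `ollama serve` is not running
  }
}

ollamaIsRunning().then((up) =>
  console.log(up ? "Ollama is running" : "Start it with `ollama serve`")
);
```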