https://github.com/ryands17/ollama-local
Run ollama models locally on your system
- Host: GitHub
- URL: https://github.com/ryands17/ollama-local
- Owner: ryands17
- Created: 2024-05-03T22:07:39.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-06-04T08:13:00.000Z (almost 2 years ago)
- Last Synced: 2025-04-08T10:09:31.223Z (about 1 year ago)
- Topics: docker-com, ollama
- Language: Shell
- Size: 4.88 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
# Ollama LLM test
This is a simple starter for running Ollama LLM models on your local machine. I am currently using these models for code assistance, and it's been a nice experience so far.
## Requirements
- Curl
- Docker
- Homebrew
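Before installing, you can quickly verify that all three prerequisites are available. This check isn't part of the repo; it's just a convenience snippet that assumes each tool should be on your `PATH`:

```sh
# Report any prerequisite that is not installed
for cmd in curl docker brew; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```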
## Instructions
- Clone this repo
- Run the install script `./install.sh`. This installs the Ollama CLI and starts Open WebUI, from which you can interact with any model you need (see the sketch after this list)
- Open [http://localhost:3030](http://localhost:3030) and enjoy!
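The repo's actual `install.sh` isn't reproduced here, but a minimal sketch of what such a script might do, assuming it installs Ollama via Homebrew and brings up Open WebUI with Docker Compose, could look like this:

```sh
#!/usr/bin/env bash
set -euo pipefail

# Install the Ollama CLI via Homebrew (macOS)
brew install ollama

# Start the Ollama server in the background so models can be served locally
ollama serve &

# Bring up Open WebUI (and anything else defined in the compose file),
# which this repo exposes on http://localhost:3030
docker compose up -d
```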
## Other commands
- You can interact with the Docker Compose file via the `dd` script; for example, stopping the containers is `./dd down` (a sketch of this wrapper follows below)
- Installing Ollama models is also easy, as shown [here](https://ollama.com/library/deepseek-coder:33b). For example, using the CLI, `ollama run deepseek-coder:33b` downloads and runs the DeepSeek Coder model
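Since the README describes `dd` as a way to interact with the compose file, a plausible sketch of the wrapper, assuming it simply forwards its arguments to Docker Compose, is:

```sh
#!/usr/bin/env bash
# Forward all arguments to docker compose, so `./dd down` runs
# `docker compose down`, `./dd logs -f` tails the logs, and so on.
docker compose "$@"
```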
**Note**: This setup is designed for macOS, so please adjust it for your OS.