https://github.com/tankibaj/ollama-demo
Experimenting with self-hosted LLM models using Ollama
- Host: GitHub
- URL: https://github.com/tankibaj/ollama-demo
- Owner: tankibaj
- Created: 2024-02-15T20:18:47.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-02-16T00:10:51.000Z (over 1 year ago)
- Last Synced: 2025-02-17T21:36:55.799Z (8 months ago)
- Language: Python
- Homepage:
- Size: 1.58 MB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
## Quick Ollama installation
- Spin up Ollama container
```bash
docker compose up -d
```
- Run the Llama2 model inside the container
```bash
docker exec -it ollama ollama run llama2
```
- Run the llava (Vision Assistant) model inside the container
```bash
docker exec -it ollama ollama run llava
```
- Run the codellama model inside the container
```bash
docker exec -it ollama ollama run codellama
```

## Test LLM models
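Before running the test scripts, it can help to confirm that the Ollama server inside the container is reachable. A minimal check, assuming the compose file publishes Ollama's default port 11434 on localhost (the compose file itself is not shown in this README), queries the server's `/api/tags` endpoint, which lists locally pulled models:

```python
import json
import urllib.request

# Assumes the compose file maps Ollama's default port 11434 to the host.
OLLAMA_URL = "http://localhost:11434"

# GET /api/tags lists the models currently pulled into the local Ollama store.
with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    tags = json.load(resp)

for model in tags["models"]:
    print(model["name"])  # e.g. "llama2:latest", "llava:latest"
```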
- Install required packages
```bash
pip install -r requirements.txt
```
- llama2 (see the text-generation sketch after this list)
```bash
python llama2.py
```
- codellama
```bash
python codellama.py
```
- llava (Vision Assistant; see the vision sketch after this list)
```bash
python vision.py
```
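
The contents of `llama2.py` and `codellama.py` are not reproduced in this README. As a rough sketch of what such a script could look like, the example below uses only the standard library and Ollama's `/api/generate` endpoint with streaming disabled; the `generate` helper, model names, and prompts are illustrative placeholders, not the repository's actual code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumed default port mapping

def generate(model: str, prompt: str) -> str:
    """Send a single, non-streaming generation request to Ollama."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(generate("llama2", "Explain containers in one sentence."))
    print(generate("codellama", "Write a Python function that reverses a string."))
```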
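
`vision.py` is likewise not shown. For multimodal models such as llava, Ollama's `/api/generate` endpoint additionally accepts an `images` field of base64-encoded image data. A sketch under that assumption, with a placeholder image path:

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumed default port mapping

# Placeholder path; substitute any local image file.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = json.dumps({
    "model": "llava",
    "prompt": "Describe this image.",
    "images": [image_b64],  # raw base64 strings, no "data:" URI prefix
    "stream": False,
}).encode()

req = urllib.request.Request(
    f"{OLLAMA_URL}/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```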