
# iLlama - A web app built with Nuxt for Ollama models

A web interface built with Nuxt for interacting with any Ollama language model locally.

## Prerequisites

- Docker
- Docker Compose
- Node.js (for local development)
- pnpm
- Ollama (for local development)
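
Every tool above can report its version from the command line, which is a quick way to confirm the prerequisites are in place:

```bash
# Verify each prerequisite is installed and on your PATH
docker --version
docker compose version
node --version
pnpm --version
ollama --version  # only needed for local development
```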

## Environment Setup

1. Create a `.env` file in the root directory:
```bash
cp .env.example .env
```

2. Configure your model in the `.env` file:
```bash
# Model Configuration
NUXT_PUBLIC_LLAMA_MODEL="deepseek-r1:1.5b"
LLAMA_MODEL="deepseek-r1:1.5b"
```

Note: Both environment variables should typically use the same model name.
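
For example, to run Mistral instead (`mistral:7b` is only an illustrative tag; always copy the exact tag from Ollama's library):

```bash
# .env
NUXT_PUBLIC_LLAMA_MODEL="mistral:7b"
LLAMA_MODEL="mistral:7b"
```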

## Docker Setup

1. Clone the repository:
```bash
git clone https://github.com/wazeerc/illama.git
cd illama
```

2. Start the application using Docker Compose:
```bash
docker compose up -d
```
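
On first startup the container still has to pull the model, which can take a while for larger models. Standard Compose commands let you check on it:

```bash
docker compose ps       # list services and their current status
docker compose logs -f  # follow the logs, e.g. to watch the model download
```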

The application will then be available at `http://localhost:3000`.

3. Stop the application using Docker Compose:
```bash
docker compose down
```

## Local Development Setup

1. Install dependencies and start dev server:
```bash
pnpm install
pnpm dev
```
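
If pnpm isn't installed yet, recent Node.js releases (16.13+) bundle Corepack, which can provision it:

```bash
corepack enable pnpm  # makes the pnpm shim available via Corepack
```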

2. Install [Ollama](https://ollama.com)

3. Pull and run your desired model:
```bash
ollama run deepseek-r1:1.5b # Replace 'deepseek-r1:1.5b' with your actual model name
```

The development server will be available at `http://localhost:3000`.
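
To confirm Ollama is up and see which models it has available, you can query its local HTTP API, which listens on port 11434 by default:

```bash
# List the models Ollama currently has installed
curl http://localhost:11434/api/tags
```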

## Available Models

You can use any model supported by Ollama. Some popular options include:
- deepseek
- codellama
- mistral
- llama2

The model name in your `.env` file must exactly match a model name from Ollama's library.
For example: `deepseek-r1:1.5b`, not just `deepseek`.
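
To check the exact tags of the models you already have, or to download one ahead of time:

```bash
ollama list                   # installed models with their full tags
ollama pull deepseek-r1:1.5b  # fetch a model without starting a chat
```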

Check [Ollama's model library](https://ollama.com/library) for more options.

## Environment Variables

- `NUXT_PUBLIC_LLAMA_MODEL`: Model name used by the frontend
- `LLAMA_MODEL`: Model name for the Docker container to pull

Both variables should typically match and use the exact model name from Ollama's library.
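
Because Docker Compose also reads variables from the shell environment (which takes precedence over `.env`), you can try a different model for a single run without editing the file. This sketch assumes the project's compose file forwards both variables to the containers:

```bash
# One-off override; 'mistral:latest' is only an example tag
NUXT_PUBLIC_LLAMA_MODEL="mistral:latest" LLAMA_MODEL="mistral:latest" docker compose up -d
```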