
# Explore the AI Singapore SEA-LION model with Chainlit and Ollama

## Overview
- [Getting Started](#getting-started)
- [Getting Started with Docker](#getting-started-with-docker)
- [Default Model](#default-model)
- [Customisations](#customisations)
- [Acknowledgements](#acknowledgements)

> [!NOTE]
> This project is designed for local environments. Do not run it in production.

## Meet the Cast
- [AI Singapore SEA-LION](https://github.com/aisingapore/sealion)
- Model: https://ollama.com/aisingapore/Gemma-SEA-LION-v3-9B-IT:q4_k_m
- [Chainlit](https://chainlit.io/)
- [Ollama](https://ollama.com/)

## Getting Started
### Prerequisites
- Python 3.8 or newer
- [Ollama](https://ollama.com/download)

### Run the app
- Install [Ollama](https://ollama.com/download), if it is not already installed.
- Pull the model.
```bash
ollama pull aisingapore/Gemma-SEA-LION-v3-9B-IT:q4_k_m
```
- In the project directory, create a virtual environment.
```bash
python -m venv venv
```
- Activate the virtual environment.
```bash
source venv/bin/activate
```
- Copy `.env.example` to `.env` and update the values, if necessary:
```bash
cp .env.example .env
```
- Install the packages.
```bash
pip install -r requirements.txt
```
- Run the app.
```bash
chainlit run src/main.py -w
```
- Navigate to http://localhost:8000 to access the chatbot.
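Under the hood, Chainlit forwards each chat message to Ollama's HTTP API, which listens on port 11434 by default. The sketch below shows what such a single-turn request looks like; the helper names are illustrative, not taken from the repo's `src/main.py`.

```python
import json
import urllib.request

# Ollama's chat endpoint; 11434 is its default port.
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "aisingapore/Gemma-SEA-LION-v3-9B-IT:q4_k_m"

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble the JSON body for a single-turn /api/chat call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete reply instead of a token stream
    }

def ask(prompt: str) -> str:
    """POST the request to a running Ollama instance and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

With the app's stack running, `ask("Hello!")` would return the model's reply as a string.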

## Getting Started with Docker
> [!NOTE]
> At the time of writing, [GPU support in Docker Desktop](https://docs.docker.com/desktop/features/gpu/) is only available on Windows with the WSL2 backend.

### Prerequisites
- [Docker](https://docs.docker.com/engine/install/)
- For the default [model](https://ollama.com/aisingapore/Gemma-SEA-LION-v3-9B-IT:q4_k_m), set the memory limit to 6GB or more.
- If a larger model is used, or if other Docker containers are active in the environment, increase the memory limit further to account for their combined memory requirements.

### Run the app with Docker
- Copy `.env.example` to `.env` and update the values, if necessary:
```bash
cp .env.example .env
```
- Start the services:
```bash
docker compose up
```
- Pull the SEA-LION model with Ollama:
```bash
docker compose exec ollama ollama pull aisingapore/Gemma-SEA-LION-v3-9B-IT:q4_k_m
```
- Navigate to http://localhost:8000 to access the chatbot.
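To confirm the pull succeeded, Ollama's `/api/tags` endpoint lists the installed models. A small sketch of checking for the SEA-LION model (helper names are illustrative, not from the repo):

```python
import json
import urllib.request

def installed_models(tags_payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in tags_payload.get("models", [])]

def is_installed(tags_payload: dict, name: str) -> bool:
    """True if the named model appears in the /api/tags listing."""
    return name in installed_models(tags_payload)

def fetch_tags(base_url: str = "http://localhost:11434") -> dict:
    """GET /api/tags from a running Ollama instance."""
    with urllib.request.urlopen(base_url + "/api/tags") as resp:
        return json.loads(resp.read())
```

For example, `is_installed(fetch_tags(), "aisingapore/Gemma-SEA-LION-v3-9B-IT:q4_k_m")` should return `True` once the pull has finished.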

## Default Model
- The default model is [Gemma-SEA-LION-v3-9B-IT:q4_k_m](https://ollama.com/aisingapore/Gemma-SEA-LION-v3-9B-IT:q4_k_m).
- To try another variant, choose one of the models listed at https://ollama.com/aisingapore/Gemma-SEA-LION-v3-9B-IT.
- Check that there is sufficient disk storage and memory. For example, [Gemma-SEA-LION-v3-9B-IT:q8_0](https://ollama.com/aisingapore/Gemma-SEA-LION-v3-9B-IT:q8_0) requires at least 10GB of disk storage and 12GB of available memory in Docker.
- Pull the model with Ollama.
```bash
docker compose exec ollama ollama pull aisingapore/Gemma-SEA-LION-v3-9B-IT:q8_0
```
- Update the model name in `.env`.
```
LLM_MODEL=aisingapore/Gemma-SEA-LION-v3-9B-IT:q8_0
```

## Customisations
- Please feel free to fork this repo and customise it.
- Examples:
  - [OAuth](https://docs.chainlit.io/authentication/oauth)
  - [Data Persistence](https://docs.chainlit.io/data-persistence/custom#sql-alchemy-data-layer)
  - Integrations with [LangChain](https://docs.chainlit.io/integrations/langchain) or other [inference servers](https://docs.chainlit.io/integrations/message-based)

## Acknowledgements
- Kudos to the [AI Singapore Team](https://huggingface.co/aisingapore/Gemma-SEA-LION-v3-9B-IT#the-team) for their good work!