# Turn on your local ai-webapp using the Ollama API! 🤖

#### ⚠️ This is currently for experimental purposes; it can produce inappropriate output at times ⚠️

## 📌 Summary

- Computer specifications
- Functionalities
- Install on Linux
- Install on Windows

## 💻 Specs

### Minimal specs

- Operating System: Linux, macOS, or Windows
- Memory (RAM): 8 GB
- Processor: a relatively modern CPU (released within the last 5 years, 4 cores)
- GPU: an integrated GPU works, but inference will be slow

### Recommended specs

- Operating System: Linux, macOS, or Windows
- Memory (RAM): 16 GB
- Processor: a relatively modern CPU (released within the last 5 years, 8 cores)
- GPU: a dedicated GPU with at least 6 GB of VRAM (CUDA-capable NVIDIA cards work best)

## 📋 Functionalities

- Respond to a prompt with continuous context (replies may be inappropriate depending on the model; see the API sketch below)
- Change the model from inside the webapp
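
Under the hood, the webapp talks to the local HTTP API that Ollama exposes. As a rough sketch of what a prompt round-trip looks like (assuming Ollama's default port 11434 and a `llama3` model; the exact request the webapp sends may differ):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The reply is a JSON object whose `response` field contains the model's answer.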

## Configuration

### 🐧 Linux

#### 1. Install Ollama

```
curl -fsSL https://ollama.com/install.sh | sh
```
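
To confirm the install succeeded, check that the `ollama` binary is on your PATH:

```
ollama --version
```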
#### 2. Run Ollama

```
ollama serve
```
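
By default the server listens on localhost:11434. A quick way to confirm it is up is to query the model-list endpoint:

```
curl http://localhost:11434/api/tags
```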

#### 3. Choose a model and make sure it runs

You can find models at ollama.com/search.

Once you have found a model, run it and test a few prompts to make sure it works, replacing `<model-name>` with the model you chose:

```
ollama run <model-name>
```
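
For example, with the `llama3` model (any model from the search page works the same way; `llama3` here is just an illustration):

```
# download the model, then ask it a test question
ollama pull llama3
ollama run llama3 "Hello, are you working?"
```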

#### 4. Clone this repository onto your computer

```
git clone https://github.com/asseukihuh/ai-webapp
```
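
Then change into the cloned directory so that the web server started in the next step serves the app's files:

```
cd ai-webapp
```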

#### 5. Host a server on your computer

```
python -m http.server 8000
```
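
If you prefer not to `cd` into the repository first, Python's built-in server can serve it from anywhere via `--directory` (assuming the repository was cloned to `~/ai-webapp`):

```
python -m http.server 8000 --directory ~/ai-webapp
```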

#### 6. Test your local ai-webapp

Go to the address localhost:8000 in your browser.

Navigate to the directory where index.html is located.

And there is your local ai-webapp.
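
If the page loads but requests to Ollama fail with CORS errors in the browser console, Ollama may be rejecting the page's origin. Ollama reads allowed origins from the `OLLAMA_ORIGINS` environment variable, so one possible workaround (adjust the value to match your setup) is:

```
OLLAMA_ORIGINS="http://localhost:8000" ollama serve
```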

### 🪟 Windows

#### 1. Install Ollama

Install Ollama via the installer from ollama.com.

#### 2. Run Ollama

```
ollama serve
```
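
Note that on Windows the installer typically starts Ollama in the background, so `ollama serve` may report that the port is already in use. In that case Ollama is already running, which you can confirm by listing the locally installed models:

```
ollama list
```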

#### 3. Choose a model and make sure it runs

You can find models at ollama.com/search.

Once you have found a model, run it and test a few prompts to make sure it works, replacing `<model-name>` with the model you chose:

```
ollama run <model-name>
```

#### 4. Clone this repository onto your computer

```
git clone https://github.com/asseukihuh/ai-webapp
```

#### 5. Host a server on your computer

```
python -m http.server 8000
```

#### 6. Test your local ai-webapp

Go to the address localhost:8000 in your browser (you can choose which folder is served via the `--directory` flag, as shown below).
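
For example, assuming the repository was cloned to `C:\Users\you\ai-webapp` (adjust the path to wherever you cloned it):

```
python -m http.server 8000 --directory C:\Users\you\ai-webapp
```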

Navigate to the directory where index.html is located.

And there is your local ai-webapp.