https://github.com/asseukihuh/ai-webapp
Local AI webapp using Ollama and ComfyUI.
- Host: GitHub
- URL: https://github.com/asseukihuh/ai-webapp
- Owner: asseukihuh
- License: MIT
- Created: 2025-02-16T14:31:15.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-03-21T17:47:10.000Z (2 months ago)
- Last Synced: 2025-03-21T18:34:11.622Z (2 months ago)
- Topics: deepseek-r1, gemma-7b, llama3, mistral-7b
- Language: CSS
- Homepage:
- Size: 89.8 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Turn on your local ai-webapp using the Ollama API! 🤖
#### ⚠️ This is currently for experimental purposes; it can produce inappropriate output at times ⚠️
## 📌 Summary
- Computer specifications
- Functionalities
- Install on Linux
- Install on Windows
## 💻 Specs
### Minimum specs
- Operating System: Linux, macOS, or Windows
- Memory (RAM): 8 GB
- Processor: a relatively modern CPU (under ~5 years old, 4 cores)
- GPU: an integrated GPU works, but runs slowly

### Recommended specs
- Operating System: Linux, macOS, or Windows
- Memory (RAM): 16 GB
- Processor: a relatively modern CPU (under ~5 years old, 8 cores)
- GPU: a dedicated GPU with at least 6 GB of VRAM (CUDA-capable GPUs work best)

## 📋 Functionalities
- Respond to prompts with continuous conversation context (output may be inappropriate depending on the model); see the sketch below this list
- Change the model from inside the webapp
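Under the hood, "continuous context" means the conversation history is resent with every request. A minimal sketch of that kind of request against Ollama's standard `/api/chat` endpoint on its default port 11434, assuming a locally pulled `mistral` model (the exact payload the webapp builds may differ):
```
# Follow-up question; the earlier messages carry the context.
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "stream": false,
  "messages": [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "How many people live there?"}
  ]
}'
```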
## Configuration

### 🐧 Linux
#### 1. Install Ollama
```
curl -fsSL https://ollama.com/install.sh | sh
```
#### 2. Run Ollama
```
ollama serve
```
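Ollama listens on http://localhost:11434 by default. As an optional sanity check (not part of the original steps), you can query the model-listing endpoint to confirm the server is up:
```
# Lists locally installed models; an empty list is fine at this stage.
curl http://localhost:11434/api/tags
```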
#### 3. Choose a model and make sure it runs
You can find models at https://ollama.com/search.
Once you have found your model, run it and test a few prompts to make sure it works.
```
ollama run <model>
# for example: ollama run mistral
```
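You can also exercise the model over the same HTTP API the webapp uses; a minimal sketch, assuming you pulled `mistral`:
```
# One-shot prompt through the REST API; substitute whichever model you pulled.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```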
#### 4. Clone this repository to your computer
```
git clone https://github.com/asseukihuh/ai-webapp
```
#### 5. Host a server on your computer
```
python -m http.server 8000
```
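Serving from the cloned directory puts index.html at the site root, which keeps the URL simple (a suggested layout, not required by the README):
```
# Serve the webapp itself rather than the directory you happen to be in.
cd ai-webapp
python -m http.server 8000
```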
#### 6. Test your local ai-webapp
Go to http://localhost:8000 in your browser.
Navigate to the directory where index.html is located, and there is your local ai-webapp.
### 🪟 Windows
#### 1. Install Ollama
Download and install Ollama from https://ollama.com.

#### 2. Run Ollama
```
ollama serve
```
#### 3. Choose a model and make sure it runs
You can find models at https://ollama.com/search.
Once you have found your model, run it and test a few prompts to make sure it works.
```
ollama run <model>
# for example: ollama run mistral
```
#### 4. Clone this repository to your computer
```
git clone https://github.com/asseukihuh/ai-webapp
```
#### 5. Host a server on your computer
```
python -m http.server 8000
```
#### 6. Test your local ai-webapp
Go to http://localhost:8000 in your browser (you can choose which directory the server exposes with the --directory flag; see the sketch below).
Navigate to the directory where index.html is located, and there is your local ai-webapp.
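On Python 3.7 and later, http.server's --directory option serves a specific folder without changing into it first; the path below is a placeholder for wherever you cloned the repository:
```
# Serve the webapp from an explicit path (placeholder path; adjust to your clone location).
python -m http.server 8000 --directory C:\path\to\ai-webapp
```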