https://github.com/fr3ddy404/flask-webui
A simple web interface with Markdown support built using Flask, designed to interact with Ollama models.
- Host: GitHub
- URL: https://github.com/fr3ddy404/flask-webui
- Owner: Fr3ddy404
- License: mit
- Created: 2024-11-04T12:42:51.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-11-04T13:06:07.000Z (11 months ago)
- Last Synced: 2025-02-14T09:58:24.141Z (8 months ago)
- Topics: flask, langchain, ollama-webui
- Language: Python
- Homepage:
- Size: 647 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Flask WebUI
A simple web interface with Markdown support built using Flask, designed to interact with Ollama models.
## Table of Contents
1. [Installation](#installation)
2. [Features](#features)
3. [Usage](#usage)
4. [License](#license)

## Features
- **💬 LLM Chat:** Chat with your Ollama instance.
- **📖 Markdown Support:** Chat messages are rendered in markdown.
- **📚 Multiple Conversations:** Hold multiple conversations in parallel.
- **📜 Chat History:** Chat history is saved locally.
- **✍️ Custom Prompts:** Create and save custom prompts for your models.
- **⏩ Message Stream:** Responses from the LLM are streamed as they are generated (see the sketch below).
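Streaming a response out of Flask is typically done by returning a generator wrapped in a streaming `Response`; the sketch below only illustrates the idea and is not taken from this repo (the route, helper, and token source are hypothetical).

```py
from flask import Flask, Response, stream_with_context

app = Flask(__name__)

def generate_tokens(prompt: str):
    """Stand-in for a streaming LLM call (e.g. an Ollama client)."""
    for token in ("Hello", ", ", "world", "!"):
        yield token

@app.route("/chat")
def chat():
    # stream_with_context keeps the request context alive while the
    # generator runs, so tokens reach the browser as they are produced.
    return Response(stream_with_context(generate_tokens("hi")),
                    mimetype="text/plain")
```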
## Installation

### Using pip
Use the package manager [pip](https://pip.pypa.io/en/stable/) to install the dependencies.

```bash
pip install -r requirements.txt
```

### Using Nix
1. Follow the [Nix installation instructions](https://nixos.org/download/) to set up `nix`.
2. Enter the development shell defined in `flake.nix`.
```bash
nix develop
```

## Usage
To run the application, navigate to the root directory and execute:
> [!WARNING]
> This command runs the Flask development server in debug mode.

```bash
FLASK_APP=app/views FLASK_ENV=development flask --debug run
```

### Example
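The run command above points `FLASK_APP` at the `app/views` module, which therefore has to expose a Flask application object. A hypothetical minimal sketch of such a module (the route and its body are illustrative, not taken from the repo):

```py
# app/views.py (hypothetical minimal sketch)
from flask import Flask

app = Flask(__name__)  # `flask run` looks for this `app` object

@app.route("/")
def index():
    return "Flask WebUI is running."
```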

### Select Models
By default, a demo LLM is used. If you want to use your own models with Ollama, edit the `app/app.py` file:
```py
# set the model name to your model or leave it empty for the demo
MODEL_NAME = "" # "llama3.2:3b-instruct-q5_K_M"
```
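For reference, here is a hedged sketch of how such a model name could be driven through LangChain's Ollama integration (this repo lists `langchain` among its topics, but the actual wiring in `app/app.py` is not shown here; the package and class below are assumptions):

```py
# Assumes the langchain-ollama package (pip install langchain-ollama)
# and a local Ollama server; not confirmed from this repo.
from langchain_ollama import ChatOllama

MODEL_NAME = "llama3.2:3b-instruct-q5_K_M"
llm = ChatOllama(model=MODEL_NAME)

# Stream tokens as they are generated, matching the UI's message streaming.
for chunk in llm.stream("Say hello in one sentence."):
    print(chunk.content, end="", flush=True)
```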
### Markdown

You can use Markdown syntax in your messages.

Answers written by the model in Markdown are rendered as well.
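Rendering works along these lines; a minimal sketch assuming the common Python `markdown` package (the renderer this repo actually uses is not confirmed):

```py
import markdown  # pip install markdown; assumed renderer, not confirmed

html = markdown.markdown("**bold**, *italic*, and `code`")
print(html)
# <p><strong>bold</strong>, <em>italic</em>, and <code>code</code></p>
```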
### Spoilers

Parts of a message enclosed in `[` and `]` are hidden and can be revealed by hovering over or clicking on them.
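One plausible implementation is a regex pass that wraps bracketed spans in a CSS class which the front end reveals on hover or click; a hypothetical sketch, not the repo's confirmed approach:

```py
import re

def render_spoilers(text: str) -> str:
    # Wrap [hidden] spans in a class that CSS/JS can reveal on hover/click.
    return re.sub(r"\[(.+?)\]", r'<span class="spoiler">\1</span>', text)

print(render_spoilers("The butler [did it]."))
# The butler <span class="spoiler">did it</span>.
```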
## License
This project is licensed under the [MIT License](https://choosealicense.com/licenses/mit/). Feel free to modify and use it as you wish!