Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/danielclough/fireside-chat
An LLM interface (chat bot) implemented in pure Rust using HuggingFace/Candle over Axum Websockets, an SQLite Database, and a Leptos (Wasm) frontend packaged with Tauri!
artificial-intelligence axum bot candle chat chatbot huggingface leptos llm mistral-7b rust sqlite tauri wasm websockets
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/danielclough/fireside-chat
- Owner: danielclough
- License: agpl-3.0
- Created: 2023-11-21T09:31:24.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2024-08-07T01:45:09.000Z (3 months ago)
- Last Synced: 2024-08-07T04:05:15.900Z (3 months ago)
- Topics: artificial-intelligence, axum, bot, candle, chat, chatbot, huggingface, leptos, llm, mistral-7b, rust, sqlite, tauri, wasm, websockets
- Language: Rust
- Homepage:
- Size: 1.66 MB
- Stars: 110
- Watchers: 4
- Forks: 8
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
Awesome Lists containing this project
README
# Fireside Chat
(prev. "Candle Chat")
An LLM interface implemented in pure Rust using [HuggingFace/Candle](https://github.com/huggingface/candle/) over [Axum](https://github.com/tokio-rs/axum) Websockets, an [SQLite](https://sqlite.org/index.html) Database, and a [Leptos](https://www.leptos.dev/) (Wasm) frontend packaged with [Tauri](https://tauri.app)!
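For a rough picture of the transport layer, here is a minimal, hypothetical sketch of an Axum WebSocket endpoint (axum 0.7-style API assumed); the project's actual handlers additionally connect the socket to Candle inference and the SQLite database:

```rust
// Sketch only: a bare Axum WebSocket endpoint (axum 0.7 API assumed).
// The echo below stands in for Candle inference and SQLite persistence.
use axum::{
    extract::ws::{Message, WebSocket, WebSocketUpgrade},
    response::IntoResponse,
    routing::get,
    Router,
};

async fn ws_handler(ws: WebSocketUpgrade) -> impl IntoResponse {
    ws.on_upgrade(handle_socket)
}

async fn handle_socket(mut socket: WebSocket) {
    // Receive a prompt as text and send a reply back over the same socket.
    while let Some(Ok(Message::Text(prompt))) = socket.recv().await {
        let reply = format!("echo: {prompt}");
        if socket.send(Message::Text(reply)).await.is_err() {
            break; // client disconnected
        }
    }
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/ws", get(ws_handler));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```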
Watch the introduction video:
[![Watch the video](https://img.youtube.com/vi/Jw1E3LnNG0o/0.jpg)](https://youtu.be/Jw1E3LnNG0o)

## Goals
This project is designed for single and multi-user chat with many Large Language Models (LLMs).
### Features
- Local or Remote Inference Backend
- Local SQLite Database

## Setup / Operation
You can configure your model and default inference settings by placing files in your `Config Directory`.
These files are written automatically when you choose a model in the frontend, but you can also add models manually.

Example:
```yaml
# config_model.yaml
repo_id: DanielClough/Candle_Puffin-Phi-v2
q_lvl: q2k
revision: main
tokenizer_file: null
weight_file: null
quantized: true
cpu: false
use_flash_attn: false
template: ShareGPT
```

```yaml
# config_inference.yaml
temperature:
top_p:
seed: 299792458
sample_len: 150
repeat_penalty: 1.3
repeat_last_n: 150
load_context: false
role:
```

If `load_context: true`, then you can add (small) files in `/fireside-chat/context/`.
Large files may cause Out Of Memory errors.

### Linux
`Config Directory` is `$XDG_CONFIG_HOME` or `$HOME/.config`
### macOS
`Config Directory` is `$HOME/Library/Application Support`
### Windows
`Config Directory` is `{FOLDERID_RoamingAppData}`
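These are the same locations returned by the `dirs` crate's `config_dir()`, so the files could be located along the lines of the sketch below (the exact layout inside the directory is an assumption for illustration; the project may resolve it differently):

```rust
// Sketch only: resolve the platform `Config Directory` with the `dirs` crate.
// Linux:   $XDG_CONFIG_HOME or $HOME/.config
// macOS:   $HOME/Library/Application Support
// Windows: {FOLDERID_RoamingAppData}
use std::path::PathBuf;

fn config_model_path() -> Option<PathBuf> {
    // Joining the file name directly onto the config dir is an assumption;
    // the project may use a subdirectory.
    dirs::config_dir().map(|dir| dir.join("config_model.yaml"))
}

fn main() {
    if let Some(path) = config_model_path() {
        println!("expected model config: {}", path.display());
    }
}
```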
## Development
You can compile with the environment variables `FIRESIDE_BACKEND_URL` and `FIRESIDE_DATABASE_URL` to call a server other than `localhost`.
This can be configured in `tauri.conf.json`, or in your system environment.
```sh
# eg. for Linux
export FIRESIDE_BACKEND_URL=192.168.1.6 && trunk serve
```
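Since the Leptos frontend is compiled to Wasm, one common pattern is to bake the value in at compile time; a minimal sketch (hypothetical, not the project's actual code) reads it with `option_env!` and falls back to localhost:

```rust
// Sketch only: read FIRESIDE_BACKEND_URL at compile time, falling back to localhost.
fn backend_url() -> &'static str {
    option_env!("FIRESIDE_BACKEND_URL").unwrap_or("127.0.0.1")
}

fn main() {
    println!("backend: {}", backend_url());
}
```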
## Limitations

- I am not testing on Mac or Windows, so while everything may work fine, I could use some help to ensure correctness on those systems.