Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ryan-yang125/chatllm-web
🗣️ Chat with an LLM like Vicuna entirely in your browser with WebGPU: safe, private, and with no server. Powered by web-llm.
- Host: GitHub
- URL: https://github.com/ryan-yang125/chatllm-web
- Owner: Ryan-yang125
- License: MIT
- Created: 2023-04-23T05:07:57.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-08T03:56:09.000Z (5 months ago)
- Last Synced: 2024-11-05T03:37:09.895Z (about 2 months ago)
- Topics: chatgpt, deep-learning, llm, nextjs, pwa, react, tvm, vicuna, webgpu, webml
- Language: JavaScript
- Homepage: https://chat-llm-web.vercel.app
- Size: 5.03 MB
- Stars: 623
- Watchers: 12
- Forks: 49
- Open Issues: 4
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-ChatGPT-repositories - ChatLLM-Web - 🗣️ Chat with LLM like Vicuna totally in your browser with WebGPU, safely, privately, and with no server. Powered by web llm. (Browser-extensions)
README
ChatLLM Web
🚀 Check out the AI search engine https://discovai.io to discover the top AI tools that best match your needs.
English / [简体中文](./docs/README_CN.md) / [日本語](./docs/README_JA.md)
🗣️ Chat with an LLM like Vicuna entirely in your browser with WebGPU: safe, private, and with no server. Powered by [web-llm](https://github.com/mlc-ai/web-llm).
[Try it now](https://chat-llm-web.vercel.app)
![cover](./docs/images/cover.png)
## Features
- 🤖 Everything runs entirely in the browser with **no server required**, **accelerated with WebGPU**.
- ⚙️ The model runs in a web worker, so it never blocks the user interface, for a seamless experience.
- 🚀 Deploy for free with one click on Vercel in under a minute and get your own ChatLLM Web.
- 💾 Model caching is supported, so you only need to download the model once.
- 💬 Multi-conversation chat, with all data stored locally in the browser for privacy.
- 📝 Markdown and streaming response support: math, code highlighting, etc.
- 🎨 Responsive and well-designed UI, including dark mode.
- 💻 PWA support: install it and run entirely offline.
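The web-worker design above can be sketched roughly as follows. This is a hypothetical illustration, not ChatLLM-Web's actual code: `buildGenerateRequest`, `applyChunk`, and the message shapes are invented names for how a UI thread might talk to an LLM worker without blocking rendering.

```javascript
// Hypothetical sketch of keeping generation off the UI thread.
// The helpers below are illustrative, not ChatLLM-Web's actual code.

// Built on the UI thread and posted to the worker:
function buildGenerateRequest(conversationId, prompt) {
  return { type: 'generate', conversationId, prompt };
}

// Applied on the UI thread as the worker streams partial output back,
// so the interface re-renders incrementally and never blocks:
function applyChunk(state, chunk) {
  return { ...state, text: state.text + chunk };
}

// In the browser, the wiring would look roughly like:
//   const worker = new Worker(new URL('./llm.worker.js', import.meta.url));
//   worker.postMessage(buildGenerateRequest(id, userPrompt));
//   worker.onmessage = (e) => { ui = applyChunk(ui, e.data.chunk); };
```

Because the heavy TVM/WebGPU work happens in the worker, the main thread only handles small messages, which is what keeps typing and scrolling responsive during generation.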
## Instructions
- 🌐 To use this app, you need a browser that supports WebGPU, such as Chrome 113 or Chrome Canary. Chrome versions ≤ 112 are not supported.
- 💻 You will need a GPU with about 6.4GB of memory. If your GPU has less memory, the app will still run, but the response time will be slower.
- 📥 The first time you use the app, you will need to download the model. For the Vicuna-7b model that we are currently using, the download size is about 4GB. After the initial download, the model will be loaded from the browser cache for faster usage.
- ℹ️ For more details, please visit [mlc.ai/web-llm](https://mlc.ai/web-llm/)
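In the spirit of the browser requirement above, a minimal WebGPU capability check might look like this. `navigator.gpu` and `requestAdapter()` are the standard WebGPU entry points; the function takes the navigator object as a parameter (an illustrative choice, not the app's actual code) so the logic can be exercised outside a browser.

```javascript
// Minimal WebGPU feature-detection sketch. Pass the browser's `navigator`
// object; returns why the app cannot run when WebGPU is unsupported.
async function checkWebGPU(nav) {
  if (!nav || !nav.gpu) {
    // Chrome <= 112, or any environment without WebGPU.
    return { supported: false, reason: 'navigator.gpu is not available' };
  }
  const adapter = await nav.gpu.requestAdapter();
  if (!adapter) {
    return { supported: false, reason: 'no suitable GPU adapter found' };
  }
  return { supported: true };
}

// Usage in the browser:
//   const { supported, reason } = await checkWebGPU(navigator);
//   if (!supported) console.warn('WebGPU unavailable:', reason);
```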
## Roadmap
- [✅] LLM: use a web worker to create an LLM instance and generate answers.
- [✅] Conversations: Multi-conversation support is available.
- [✅] PWA
- [ ] Settings:
  - UI: dark/light theme
  - Device:
    - GPU device selection
    - cache usage and management
  - Model:
    - support multiple models: vicuna-7b ✅, RedPajama-INCITE-Chat-3B [ ]
    - parameter config: temperature, max-length, etc.
    - export & import model

## Deploy to Vercel
1. Click
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FRyan-yang125%2FChatLLM-Web&project-name=chat-llm-web&repository-name=ChatLLM-Web), follow the instructions, and finish in just 1 minute.
2. Enjoy it 😊

## Development
```shell
git clone https://github.com/Ryan-yang125/ChatLLM-Web.git
cd ChatLLM-Web
npm i
npm run dev
```

## Screenshots
![Home](./docs/images/home.png)
![More](./docs/images/mobile.png)
## 🌟 History
[![Star History Chart](https://api.star-history.com/svg?repos=Ryan-yang125/ChatLLM-Web&type=Date)](https://star-history.com/#Ryan-yang125/ChatLLM-Web&Date)

## LICENSE
[MIT](./LICENSE)