{"id":13451209,"url":"https://github.com/huggingface/chat-ui","last_synced_at":"2025-10-19T21:17:22.875Z","repository":{"id":163994751,"uuid":"603085008","full_name":"huggingface/chat-ui","owner":"huggingface","description":"Open source codebase powering the HuggingChat app","archived":false,"fork":false,"pushed_at":"2025-05-06T17:13:55.000Z","size":6620,"stargazers_count":8690,"open_issues_count":333,"forks_count":1304,"subscribers_count":88,"default_branch":"main","last_synced_at":"2025-05-11T03:43:37.854Z","etag":null,"topics":["chatgpt","hacktoberfest","huggingface","llm","svelte","svelte-kit","sveltekit","tailwindcss","typescript"],"latest_commit_sha":null,"homepage":"https://huggingface.co/chat","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/huggingface.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2023-02-17T15:31:50.000Z","updated_at":"2025-05-10T19:38:02.000Z","dependencies_parsed_at":null,"dependency_job_id":"545c6bf1-c4be-4ea6-b135-6ce7fe65439b","html_url":"https://github.com/huggingface/chat-ui","commit_stats":{"total_commits":1031,"total_committers":109,"mean_commits":9.458715596330276,"dds":0.5683802133850631,"last_synced_commit":"7423bf0a114503890155a49196290cb72fd46c60"},"previous_names":[],"tags_count":15,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huggingface%2Fchat-ui","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huggingface%2Fchat-ui/tags","releases_url":"https://r
epos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huggingface%2Fchat-ui/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huggingface%2Fchat-ui/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/huggingface","download_url":"https://codeload.github.com/huggingface/chat-ui/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253514553,"owners_count":21920334,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["chatgpt","hacktoberfest","huggingface","llm","svelte","svelte-kit","sveltekit","tailwindcss","typescript"],"created_at":"2024-07-31T07:00:49.862Z","updated_at":"2025-10-19T21:17:22.869Z","avatar_url":"https://github.com/huggingface.png","language":"TypeScript","readme":"# Chat UI\n\n![Chat UI repository thumbnail](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/chat-ui/chat-ui-2026.png)\n\nA chat interface for LLMs. It is a SvelteKit app and it powers the [HuggingChat app on hf.co/chat](https://huggingface.co/chat).\n\n0. [Quickstart](#quickstart)\n1. [Database Options](#database-options)\n2. [Launch](#launch)\n3. [Optional Docker Image](#optional-docker-image)\n4. [Extra parameters](#extra-parameters)\n5. [Building](#building)\n\n\u003e [!NOTE]\n\u003e Chat UI only supports OpenAI-compatible APIs via `OPENAI_BASE_URL` and the `/models` endpoint. Provider-specific integrations (legacy `MODELS` env var, GGUF discovery, embeddings, web-search helpers, etc.) 
are removed, but any service that speaks the OpenAI protocol (llama.cpp server, Ollama, OpenRouter, etc.) will work by default.\n\n\u003e [!NOTE]\n\u003e The old version is still available on the [legacy branch](https://github.com/huggingface/chat-ui/tree/legacy).\n\n## Quickstart\n\nChat UI speaks to OpenAI-compatible APIs only. The fastest way to get running is with the Hugging Face Inference Providers router plus your personal Hugging Face access token.\n\n**Step 1 – Create `.env.local`:**\n\n```env\nOPENAI_BASE_URL=https://router.huggingface.co/v1\nOPENAI_API_KEY=hf_************************\n# Fill in once you pick a database option below\nMONGODB_URL=\n```\n\n`OPENAI_API_KEY` can be any key accepted by the OpenAI-compatible endpoint you plan to call. Pick the combination that matches your setup and drop the values into `.env.local`:\n\n| Provider                                      | Example `OPENAI_BASE_URL`          | Example key env                                                         |\n| --------------------------------------------- | ---------------------------------- | ----------------------------------------------------------------------- |\n| Hugging Face Inference Providers router       | `https://router.huggingface.co/v1` | `OPENAI_API_KEY=hf_xxx` (or `HF_TOKEN` legacy alias)                    |\n| llama.cpp server (`llama-server`)             | `http://127.0.0.1:8080/v1`         | `OPENAI_API_KEY=sk-local-demo` (any string works; llama.cpp ignores it) |\n| Ollama (with OpenAI-compatible bridge)        | `http://127.0.0.1:11434/v1`        | `OPENAI_API_KEY=ollama`                                                 |\n| OpenRouter                                    | `https://openrouter.ai/api/v1`     | `OPENAI_API_KEY=sk-or-v1-...`                                           |\n| Poe                                           | `https://api.poe.com/v1`           | `OPENAI_API_KEY=pk_...`                                                 |\n\nCheck the root [`.env` 
template](./.env) for the full list of optional variables you can override.\n\n**Step 2 – Choose where MongoDB lives:** Either provision a managed cluster (for example MongoDB Atlas) or run a local container. Both approaches are described in [Database Options](#database-options). After you have the URI, drop it into `MONGODB_URL` (and, if desired, set `MONGODB_DB_NAME`).\n\n**Step 3 – Install and launch the dev server:**\n\n```bash\ngit clone https://github.com/huggingface/chat-ui\ncd chat-ui\nnpm install\nnpm run dev -- --open\n```\n\nYou now have Chat UI running against the Hugging Face router without needing to host a model yourself.\n\n## Database Options\n\nChat history, users, settings, files, and stats all live in MongoDB. You can point Chat UI at any MongoDB 6/7 deployment.\n\n### MongoDB Atlas (managed)\n\n1. Create a free cluster at [mongodb.com](https://www.mongodb.com/pricing).\n2. Add your IP (or `0.0.0.0/0` for development) to the network access list.\n3. Create a database user and copy the connection string.\n4. Paste that string into `MONGODB_URL` in `.env.local`. Keep the default `MONGODB_DB_NAME=chat-ui` or change it per environment.\n\nAtlas keeps MongoDB off your laptop, which is ideal for teams or cloud deployments.\n\n### Local MongoDB (container)\n\nIf you prefer to run MongoDB locally:\n\n```bash\ndocker run -d -p 27017:27017 --name mongo-chatui mongo:latest\n```\n\nThen set `MONGODB_URL=mongodb://localhost:27017` in `.env.local`. You can also supply `MONGO_STORAGE_PATH` if you want Chat UI’s fallback in-memory server to persist under a specific folder.\n\n## Launch\n\nAfter configuring your environment variables, start Chat UI with:\n\n```bash\nnpm install\nnpm run dev\n```\n\nThe dev server listens on `http://localhost:5173` by default. Use `npm run build` / `npm run preview` for production builds.\n\n## Optional Docker Image\n\nPrefer a containerized setup? 
You can run everything in one container as long as you supply a MongoDB URI (local or hosted):\n\n```bash\ndocker run \\\n  -p 3000 \\\n  -e MONGODB_URL=mongodb://host.docker.internal:27017 \\\n  -e OPENAI_BASE_URL=https://router.huggingface.co/v1 \\\n  -e OPENAI_API_KEY=hf_*** \\\n  -v db:/data \\\n  ghcr.io/huggingface/chat-ui-db:latest\n```\n\n`host.docker.internal` lets the container reach a MongoDB instance on your host machine; swap it for your Atlas URI if you use the hosted option. All environment variables accepted in `.env.local` can be provided as `-e` flags.\n\n## Extra parameters\n\n### Theming\n\nYou can use a few environment variables to customize the look and feel of chat-ui. Their defaults are:\n\n```env\nPUBLIC_APP_NAME=ChatUI\nPUBLIC_APP_ASSETS=chatui\nPUBLIC_APP_DESCRIPTION=\"Making the community's best AI chat models available to everyone.\"\nPUBLIC_APP_DATA_SHARING=\n```\n\n- `PUBLIC_APP_NAME` The name used as a title throughout the app.\n- `PUBLIC_APP_ASSETS` Is used to find logos \u0026 favicons in `static/$PUBLIC_APP_ASSETS`; current options are `chatui` and `huggingchat`.\n- `PUBLIC_APP_DATA_SHARING` Can be set to 1 to add a toggle in the user settings that lets your users opt in to data sharing with model creators.\n\n### Models\n\nThis build does not use the `MODELS` env var or GGUF discovery. Configure models via `OPENAI_BASE_URL` only; Chat UI will fetch `${OPENAI_BASE_URL}/models` and populate the list automatically. Authorization uses `OPENAI_API_KEY` (preferred). `HF_TOKEN` remains a legacy alias.\n\n### LLM Router (Optional)\n\nChat UI can perform client-side routing using [katanemo/Arch-Router-1.5B](https://huggingface.co/katanemo/Arch-Router-1.5B) as the routing model, without running a separate router service. The UI exposes a virtual model alias called \"Omni\" (configurable) that, when selected, chooses the best route/model for each message.\n\n- Provide a routes policy JSON via `LLM_ROUTER_ROUTES_PATH`. 
No sample file ships with this branch, so you must point the variable to a JSON array you create yourself (for example, commit one in your project like `config/routes.chat.json`). Each route entry needs `name`, `description`, `primary_model`, and optional `fallback_models`.\n- Configure the Arch router selection endpoint with `LLM_ROUTER_ARCH_BASE_URL` (OpenAI-compatible `/chat/completions`) and `LLM_ROUTER_ARCH_MODEL` (e.g. `router/omni`). The Arch call reuses `OPENAI_API_KEY` for auth.\n- Map `other` to a concrete route via `LLM_ROUTER_OTHER_ROUTE` (default: `casual_conversation`). If Arch selection fails, calls fall back to `LLM_ROUTER_FALLBACK_MODEL`.\n- Selection timeout can be tuned via `LLM_ROUTER_ARCH_TIMEOUT_MS` (default 10000).\n- Omni alias configuration: `PUBLIC_LLM_ROUTER_ALIAS_ID` (default `omni`), `PUBLIC_LLM_ROUTER_DISPLAY_NAME` (default `Omni`), and optional `PUBLIC_LLM_ROUTER_LOGO_URL`.\n\nWhen you select Omni in the UI, Chat UI will:\n\n- Call the Arch endpoint once (non-streaming) to pick the best route for the last turns.\n- Emit RouterMetadata immediately (route and actual model used) so the UI can display it.\n- Stream from the selected model via your configured `OPENAI_BASE_URL`. 
On errors, it tries route fallbacks.\n\n## Building\n\nTo create a production version of your app:\n\n```bash\nnpm run build\n```\n\nYou can preview the production build with `npm run preview`.\n\n\u003e To deploy your app, you may need to install an [adapter](https://kit.svelte.dev/docs/adapters) for your target environment.\n","funding_links":[],"categories":["TypeScript","HarmonyOS","Svelte","Others","A01_文本生成_文本对话","\u003cimg src=\"./assets/message-square.svg\" width=\"16\" height=\"16\" style=\"vertical-align: middle;\"\u003e Frontends","Sites","Repos","Project Submissions","推理 Inference","UIs","Web 应用","📚 Contents","chatgpt","Agent Interfaces","Inference UI","Open-Source Local LLM Projects","📋 Contents","Applications","App"],"sub_categories":["Windows Manager","大语言对话模型及数据","Web applications","🖥️ 12. User Interfaces \u0026 Self-hosted Platforms","AI Platforms"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhuggingface%2Fchat-ui","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhuggingface%2Fchat-ui","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhuggingface%2Fchat-ui/lists"}