{"id":32541123,"url":"https://github.com/fluidinference/fluid-server","last_synced_at":"2025-10-28T15:57:50.315Z","repository":{"id":312018662,"uuid":"1035654796","full_name":"FluidInference/fluid-server","owner":"FluidInference","description":"Local AI server for your Windows apps.","archived":false,"fork":false,"pushed_at":"2025-09-30T00:11:34.000Z","size":796,"stargazers_count":3,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-09-30T02:26:21.383Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/FluidInference.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2025-08-10T21:22:19.000Z","updated_at":"2025-09-30T00:11:37.000Z","dependencies_parsed_at":"2025-08-28T09:44:18.109Z","dependency_job_id":"4ffe20dc-e3c3-4198-ab11-eb6b7d883bb2","html_url":"https://github.com/FluidInference/fluid-server","commit_stats":null,"previous_names":["fluidinference/fluid-server"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/FluidInference/fluid-server","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FluidInference%2Ffluid-server","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FluidInference%2Ffluid-server/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FluidInference%2Ffluid-server/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FluidInference%2Ffluid-server/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/FluidInference","download_url":"https://codeload.github.com/FluidInference/fluid-server/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FluidInference%2Ffluid-server/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":281467277,"owners_count":26506462,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-28T02:00:06.022Z","response_time":60,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-10-28T15:57:44.715Z","updated_at":"2025-10-28T15:57:50.308Z","avatar_url":"https://github.com/FluidInference.png","language":"Python","readme":"# Fluid Server: Local AI server for your Windows 
apps\n\n[![Discord](https://img.shields.io/badge/Discord-Join%20Chat-7289da.svg)](https://discord.gg/WNsvaCtmDe)\n[![Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue)](https://huggingface.co/collections/FluidInference)\n\n**THIS PROJECT IS UNDER ACTIVE DEVELOPMENT** Its not ready for production usage but serves as a good reference for hwo to run whisper on Qualcomm and Intel NPUs\n\nA portable, packaged OpenAI-compatible server for Windows desktop applications. LLM, Transcription, embeddings, and vector DB, all out of the box.\n\nNote that this does require you to run the .exe as a sepearte async process, like a local serving server in your application, and you will need to make requests to serve inference.\n\n## Features\n\n**Core Capabilities**\n- **LLM Chat Completions** - OpenAI-compatible API with streaming, backed by llama.cpp and OpenVINO \n- **Audio Transcription** - Whisper models with NPU acceleration, backed by OpenVINO and Qualcomm QNN\n- **Text Embeddings** - Vector embeddings for search and RAG\n- **Vector Database** - LanceDB integration for multimodal storage\n\n**Hardware Acceleration**\n- **Intel NPU** via OpenVINO backend\n- **Qualcomm NPU** via QNN (Snapdragon X Elite)\n- **Vulkan GPU** via llama-cpp\n\n## Quick Start\n\n### 1. Download or Build\n\n**Option A: Download Release**\n- Download `fluid-server.exe` from [releases](https://github.com/FluidInference/fluid-server/releases)\n\n**Option B: Run from Source**\n```powershell\n# Install dependencies and run\nuv sync\nuv run\n```\n\n### 2. Run the Server\n\n```powershell\n# Run with default settings\n.\\dist\\fluid-server.exe\n\n# Or with custom options\n.\\dist\\fluid-server.exe --host 127.0.0.1 --port 8080\n```\n\n### 3. Test the API\n\n- **Health Check**: http://localhost:8080/health\n- **API Docs**: http://localhost:8080/docs\n- **Models**: http://localhost:8080/v1/models\n\n## Usage Examples\n\n### Basic Chat Completion\n\n```bash\ncurl -X POST http://localhost:8080/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\": \"qwen3-8b-int8-ov\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}]}'\n```\n\n### Python Integration\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(base_url=\"http://localhost:8080/v1\", api_key=\"local\")\n\n# Chat with streaming\nfor chunk in client.chat.completions.create(\n    model=\"qwen3-8b-int8-ov\",\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}],\n    stream=True\n):\n    print(chunk.choices[0].delta.content or \"\", end=\"\")\n```\n\n### Audio Transcription\n\n```bash\ncurl -X POST http://localhost:8080/v1/audio/transcriptions \\\n  -F \"file=@audio.wav\" \\\n  -F \"model=whisper-large-v3-turbo-qnn\"\n```\n\n## Documentation\n\n📖 **Comprehensive Guides**\n- [NPU Support Guide](docs/npu-support.md) - Intel \u0026 Qualcomm NPU configuration\n- [Integration Guide](docs/integration-guide.md) - Python, .NET, Node.js examples\n- [Development Guide](docs/development.md) - Setup, building, and contributing\n- [LanceDB Integration](docs/lancedb.md) - Vector database and embeddings\n- [GGUF Model Support](docs/GGUF-model-support.md) - Using any GGUF model\n- [Compilation Guide](docs/compilation-guide.md) - Build system details\n\n## FAQ\n\n**Why Python?** Best ML ecosystem support and PyInstaller packaging.\n\n**Why not llama.cpp?** We support multiple runtimes and AI accelerators beyond GGML.\n\n## Acknowledgements\n\nBuilt using `ty`, `FastAPI`, `Pydantic`, `ONNX Runtime`, `OpenAI Whisper`, and 
## Documentation

📖 **Comprehensive Guides**
- [NPU Support Guide](docs/npu-support.md) - Intel & Qualcomm NPU configuration
- [Integration Guide](docs/integration-guide.md) - Python, .NET, Node.js examples
- [Development Guide](docs/development.md) - Setup, building, and contributing
- [LanceDB Integration](docs/lancedb.md) - Vector database and embeddings
- [GGUF Model Support](docs/GGUF-model-support.md) - Using any GGUF model
- [Compilation Guide](docs/compilation-guide.md) - Build system details

## FAQ

**Why Python?** Best ML ecosystem support, and PyInstaller makes packaging a standalone .exe straightforward.

**Why not just llama.cpp?** We support multiple runtimes and AI accelerators beyond GGML.

## Acknowledgements

Built using `ty`, `FastAPI`, `Pydantic`, `ONNX Runtime`, `OpenAI Whisper`, and various other AI libraries.

**Runtime Technologies:**
- `OpenVINO` - Intel NPU and GPU acceleration
- `Qualcomm QNN` - Snapdragon NPU optimization with HTP backend
- `ONNX Runtime` - Cross-platform AI inference