{"id":28730571,"url":"https://github.com/deepsense-ai/ragbits","last_synced_at":"2026-02-25T15:16:00.789Z","repository":{"id":258115296,"uuid":"851036791","full_name":"deepsense-ai/ragbits","owner":"deepsense-ai","description":"Building blocks for rapid development of GenAI applications ","archived":false,"fork":false,"pushed_at":"2026-02-18T13:12:21.000Z","size":37720,"stargazers_count":1616,"open_issues_count":53,"forks_count":133,"subscribers_count":8,"default_branch":"main","last_synced_at":"2026-02-18T14:56:08.861Z","etag":null,"topics":["agents","document-search","evaluation","guardrails","llms","optimization","prompts","rag","vector-stores"],"latest_commit_sha":null,"homepage":"https://ragbits.deepsense.ai","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/deepsense-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-09-02T10:00:49.000Z","updated_at":"2026-02-17T13:15:59.000Z","dependencies_parsed_at":"2026-01-27T12:07:00.201Z","dependency_job_id":null,"html_url":"https://github.com/deepsense-ai/ragbits","commit_stats":null,"previous_names":["deepsense-ai/ragbits"],"tags_count":110,"template":false,"template_full_name":null,"purl":"pkg:github/deepsense-ai/ragbits","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deepsense-ai%2Fragbits","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deepsense-ai%2Fragbits/tags","releases_url":"https://repos.e
cosyste.ms/api/v1/hosts/GitHub/repositories/deepsense-ai%2Fragbits/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deepsense-ai%2Fragbits/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/deepsense-ai","download_url":"https://codeload.github.com/deepsense-ai/ragbits/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deepsense-ai%2Fragbits/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29823723,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-25T13:05:48.164Z","status":"ssl_error","status_checked_at":"2026-02-25T13:05:26.658Z","response_time":61,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","document-search","evaluation","guardrails","llms","optimization","prompts","rag","vector-stores"],"created_at":"2025-06-15T18:01:22.228Z","updated_at":"2026-02-25T15:16:00.783Z","avatar_url":"https://github.com/deepsense-ai.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\n\u003ch1\u003e🐰 Ragbits\u003c/h1\u003e\n\n*Building blocks for rapid development of GenAI applications*\n\n[Homepage](https://deepsense.ai/rd-hub/ragbits/) | [Documentation](https://ragbits.deepsense.ai) | [Contact](https://deepsense.ai/contact/)\n\n\u003ca 
href=\"https://trendshift.io/repositories/13966\" target=\"_blank\"\u003e\u003cimg src=\"https://trendshift.io/api/badge/repositories/13966\" alt=\"deepsense-ai%2Fragbits | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"/\u003e\u003c/a\u003e\n\n\n[![PyPI - License](https://img.shields.io/pypi/l/ragbits)](https://pypi.org/project/ragbits)\n[![PyPI - Version](https://img.shields.io/pypi/v/ragbits)](https://pypi.org/project/ragbits)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ragbits)](https://pypi.org/project/ragbits)\n\n\u003c/div\u003e\n\n---\n\n## Features\n\n### 🔨 Build Reliable \u0026 Scalable GenAI Apps\n\n- **Swap LLMs anytime** – Switch between [100+ LLMs via LiteLLM](https://ragbits.deepsense.ai/stable/how-to/llms/use_llms/) or run [local models](https://ragbits.deepsense.ai/stable/how-to/llms/use_local_llms/).\n- **Type-safe LLM calls** – Use Python generics to [enforce strict type safety](https://ragbits.deepsense.ai/stable/how-to/prompts/use_prompting/#how-to-configure-prompts-output-data-type) in model interactions.\n- **Bring your own vector store** – Connect to [Qdrant](https://ragbits.deepsense.ai/stable/api_reference/core/vector-stores/#ragbits.core.vector_stores.qdrant.QdrantVectorStore), [PgVector](https://ragbits.deepsense.ai/stable/api_reference/core/vector-stores/#ragbits.core.vector_stores.pgvector.PgVectorStore), and more with built-in support.\n- **Developer tools included** – [Manage vector stores](https://ragbits.deepsense.ai/stable/cli/main/#ragbits-vector-store), query pipelines, and [test prompts from your terminal](https://ragbits.deepsense.ai/stable/quickstart/quickstart1_prompts/#testing-the-prompt-from-the-cli).\n- **Modular installation** – Install only what you need, reducing dependencies and improving performance.\n\n### 📚 Fast \u0026 Flexible RAG Processing\n\n- **Ingest 20+ formats** – Process PDFs, HTML, spreadsheets, presentations, and more. 
Process data using [Docling](https://github.com/docling-project/docling), [Unstructured](https://github.com/Unstructured-IO/unstructured), or create a custom parser.
\n- **Handle complex data** – Extract tables, images, and structured content with built-in VLM support.
\n- **Connect to any data source** – Use prebuilt connectors for S3, GCS, Azure, or implement your own.
\n- **Scale ingestion** – Process large datasets quickly with [Ray-based parallel processing](https://ragbits.deepsense.ai/stable/how-to/document_search/distributed_ingestion/#how-to-ingest-documents-in-a-distributed-fashion).
\n\n### 🤖 Build Multi-Agent Workflows with Ease
\n\n- **Multi-agent coordination** – Create teams of specialized agents with role-based collaboration using the [A2A protocol](https://ragbits.deepsense.ai/stable/tutorials/agents) for interoperability.
\n- **Real-time data integration** – Leverage [Model Context Protocol (MCP)](https://ragbits.deepsense.ai/stable/how-to/agents/provide_mcp_tools) for live web access, database queries, and API integrations.
\n- **Conversation state management** – Maintain context across interactions with [automatic history tracking](https://ragbits.deepsense.ai/stable/how-to/agents/define_and_use_agents/#conversation-history).
\n\n### 🚀 Deploy \u0026 Monitor with Confidence
\n\n- **Real-time observability** – Track performance with [OpenTelemetry](https://ragbits.deepsense.ai/stable/how-to/project/use_tracing/#opentelemetry-trace-handler) and [CLI insights](https://ragbits.deepsense.ai/stable/how-to/project/use_tracing/#cli-trace-handler).
\n- **Built-in testing** – Validate prompts [with promptfoo](https://ragbits.deepsense.ai/stable/how-to/prompts/promptfoo/) before deployment.
\n- **Auto-optimization** – Continuously evaluate and refine model performance.
\n- **Chat UI** – Deploy a [chatbot interface](https://ragbits.deepsense.ai/stable/how-to/chatbots/api/) with API, persistence, and user feedback.
\n\n## Installation
\n\n### Stable Release
\n\nTo get started quickly, you can install the latest stable release with:
\n\n```sh\npip install ragbits\n```
\n\n### Nightly Builds
\n\nFor the latest development features, you can install nightly builds that are automatically published from the `develop` branch:
\n\n```sh\npip install ragbits --pre\n```
\n\n**Note:** Nightly builds include the latest features and bug fixes but may be less stable than official releases. They follow the version format `X.Y.Z.devYYYYMMDDHHMM`.
\n\n### Package Contents
\n\nThis is a starter bundle of packages, containing:
\n\n- [`ragbits-core`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-core) - fundamental tools for working with prompts, LLMs, and vector databases.
\n- [`ragbits-agents`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-agents) - abstractions for building agentic systems.
\n- [`ragbits-document-search`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-document-search) - retrieval and ingestion pipelines for knowledge bases.
\n- [`ragbits-evaluate`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-evaluate) - unified evaluation framework for Ragbits components.
\n- [`ragbits-guardrails`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-guardrails) - utilities for ensuring the safety and relevance of responses.
\n- [`ragbits-chat`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-chat) - full-stack infrastructure for building conversational AI applications.
\n- [`ragbits-cli`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-cli) - `ragbits` shell command for interacting with Ragbits components.
\n\nAlternatively, you can use individual components of the stack by installing their respective packages.
\n\n## Quickstart
\n\n### Basics
\n\nTo define a prompt and run an LLM:
\n\n```python\nimport asyncio\nfrom pydantic import BaseModel\nfrom ragbits.core.llms import LiteLLM\nfrom ragbits.core.prompt import Prompt
\n\nclass QuestionAnswerPromptInput(BaseModel):\n    question: str
\n\nclass QuestionAnswerPrompt(Prompt[QuestionAnswerPromptInput, str]):\n    system_prompt = \"\"\"\n    You are a question answering agent. Answer the question to the best of your ability.\n    \"\"\"
\n    user_prompt = \"\"\"\n    Question: {{ question }}\n    \"\"\"
\n\nllm = LiteLLM(model_name=\"gpt-4.1-nano\")
\n\nasync def main() -\u003e None:\n    prompt = QuestionAnswerPrompt(QuestionAnswerPromptInput(question=\"What are high memory and low memory on Linux?\"))\n    response = await llm.generate(prompt)\n    print(response)
\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```
\n\n### Document Search
\n\nTo build and query a simple vector store index:
\n\n```python\nimport asyncio\nfrom ragbits.core.embeddings import LiteLLMEmbedder\nfrom ragbits.core.vector_stores import InMemoryVectorStore\nfrom ragbits.document_search import DocumentSearch
\n\nembedder = LiteLLMEmbedder(model_name=\"text-embedding-3-small\")\nvector_store = InMemoryVectorStore(embedder=embedder)\ndocument_search = DocumentSearch(vector_store=vector_store)
\n\nasync def run() -\u003e None:\n    await document_search.ingest(\"web://https://arxiv.org/pdf/1706.03762\")\n    result = await document_search.search(\"What are the key findings presented in this paper?\")\n    print(result)
\n\nif __name__ == \"__main__\":\n    asyncio.run(run())\n```
\n\n### Retrieval-Augmented Generation
\n\nTo build a simple RAG pipeline:
\n\n```python\nimport asyncio\nfrom collections.abc import Iterable\nfrom pydantic import BaseModel\nfrom ragbits.core.embeddings import LiteLLMEmbedder\nfrom ragbits.core.llms import LiteLLM\nfrom ragbits.core.prompt import Prompt\nfrom ragbits.core.vector_stores import InMemoryVectorStore\nfrom ragbits.document_search import DocumentSearch\nfrom ragbits.document_search.documents.element import Element
\n\nclass QuestionAnswerPromptInput(BaseModel):\n    question: str\n    context: Iterable[Element]
\n\nclass QuestionAnswerPrompt(Prompt[QuestionAnswerPromptInput, str]):\n    system_prompt = \"\"\"\n    You are a question answering agent. Answer the provided question using the given context.\n    If the context does not contain enough information, refuse to answer.\n    \"\"\"
\n    user_prompt = \"\"\"\n    Question: {{ question }}\n    Context: {% for chunk in context %}{{ chunk.text_representation }}{%- endfor %}\n    \"\"\"
\n\nllm = LiteLLM(model_name=\"gpt-4.1-nano\")\nembedder = LiteLLMEmbedder(model_name=\"text-embedding-3-small\")\nvector_store = InMemoryVectorStore(embedder=embedder)\ndocument_search = DocumentSearch(vector_store=vector_store)
\n\nasync def run() -\u003e None:\n    question = \"What are the key findings presented in this paper?\"
\n\n    await document_search.ingest(\"web://https://arxiv.org/pdf/1706.03762\")\n    chunks = await document_search.search(question)
\n\n    prompt = QuestionAnswerPrompt(QuestionAnswerPromptInput(question=question, context=chunks))\n    response = await llm.generate(prompt)\n    print(response)
\n\nif __name__ == \"__main__\":\n    asyncio.run(run())\n```
\n\n### Agentic RAG
\n\nTo build an agentic RAG pipeline:
\n\n```python\nimport asyncio\nfrom ragbits.agents import Agent\nfrom ragbits.core.embeddings import LiteLLMEmbedder\nfrom ragbits.core.llms import LiteLLM\nfrom ragbits.core.vector_stores import InMemoryVectorStore\nfrom ragbits.document_search import DocumentSearch
\n\nembedder = LiteLLMEmbedder(model_name=\"text-embedding-3-small\")\nvector_store = InMemoryVectorStore(embedder=embedder)\ndocument_search = DocumentSearch(vector_store=vector_store)
\n\nllm = LiteLLM(model_name=\"gpt-4.1-nano\")\nagent = Agent(llm=llm, tools=[document_search.search])
\n\nasync def main() -\u003e None:\n    await document_search.ingest(\"web://https://arxiv.org/pdf/1706.03762\")\n    response = await agent.run(\"What are the key findings presented in this paper?\")\n    print(response.content)
\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```
\n\n### Chat UI
\n\nTo expose your GenAI application through the Ragbits API:
\n\n```python\nfrom collections.abc import AsyncGenerator\nfrom ragbits.agents import Agent, ToolCallResult\nfrom ragbits.chat.api import RagbitsAPI\nfrom ragbits.chat.interface import ChatInterface\nfrom ragbits.chat.interface.types import ChatContext, ChatResponse, LiveUpdateType\nfrom ragbits.core.embeddings import LiteLLMEmbedder\nfrom ragbits.core.llms import LiteLLM, ToolCall\nfrom ragbits.core.prompt import ChatFormat\nfrom ragbits.core.vector_stores import InMemoryVectorStore\nfrom ragbits.document_search import DocumentSearch
\n\nembedder = LiteLLMEmbedder(model_name=\"text-embedding-3-small\")\nvector_store = InMemoryVectorStore(embedder=embedder)\ndocument_search = DocumentSearch(vector_store=vector_store)
\n\nllm = LiteLLM(model_name=\"gpt-4.1-nano\")\nagent = Agent(llm=llm, tools=[document_search.search])
\n\nclass MyChat(ChatInterface):\n    async def setup(self) -\u003e None:\n        await document_search.ingest(\"web://https://arxiv.org/pdf/1706.03762\")
\n\n    async def chat(\n        self,\n        message: str,\n        history: ChatFormat,\n        context: ChatContext,\n    ) -\u003e AsyncGenerator[ChatResponse]:\n        async for result in agent.run_streaming(message):\n            match result:\n                case str():\n                    yield self.create_live_update(\n                        update_id=\"1\",\n                        type=LiveUpdateType.START,\n                        label=\"Answering...\",\n                    )\n                    yield self.create_text_response(result)\n                case ToolCall():\n                    yield self.create_live_update(\n                        update_id=\"2\",\n                        type=LiveUpdateType.START,\n                        label=\"Searching...\",\n                    )\n                case ToolCallResult():\n                    yield self.create_live_update(\n                        update_id=\"2\",\n                        type=LiveUpdateType.FINISH,\n                        label=\"Search\",\n                        description=f\"Found {len(result.result)} relevant chunks.\",\n                    )
\n\n        yield self.create_live_update(\n            update_id=\"1\",\n            type=LiveUpdateType.FINISH,\n            label=\"Answer\",\n        )
\n\nif __name__ == \"__main__\":\n    api = RagbitsAPI(MyChat)\n    api.run()\n```
\n\n## Rapid development
\n\nCreate Ragbits projects from templates:
\n\n```sh\nuvx create-ragbits-app\n```
\n\nExplore the `create-ragbits-app` repo [here](https://github.com/deepsense-ai/create-ragbits-app). If you have a new idea for a template, feel free to contribute!
\n\n## Documentation
\n\n- [Tutorials](https://ragbits.deepsense.ai/stable/tutorials/intro) - Get started with Ragbits in a few minutes
\n- [How-to](https://ragbits.deepsense.ai/stable/how-to/prompts/use_prompting) - Learn how to use Ragbits in your projects
\n- [CLI](https://ragbits.deepsense.ai/stable/cli/main) - Learn how to run Ragbits in your terminal
\n- [API reference](https://ragbits.deepsense.ai/stable/api_reference/core/prompt) - Explore the underlying Ragbits API
\n\n## Contributing
\n\nWe welcome contributions! Please read [CONTRIBUTING.md](https://github.com/deepsense-ai/ragbits/tree/main/CONTRIBUTING.md) for more information.
\n\n## License
\n\nRagbits is licensed under the [MIT License](https://github.com/deepsense-ai/ragbits/tree/main/LICENSE).
\n","funding_links":[],"categories":["A01_文本生成_文本对话","Python","Repos","Tools","Agent Integration \u0026 Deployment Tools"],"sub_categories":["大语言对话模型及数据","Agent Frameworks","Stateful Serverless Frameworks"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepsense-ai%2Fragbits","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeepsense-ai%2Fragbits","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepsense-ai%2Fragbits/lists"}