{"id":13737385,"url":"https://github.com/livekit/agents","last_synced_at":"2026-04-16T02:00:56.531Z","repository":{"id":217894792,"uuid":"707441527","full_name":"livekit/agents","owner":"livekit","description":"A framework for building realtime voice AI agents 🤖🎙️📹 ","archived":false,"fork":false,"pushed_at":"2026-04-10T04:06:35.000Z","size":26305,"stargazers_count":9987,"open_issues_count":546,"forks_count":2999,"subscribers_count":97,"default_branch":"main","last_synced_at":"2026-04-10T04:08:05.641Z","etag":null,"topics":["agents","ai","openai","real-time","video","voice"],"latest_commit_sha":null,"homepage":"https://docs.livekit.io/agents","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/livekit.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":"NOTICE","maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2023-10-19T23:00:55.000Z","updated_at":"2026-04-10T03:03:19.000Z","dependencies_parsed_at":"2024-03-12T02:43:40.228Z","dependency_job_id":"4bfea65a-b27e-416e-8442-c2f703974852","html_url":"https://github.com/livekit/agents","commit_stats":{"total_commits":913,"total_committers":62,"mean_commits":"14.725806451612904","dds":0.6725082146768894,"last_synced_commit":"37bbfccb0166b174c3cb399497f6b7465f97311b"},"previous_names":["livekit/agents","livekit/python-agents"],"tags_count":353,"template":false,"template_full_name":null,"purl":"pkg:github/livekit/agents","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposito
ries/livekit%2Fagents","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/livekit%2Fagents/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/livekit%2Fagents/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/livekit%2Fagents/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/livekit","download_url":"https://codeload.github.com/livekit/agents/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/livekit%2Fagents/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31820372,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-14T18:05:02.291Z","status":"ssl_error","status_checked_at":"2026-04-14T18:05:01.765Z","response_time":153,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai","openai","real-time","video","voice"],"created_at":"2024-08-03T03:01:45.998Z","updated_at":"2026-04-16T02:00:56.518Z","avatar_url":"https://github.com/livekit.png","language":"Python","readme":"\u003c!--BEGIN_BANNER_IMAGE--\u003e\n\n\u003cpicture\u003e\n  \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"/.github/banner_dark.png\"\u003e\n  \u003csource media=\"(prefers-color-scheme: light)\" srcset=\"/.github/banner_light.png\"\u003e\n  \u003cimg style=\"width:100%;\" alt=\"The LiveKit icon, the name 
of the repository and some sample code in the background.\" src=\"https://raw.githubusercontent.com/livekit/agents/main/.github/banner_light.png\"\u003e\n\u003c/picture\u003e\n\n\u003c!--END_BANNER_IMAGE--\u003e\n\u003cbr /\u003e\n\n![PyPI - Version](https://img.shields.io/pypi/v/livekit-agents)\n[![PyPI Downloads](https://static.pepy.tech/badge/livekit-agents/month)](https://pepy.tech/projects/livekit-agents)\n[![Slack community](https://img.shields.io/endpoint?url=https%3A%2F%2Flivekit.io%2Fbadges%2Fslack)](https://livekit.io/join-slack)\n[![Twitter Follow](https://img.shields.io/twitter/follow/livekit)](https://twitter.com/livekit)\n[![Ask DeepWiki for understanding the codebase](https://deepwiki.com/badge.svg)](https://deepwiki.com/livekit/agents)\n[![License](https://img.shields.io/github/license/livekit/livekit)](https://github.com/livekit/livekit/blob/master/LICENSE)\n\n\u003cbr /\u003e\n\nLooking for the JS/TS library? Check out [AgentsJS](https://github.com/livekit/agents-js)\n\n## What is Agents?\n\n\u003c!--BEGIN_DESCRIPTION--\u003e\n\nThe Agent Framework is designed for building realtime, programmable participants\nthat run on servers. 
Use it to create conversational, multi-modal voice\nagents that can see, hear, and understand.\n\n\u003c!--END_DESCRIPTION--\u003e\n\n## Features\n\n- **Flexible integrations**: A comprehensive ecosystem to mix and match the right STT, LLM, TTS, and Realtime API to suit your use case.\n- **Integrated job scheduling**: Built-in task scheduling and distribution with [dispatch APIs](https://docs.livekit.io/agents/build/dispatch/) to connect end users to agents.\n- **Extensive WebRTC clients**: Build client applications using LiveKit's open-source SDK ecosystem, supporting all major platforms.\n- **Telephony integration**: Works seamlessly with LiveKit's [telephony stack](https://docs.livekit.io/sip/), allowing your agent to make calls to or receive calls from phones.\n- **Exchange data with clients**: Use [RPCs](https://docs.livekit.io/home/client/data/rpc/) and other [Data APIs](https://docs.livekit.io/home/client/data/) to seamlessly exchange data with clients.\n- **Semantic turn detection**: Uses a transformer model to detect when a user is done with their turn, helping to reduce interruptions.\n- **MCP support**: Native support for MCP. 
Integrate tools provided by MCP servers with one line of code.\n- **Built-in test framework**: Write tests and use judges to ensure your agent is performing as expected.\n- **Open-source**: Fully open-source, allowing you to run the entire stack on your own servers, including [LiveKit server](https://github.com/livekit/livekit), one of the most widely used WebRTC media servers.\n\n## Installation\n\nTo install the core Agents library, along with plugins for popular model providers:\n\n```bash\npip install \"livekit-agents[openai,silero,deepgram,cartesia,turn-detector]~=1.4\"\n```\n\n## Docs and guides\n\nDocumentation on the framework and how to use it can be found [here](https://docs.livekit.io/agents/).\n\n### Building with AI coding agents\n\nIf you're using an AI coding assistant to build with LiveKit Agents, we recommend the following setup for the best results:\n\n1. **Install the [LiveKit Docs MCP server](https://docs.livekit.io/mcp)** — Gives your coding agent access to up-to-date LiveKit documentation, code search across LiveKit repositories, and working examples.\n\n2. 
**Install the [LiveKit Agent Skill](https://github.com/livekit/agent-skills)** — Provides your coding agent with architectural guidance and best practices for building voice AI applications, including workflow design, handoffs, tasks, and testing patterns.\n\n   ```shell\n   npx skills add livekit/agent-skills --skill livekit-agents\n   ```\n\nThe Agent Skill works best alongside the MCP server: the skill teaches your agent *how to approach* building with LiveKit, while the MCP server provides the *current API details* to implement it correctly.\n\n## Core concepts\n\n- Agent: An LLM-based application with defined instructions.\n- AgentSession: A container for agents that manages interactions with end users.\n- entrypoint: The starting point for an interactive session, similar to a request handler in a web server.\n- AgentServer: The main process that coordinates job scheduling and launches agents for user sessions.\n\n## Usage\n\n### Simple voice agent\n\n---\n\n```python\nfrom livekit.agents import (\n    Agent,\n    AgentServer,\n    AgentSession,\n    JobContext,\n    RunContext,\n    cli,\n    function_tool,\n    inference,\n)\nfrom livekit.plugins import silero\n\n\n@function_tool\nasync def lookup_weather(\n    context: RunContext,\n    location: str,\n):\n    \"\"\"Used to look up weather information.\"\"\"\n\n    return {\"weather\": \"sunny\", \"temperature\": 70}\n\n\nserver = AgentServer()\n\n\n@server.rtc_session()\nasync def entrypoint(ctx: JobContext):\n    session = AgentSession(\n        vad=silero.VAD.load(),\n        # any combination of STT, LLM, TTS, or realtime API can be used\n        # this example shows LiveKit Inference, a unified API to access different models via LiveKit Cloud\n        # to use model provider keys directly, replace with the following:\n        # from livekit.plugins import deepgram, openai, cartesia\n        # stt=deepgram.STT(model=\"nova-3\"),\n        # llm=openai.LLM(model=\"gpt-4.1-mini\"),\n        # 
tts=cartesia.TTS(model=\"sonic-3\", voice=\"9626c31c-bec5-4cca-baa8-f8ba9e84c8bc\"),\n        stt=inference.STT(\"deepgram/nova-3\", language=\"multi\"),\n        llm=inference.LLM(\"openai/gpt-4.1-mini\"),\n        tts=inference.TTS(\"cartesia/sonic-3\", voice=\"9626c31c-bec5-4cca-baa8-f8ba9e84c8bc\"),\n    )\n\n    agent = Agent(\n        instructions=\"You are a friendly voice assistant built by LiveKit.\",\n        tools=[lookup_weather],\n    )\n\n    await session.start(agent=agent, room=ctx.room)\n    await session.generate_reply(instructions=\"greet the user and ask about their day\")\n\n\nif __name__ == \"__main__\":\n    cli.run_app(server)\n```\n\nYou'll need the following environment variables for this example:\n\n- LIVEKIT_URL\n- LIVEKIT_API_KEY\n- LIVEKIT_API_SECRET\n\n### Multi-agent handoff\n\n---\n\nThis code snippet is abbreviated. For the full example, see [multi_agent.py](examples/voice_agents/multi_agent.py).\n\n```python\n...\nclass IntroAgent(Agent):\n    def __init__(self) -\u003e None:\n        super().__init__(\n            instructions=\"You are a storyteller. 
Your goal is to gather a few pieces of information from the user to make the story personalized and engaging.\"\n            \" Ask the user for their name and where they are from.\"\n        )\n\n    async def on_enter(self):\n        self.session.generate_reply(instructions=\"greet the user and gather information\")\n\n    @function_tool\n    async def information_gathered(\n        self,\n        context: RunContext,\n        name: str,\n        location: str,\n    ):\n        \"\"\"Called when the user has provided the information needed to make the story personalized and engaging.\n\n        Args:\n            name: The name of the user\n            location: The location of the user\n        \"\"\"\n\n        context.userdata.name = name\n        context.userdata.location = location\n\n        story_agent = StoryAgent(name, location)\n        return story_agent, \"Let's start the story!\"\n\n\nclass StoryAgent(Agent):\n    def __init__(self, name: str, location: str) -\u003e None:\n        super().__init__(\n            instructions=f\"You are a storyteller. 
Use the user's information in order to make the story personalized.\"\n            f\" The user's name is {name}, from {location}\",\n            # override the default model, switching to Realtime API from standard LLMs\n            llm=openai.realtime.RealtimeModel(voice=\"echo\"),\n            chat_ctx=chat_ctx,\n        )\n\n    async def on_enter(self):\n        self.session.generate_reply()\n\n\n@server.rtc_session()\nasync def entrypoint(ctx: JobContext):\n    userdata = StoryData()\n    session = AgentSession[StoryData](\n        vad=silero.VAD.load(),\n        stt=\"deepgram/nova-3\",\n        llm=\"openai/gpt-4.1-mini\",\n        tts=\"cartesia/sonic-3:9626c31c-bec5-4cca-baa8-f8ba9e84c8bc\",\n        userdata=userdata,\n    )\n\n    await session.start(\n        agent=IntroAgent(),\n        room=ctx.room,\n    )\n...\n```\n\n### Testing\n\nAutomated tests are essential for building reliable agents, especially with the non-deterministic behavior of LLMs. LiveKit Agents includes native test integration to help you create dependable agents.\n\n```python\n@pytest.mark.asyncio\nasync def test_no_availability() -\u003e None:\n    llm = google.LLM()\n    async with AgentSession(llm=llm) as sess:\n        await sess.start(MyAgent())\n        result = await sess.run(\n            user_input=\"Hello, I need to place an order.\"\n        )\n        result.expect.skip_next_event_if(type=\"message\", role=\"assistant\")\n        result.expect.next_event().is_function_call(name=\"start_order\")\n        result.expect.next_event().is_function_call_output()\n        await (\n            result.expect.next_event()\n            .is_message(role=\"assistant\")\n            .judge(llm, intent=\"assistant should be asking the user what they would like\")\n        )\n```\n\n## Examples\n\nFor more examples and detailed setup instructions, see the [examples directory](examples/). 
For even more examples, see the [python-agents-examples](https://github.com/livekit-examples/python-agents-examples) repository.\n\n\u003ctable\u003e\n\u003ctr\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e🎙️ Starter Agent\u003c/h3\u003e\n\u003cp\u003eA starter agent optimized for voice conversations.\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/voice_agents/basic_agent.py\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e🔄 Multi-user push to talk\u003c/h3\u003e\n\u003cp\u003eResponds to multiple users in the room via push-to-talk.\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/voice_agents/push_to_talk.py\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\n\u003ctr\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e🎵 Background audio\u003c/h3\u003e\n\u003cp\u003eBackground ambient and thinking audio to improve realism.\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/voice_agents/background_audio.py\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e🛠️ Dynamic tool creation\u003c/h3\u003e\n\u003cp\u003eCreating function tools dynamically.\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/voice_agents/dynamic_tool_creation.py\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\n\u003ctr\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e☎️ Outbound caller\u003c/h3\u003e\n\u003cp\u003eAgent that makes outbound phone calls\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"https://github.com/livekit-examples/outbound-caller-python\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e📋 Structured output\u003c/h3\u003e\n\u003cp\u003eUsing structured output from LLM to guide TTS tone.\u003c/p\u003e\n\u003cp\u003e\n\u003ca 
href=\"examples/voice_agents/structured_output.py\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\n\u003ctr\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e🔌 MCP support\u003c/h3\u003e\n\u003cp\u003eUse tools from MCP servers\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/voice_agents/mcp\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e💬 Text-only agent\u003c/h3\u003e\n\u003cp\u003eSkip voice altogether and use the same code for text-only integrations\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/other/text_only.py\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\n\u003ctr\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e📝 Multi-user transcriber\u003c/h3\u003e\n\u003cp\u003eProduce transcriptions from all users in the room\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/other/transcription/multi-user-transcriber.py\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e🎥 Video avatars\u003c/h3\u003e\n\u003cp\u003eAdd an AI avatar with Tavus, Hedra, Bithuman, LemonSlice, and more\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/avatar_agents/\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\n\u003ctr\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e🍽️ Restaurant ordering and reservations\u003c/h3\u003e\n\u003cp\u003eFull example of an agent that handles calls for a restaurant.\u003c/p\u003e\n\u003cp\u003e\n\u003ca href=\"examples/voice_agents/restaurant_agent.py\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003ctd width=\"50%\"\u003e\n\u003ch3\u003e👁️ Gemini Live vision\u003c/h3\u003e\n\u003cp\u003eFull example (including iOS app) of Gemini Live agent that can see.\u003c/p\u003e\n\u003cp\u003e\n\u003ca 
href=\"https://github.com/livekit-examples/vision-demo\"\u003eCode\u003c/a\u003e\n\u003c/p\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\n\u003c/table\u003e\n\n## Running your agent\n\n### Testing in terminal\n\n```shell\npython myagent.py console\n```\n\nRuns your agent in terminal mode, enabling local audio input and output for testing.\nThis mode doesn't require external servers or dependencies and is useful for quickly validating behavior.\n\n### Developing with LiveKit clients\n\n```shell\npython myagent.py dev\n```\n\nStarts the agent server and enables hot reloading when files change. This mode allows each process to host multiple concurrent agents efficiently.\n\nThe agent connects to LiveKit Cloud or your self-hosted server. Set the following environment variables:\n- LIVEKIT_URL\n- LIVEKIT_API_KEY\n- LIVEKIT_API_SECRET\n\nYou can connect using any LiveKit client SDK or telephony integration.\nTo get started quickly, try the [Agents Playground](https://agents-playground.livekit.io/).\n\n### Running for production\n\n```shell\npython myagent.py start\n```\n\nRuns the agent with production-ready optimizations.\n\n## Contributing\n\nThe Agents framework is under active development in a rapidly evolving field. We welcome and appreciate contributions of any kind, be it feedback, bugfixes, features, new plugins and tools, or better documentation. You can file issues under this repo, open a PR, or chat with us in the [LiveKit community](https://docs.livekit.io/intro/community/).\n\n### Development setup\n\nThis project uses [uv](https://docs.astral.sh/uv/) for package management. To install dependencies for development:\n\n```shell\nuv sync --all-extras --dev\n```\n\n### Examples\n\nThis project includes many examples in the [`examples`](examples/) directory. 
To run them, create the file `examples/.env` with credentials for LiveKit Server and any necessary model providers (see `examples/.env.example`), then run:\n\n```shell\nuv run examples/voice_agents/basic_agent.py dev\n```\n\nFor more information, see the [examples README](examples/README.md).\n\n### Tests\n\nUnit tests are in the `tests` directory and can be run with:\n\n```shell\nuv run pytest tests/test_tools.py\n```\n\nIntegration tests for each plugin require various API credentials and run automatically in GitHub CI for PRs submitted by project maintainers. See the [tests workflow](.github/workflows/tests.yml) for details.\n\n### Formatting\n\nThis project uses [ruff](https://github.com/astral-sh/ruff) for formatting and linting:\n\n```shell\nuv run ruff format\nuv run ruff check --fix\n```\n\n### Documentation\n\nTo generate docs locally with [pdoc](https://github.com/pdoc3/pdoc):\n\n```shell\nuv sync --all-extras --group docs\nuv run --active pdoc --skip-errors --html --output-dir=docs livekit\n```\n\n\u003c!--BEGIN_REPO_NAV--\u003e\n\u003cbr/\u003e\u003ctable\u003e\n\u003cthead\u003e\u003ctr\u003e\u003cth colspan=\"2\"\u003eLiveKit Ecosystem\u003c/th\u003e\u003c/tr\u003e\u003c/thead\u003e\n\u003ctbody\u003e\n\u003ctr\u003e\u003ctd\u003eAgents SDKs\u003c/td\u003e\u003ctd\u003e\u003cb\u003ePython\u003c/b\u003e · \u003ca href=\"https://github.com/livekit/agents-js\"\u003eNode.js\u003c/a\u003e\u003c/td\u003e\u003c/tr\u003e\u003ctr\u003e\u003c/tr\u003e\n\u003ctr\u003e\u003ctd\u003eLiveKit SDKs\u003c/td\u003e\u003ctd\u003e\u003ca href=\"https://github.com/livekit/client-sdk-js\"\u003eBrowser\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/client-sdk-swift\"\u003eSwift\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/client-sdk-android\"\u003eAndroid\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/client-sdk-flutter\"\u003eFlutter\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/client-sdk-react-native\"\u003eReact 
Native\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/rust-sdks\"\u003eRust\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/node-sdks\"\u003eNode.js\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/python-sdks\"\u003ePython\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/client-sdk-unity\"\u003eUnity\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/client-sdk-unity-web\"\u003eUnity (WebGL)\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/client-sdk-esp32\"\u003eESP32\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/client-sdk-cpp\"\u003eC++\u003c/a\u003e\u003c/td\u003e\u003c/tr\u003e\u003ctr\u003e\u003c/tr\u003e\n\u003ctr\u003e\u003ctd\u003eStarter Apps\u003c/td\u003e\u003ctd\u003e\u003ca href=\"https://github.com/livekit-examples/agent-starter-python\"\u003ePython Agent\u003c/a\u003e · \u003ca href=\"https://github.com/livekit-examples/agent-starter-node\"\u003eTypeScript Agent\u003c/a\u003e · \u003ca href=\"https://github.com/livekit-examples/agent-starter-react\"\u003eReact App\u003c/a\u003e · \u003ca href=\"https://github.com/livekit-examples/agent-starter-swift\"\u003eSwiftUI App\u003c/a\u003e · \u003ca href=\"https://github.com/livekit-examples/agent-starter-android\"\u003eAndroid App\u003c/a\u003e · \u003ca href=\"https://github.com/livekit-examples/agent-starter-flutter\"\u003eFlutter App\u003c/a\u003e · \u003ca href=\"https://github.com/livekit-examples/agent-starter-react-native\"\u003eReact Native App\u003c/a\u003e · \u003ca href=\"https://github.com/livekit-examples/agent-starter-embed\"\u003eWeb Embed\u003c/a\u003e\u003c/td\u003e\u003c/tr\u003e\u003ctr\u003e\u003c/tr\u003e\n\u003ctr\u003e\u003ctd\u003eUI Components\u003c/td\u003e\u003ctd\u003e\u003ca href=\"https://github.com/livekit/components-js\"\u003eReact\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/components-android\"\u003eAndroid Compose\u003c/a\u003e · \u003ca 
href=\"https://github.com/livekit/components-swift\"\u003eSwiftUI\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/components-flutter\"\u003eFlutter\u003c/a\u003e\u003c/td\u003e\u003c/tr\u003e\u003ctr\u003e\u003c/tr\u003e\n\u003ctr\u003e\u003ctd\u003eServer APIs\u003c/td\u003e\u003ctd\u003e\u003ca href=\"https://github.com/livekit/node-sdks\"\u003eNode.js\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/server-sdk-go\"\u003eGolang\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/server-sdk-ruby\"\u003eRuby\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/server-sdk-kotlin\"\u003eJava/Kotlin\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/python-sdks\"\u003ePython\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/rust-sdks\"\u003eRust\u003c/a\u003e · \u003ca href=\"https://github.com/agence104/livekit-server-sdk-php\"\u003ePHP (community)\u003c/a\u003e · \u003ca href=\"https://github.com/pabloFuente/livekit-server-sdk-dotnet\"\u003e.NET (community)\u003c/a\u003e\u003c/td\u003e\u003c/tr\u003e\u003ctr\u003e\u003c/tr\u003e\n\u003ctr\u003e\u003ctd\u003eResources\u003c/td\u003e\u003ctd\u003e\u003ca href=\"https://docs.livekit.io\"\u003eDocs\u003c/a\u003e · \u003ca href=\"https://docs.livekit.io/mcp\"\u003eDocs MCP Server\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/livekit-cli\"\u003eCLI\u003c/a\u003e · \u003ca href=\"https://cloud.livekit.io\"\u003eLiveKit Cloud\u003c/a\u003e\u003c/td\u003e\u003c/tr\u003e\u003ctr\u003e\u003c/tr\u003e\n\u003ctr\u003e\u003ctd\u003eLiveKit Server OSS\u003c/td\u003e\u003ctd\u003e\u003ca href=\"https://github.com/livekit/livekit\"\u003eLiveKit server\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/egress\"\u003eEgress\u003c/a\u003e · \u003ca href=\"https://github.com/livekit/ingress\"\u003eIngress\u003c/a\u003e · \u003ca 
href=\"https://github.com/livekit/sip\"\u003eSIP\u003c/a\u003e\u003c/td\u003e\u003c/tr\u003e\u003ctr\u003e\u003c/tr\u003e\n\u003ctr\u003e\u003ctd\u003eCommunity\u003c/td\u003e\u003ctd\u003e\u003ca href=\"https://community.livekit.io\"\u003eDeveloper Community\u003c/a\u003e · \u003ca href=\"https://livekit.io/join-slack\"\u003eSlack\u003c/a\u003e · \u003ca href=\"https://x.com/livekit\"\u003eX\u003c/a\u003e · \u003ca href=\"https://www.youtube.com/@livekit_io\"\u003eYouTube\u003c/a\u003e\u003c/td\u003e\u003c/tr\u003e\n\u003c/tbody\u003e\n\u003c/table\u003e\n\u003c!--END_REPO_NAV--\u003e\n","funding_links":[],"categories":["ai","Python","🤖 AI \u0026 Machine Learning","Repos","Apps \u0026 Products","Agentic Framework","Learning","Chatbots","🎙 Voice Agents","Turn Detection \u0026 Endpointing | 话轮检测与端点检测","Agentic Frameworks","📋 Contents","Audio \u0026 Voice Assistants","Agent Categories","Voice and Multimodal Agents"],"sub_categories":["Open-Source Applications","Repositories","Open-Source Voice","Intelligent Turn Detection Models | 智能话轮检测模型","🖥️ 12. User Interfaces \u0026 Self-hosted Platforms","Human-in-the-Loop Agents","\u003ca name=\"Unclassified\"\u003e\u003c/a\u003eUnclassified","Codex Resources"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flivekit%2Fagents","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Flivekit%2Fagents","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flivekit%2Fagents/lists"}