https://github.com/openai/openai-agents-python
A lightweight, powerful framework for multi-agent workflows
- Host: GitHub
- URL: https://github.com/openai/openai-agents-python
- Owner: openai
- License: MIT
- Created: 2025-03-11T03:42:36.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2025-04-09T11:21:08.000Z (7 days ago)
- Last Synced: 2025-04-09T12:27:36.961Z (7 days ago)
- Topics: agents, ai, framework, llm, openai, python
- Language: Python
- Homepage: https://openai.github.io/openai-agents-python/
- Size: 3.73 MB
- Stars: 8,349
- Watchers: 123
- Forks: 1,013
- Open Issues: 136
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-ChatGPT-repositories - openai-agents-python - A lightweight, powerful framework for multi-agent workflows (NLP)
- Awesome-LLMOps - OpenAI Agents SDK - agent workflows. (Agent / Framework)
- StarryDivineSky - openai/openai-agents-python - agents-python is a lightweight and powerful multi-agent workflow framework. It aims to simplify building complex multi-agent systems, letting developers easily create and coordinate interactions among multiple agents. Its core strengths are flexibility and extensibility, so it adapts to many different application scenarios. With the framework, developers can define agents' roles, goals, and behaviors, and design the communication protocols between them. The project provides rich tooling and examples to help developers get started quickly and build their own multi-agent applications. It supports various agent types, including LLM-based and rule-based agents, and offers strong debugging and monitoring features for diagnosing and optimizing system performance. In short, openai/openai-agents-python gives developers an efficient, easy-to-use platform for building and deploying complex multi-agent systems. (A01_Text Generation_Text Dialogue / large-language dialogue models and data)
- awesome-hacking-lists - openai/openai-agents-python - A lightweight, powerful framework for multi-agent workflows (Python)
- awesome-generative-ai-data-scientist - OpenAI Agents - agent workflows. [GitHub](https://github.com/openai/openai-agents-python) (LLM Providers)
README
# OpenAI Agents SDK
The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows.
## Core concepts
1. [**Agents**](https://openai.github.io/openai-agents-python/agents): LLMs configured with instructions, tools, guardrails, and handoffs
2. [**Handoffs**](https://openai.github.io/openai-agents-python/handoffs/): A specialized tool call used by the Agents SDK for transferring control between agents
3. [**Guardrails**](https://openai.github.io/openai-agents-python/guardrails/): Configurable safety checks for input and output validation
4. [**Tracing**](https://openai.github.io/openai-agents-python/tracing/): Built-in tracking of agent runs, allowing you to view, debug, and optimize your workflows

Explore the [examples](examples) directory to see the SDK in action, and read our [documentation](https://openai.github.io/openai-agents-python/) for more details.
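The guardrail concept above can be illustrated with a small, framework-free sketch. This is not the SDK's actual guardrail API; the names `GuardrailResult`, `max_length_guardrail`, and `run_with_guardrails` are hypothetical stand-ins for the idea of a configurable check that runs before input reaches the model:

```python
# A framework-free sketch of the guardrail idea: a configurable check that
# inspects input before the model sees it, and can block the run.
# NOT the SDK's real API -- see the guardrails docs for that.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GuardrailResult:
    tripwire_triggered: bool  # True means the input should be blocked
    reason: str = ""

def max_length_guardrail(limit: int) -> Callable[[str], GuardrailResult]:
    """Build a guardrail that rejects overly long inputs."""
    def check(text: str) -> GuardrailResult:
        if len(text) > limit:
            return GuardrailResult(True, f"input exceeds {limit} characters")
        return GuardrailResult(False)
    return check

def run_with_guardrails(user_input: str, guardrails) -> str:
    # Run every guardrail; stop before the model call if any tripwire fires.
    for guardrail in guardrails:
        result = guardrail(user_input)
        if result.tripwire_triggered:
            return f"Blocked: {result.reason}"
    return f"Model would now see: {user_input!r}"

print(run_with_guardrails("hi", [max_length_guardrail(100)]))
```

The SDK's real guardrails work similarly in spirit but are attached to agents and can themselves be implemented by agents.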
Notably, our SDK [is compatible](https://openai.github.io/openai-agents-python/models/) with any model provider that supports the OpenAI Chat Completions API format.
## Get started
1. Set up your Python environment
```
python -m venv env
source env/bin/activate
```

2. Install the Agents SDK
```
pip install openai-agents
```

For voice support, install with the optional `voice` group: `pip install 'openai-agents[voice]'`.
## Hello world example
```python
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
```

(_If running this, ensure you set the `OPENAI_API_KEY` environment variable_)
(_For Jupyter notebook users, see [hello_world_jupyter.py](examples/basic/hello_world_jupyter.py)_)
## Handoffs example
```python
from agents import Agent, Runner
import asyncio

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English.",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)


async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)
    # ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?


if __name__ == "__main__":
    asyncio.run(main())
```

## Functions example
```python
import asyncio

from agents import Agent, Runner, function_tool


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."


agent = Agent(
    name="Hello world",
    instructions="You are a helpful agent.",
    tools=[get_weather],
)


async def main():
    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)
    # The weather in Tokyo is sunny.


if __name__ == "__main__":
    asyncio.run(main())
```

## The agent loop
When you call `Runner.run()`, we run a loop until we get a final output.
1. We call the LLM, using the model and settings on the agent, and the message history.
2. The LLM returns a response, which may include tool calls.
3. If the response has a final output (see below for more on this), we return it and end the loop.
4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
5. We process the tool calls (if any) and append the tool response messages. Then we go to step 1.

There is a `max_turns` parameter that you can use to limit the number of times the loop executes.
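The steps above can be sketched in plain Python. This is a simplified, self-contained simulation, not the SDK's internals: `run_loop` and `fake_llm` are illustrative names, and the stubbed model just asks for one tool call before answering. The real loop also handles handoffs, structured outputs, and streaming.

```python
# A toy simulation of the agent loop: call the model, run any requested
# tools, append their results to the history, and repeat until the model
# produces a plain response (the final output) or max_turns is exhausted.
def run_loop(model, tools, message_history, max_turns=10):
    for _ in range(max_turns):
        response = model(message_history)            # step 1: call the model
        if "tool_call" in response:                  # step 5: run tools, loop
            name, arg = response["tool_call"]
            result = tools[name](arg)
            message_history.append({"role": "tool", "content": result})
            continue
        return response["content"]                   # step 3: final output
    raise RuntimeError("max_turns exceeded")

def fake_llm(history):
    # Pretend the model requests the weather tool once, then answers.
    if not any(m["role"] == "tool" for m in history):
        return {"tool_call": ("get_weather", "Tokyo")}
    return {"content": "The weather in Tokyo is sunny."}

tools = {"get_weather": lambda city: f"The weather in {city} is sunny."}
print(run_loop(fake_llm, tools, [{"role": "user", "content": "Weather in Tokyo?"}]))
# The weather in Tokyo is sunny.
```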
### Final output
Final output is the last thing the agent produces in the loop.
1. If you set an `output_type` on the agent, the final output is when the LLM returns something of that type. We use [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) for this.
2. If there's no `output_type` (i.e. plain text responses), then the first LLM response without any tool calls or handoffs is considered the final output.

As a result, the mental model for the agent loop is:
1. If the current agent has an `output_type`, the loop runs until the agent produces structured output matching that type.
2. If the current agent does not have an `output_type`, the loop runs until the current agent produces a message without any tool calls/handoffs.

## Common agent patterns
The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows including deterministic flows, iterative loops, and more. See examples in [`examples/agent_patterns`](examples/agent_patterns).
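For instance, a deterministic flow amounts to running agents in a fixed order and piping each result into the next step. Here is a framework-free sketch; `outline_agent` and `draft_agent` are made-up stand-ins, and with the SDK each step would be an `await Runner.run(...)` call instead:

```python
# A sketch of the deterministic-flow pattern: a fixed pipeline where each
# "agent" consumes the previous agent's output.
def outline_agent(topic: str) -> str:
    return f"Outline for '{topic}': 1. intro 2. body 3. conclusion"

def draft_agent(outline: str) -> str:
    return f"Draft based on [{outline}]"

def deterministic_flow(topic: str) -> str:
    outline = outline_agent(topic)   # step 1: plan
    return draft_agent(outline)      # step 2: write from the plan

print(deterministic_flow("recursion"))
```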
## Tracing
The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents), [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk), [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk), [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration), and [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent). For more details about how to customize or disable tracing, see [Tracing](http://openai.github.io/openai-agents-python/tracing), which also includes a larger list of [external tracing processors](http://openai.github.io/openai-agents-python/tracing/#external-tracing-processors-list).
## Development (only needed if you want to edit the SDK or examples)
0. Ensure you have [`uv`](https://docs.astral.sh/uv/) installed.
```bash
uv --version
```

1. Install dependencies
```bash
make sync
```

2. (After making changes) lint/test
```bash
make tests # run tests
make mypy # run typechecker
make lint # run linter
```

## Acknowledgements
We'd like to acknowledge the excellent work of the open-source community, especially:
- [Pydantic](https://docs.pydantic.dev/latest/) (data validation) and [PydanticAI](https://ai.pydantic.dev/) (advanced agent framework)
- [MkDocs](https://github.com/squidfunk/mkdocs-material)
- [Griffe](https://github.com/mkdocstrings/griffe)
- [uv](https://github.com/astral-sh/uv) and [ruff](https://github.com/astral-sh/ruff)

We're committed to continuing to build the Agents SDK as an open source framework so others in the community can expand on our approach.