https://github.com/langchain-ai/langgraph-swarm-py
- Host: GitHub
- URL: https://github.com/langchain-ai/langgraph-swarm-py
- Owner: langchain-ai
- License: mit
- Created: 2025-02-19T15:50:25.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2025-03-31T17:54:14.000Z (15 days ago)
- Last Synced: 2025-04-01T12:00:08.902Z (14 days ago)
- Language: Python
- Size: 675 KB
- Stars: 650
- Watchers: 7
- Forks: 84
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-LangGraph - langgraph-swarm-py | [langgraph-swarm (JS)](https://github.com/langchain-ai/langgraphjs/tree/main/libs/langgraph-swarm) | (Official Resources / Pre-built Agents)
- StarryDivineSky - langchain-ai/langgraph-swarm-py - LangGraph-Swarm is a Python project that aims to simplify building agent swarms with LangGraph. It offers a high-level API for easily creating, configuring, and managing multiple agents that can work in parallel on complex problems. At its core is the `Swarm` class, which lets users define the number of agents, each agent's role and goals, and how the agents communicate. LangGraph-Swarm is particularly suited to scenarios that require parallel processing, knowledge sharing, and collaborative problem solving, such as document summarization, code generation, and data analysis. It works by using LangGraph's graph structure to coordinate interactions between agents, ensuring effective task allocation and integration of results. Highlights include an easy-to-use API, flexible configuration options, and strong parallel-processing capabilities. With LangGraph-Swarm, developers can quickly build powerful agent swarms, improving the efficiency and quality of problem solving. It supports custom agents and communication protocols to fit a wide range of use cases. (A01 Text Generation / Text Dialogue - LLM dialogue models and data)
- awesome-generative-ai-data-scientist - LangGraph Swarm - Build swarm-style multi-agent systems using LangGraph. Agents dynamically hand off control to one another based on their specializations. | [GitHub](https://github.com/langchain-ai/langgraph-swarm-py) | (LangGraph Extensions)
README
# 🤖 LangGraph Multi-Agent Swarm
A Python library for creating swarm-style multi-agent systems using [LangGraph](https://github.com/langchain-ai/langgraph). A swarm is a type of [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent) architecture where agents dynamically hand off control to one another based on their specializations. The system remembers which agent was last active, ensuring that on subsequent interactions, the conversation resumes with that agent.

## Features
- 🤖 **Multi-agent collaboration** - Enable specialized agents to work together and hand off context to each other
- 🛠️ **Customizable handoff tools** - Built-in tools for communication between agents

This library is built on top of [LangGraph](https://github.com/langchain-ai/langgraph), a powerful framework for building agent applications, and comes with out-of-the-box support for [streaming](https://langchain-ai.github.io/langgraph/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).
## Installation
```bash
pip install langgraph-swarm
```

## Quickstart
```bash
pip install langgraph-swarm langchain-openai

export OPENAI_API_KEY=
```

```python
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import create_react_agent
from langgraph_swarm import create_handoff_tool, create_swarm

model = ChatOpenAI(model="gpt-4o")

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

alice = create_react_agent(
    model,
    [add, create_handoff_tool(agent_name="Bob")],
    prompt="You are Alice, an addition expert.",
    name="Alice",
)

bob = create_react_agent(
    model,
    [create_handoff_tool(agent_name="Alice", description="Transfer to Alice, she can help with math")],
    prompt="You are Bob, you speak like a pirate.",
    name="Bob",
)

checkpointer = InMemorySaver()
workflow = create_swarm(
    [alice, bob],
    default_active_agent="Alice"
)
app = workflow.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": "1"}}
turn_1 = app.invoke(
    {"messages": [{"role": "user", "content": "i'd like to speak to Bob"}]},
    config,
)
print(turn_1)
turn_2 = app.invoke(
    {"messages": [{"role": "user", "content": "what's 5 + 7?"}]},
    config,
)
print(turn_2)
```

## Memory
You can add [short-term](https://langchain-ai.github.io/langgraph/how-tos/persistence/) and [long-term](https://langchain-ai.github.io/langgraph/how-tos/cross-thread-persistence/) [memory](https://langchain-ai.github.io/langgraph/concepts/memory/) to your swarm multi-agent system. Since `create_swarm()` returns an instance of `StateGraph` that needs to be compiled before use, you can directly pass a [checkpointer](https://langchain-ai.github.io/langgraph/reference/checkpoints/#langgraph.checkpoint.base.BaseCheckpointSaver) or a [store](https://langchain-ai.github.io/langgraph/reference/store/#langgraph.store.base.BaseStore) instance to the `.compile()` method:
```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.store.memory import InMemoryStore

# short-term memory
checkpointer = InMemorySaver()
# long-term memory
store = InMemoryStore()

model = ...
alice = ...
bob = ...

workflow = create_swarm(
    [alice, bob],
    default_active_agent="Alice"
)

# Compile with checkpointer/store
app = workflow.compile(
    checkpointer=checkpointer,
    store=store
)
```

> [!IMPORTANT]
> Adding [short-term memory](https://langchain-ai.github.io/langgraph/concepts/persistence/) is crucial for maintaining conversation state across multiple interactions. Without it, the swarm would "forget" which agent was last active and lose the conversation history. Make sure to always compile the swarm with a checkpointer if you plan to use it in multi-turn conversations; e.g., `workflow.compile(checkpointer=checkpointer)`.

## How to customize
You can customize the multi-agent swarm by changing either the [handoff tools](#customizing-handoff-tools) implementation or the [agent implementation](#customizing-agent-implementation).
### Customizing handoff tools
By default, the agents in the swarm are assumed to use handoff tools created with the prebuilt `create_handoff_tool`. You can also create your own, custom handoff tools. Here are some ideas on how you can modify the default implementation:
* change tool name and/or description
* add tool call arguments for the LLM to populate, for example a task description for the next agent
* change what data is passed to the next agent as part of the handoff: by default `create_handoff_tool` passes the **full** message history (all of the messages generated in the swarm up to this point), as well as a tool message indicating successful handoff.

Here is an example of what a custom handoff tool might look like:
```python
from typing import Annotated

from langchain_core.tools import tool, BaseTool, InjectedToolCallId
from langchain_core.messages import ToolMessage
from langgraph.types import Command
from langgraph.prebuilt import InjectedState

def create_custom_handoff_tool(*, agent_name: str, name: str | None, description: str | None) -> BaseTool:

    @tool(name, description=description)
    def handoff_to_agent(
        # you can add additional tool call arguments for the LLM to populate
        # for example, you can ask the LLM to populate a task description for the next agent
        task_description: Annotated[str, "Detailed description of what the next agent should do, including all of the relevant context."],
        # you can inject the state of the agent that is calling the tool
        state: Annotated[dict, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ):
        tool_message = ToolMessage(
            content=f"Successfully transferred to {agent_name}",
            name=name,
            tool_call_id=tool_call_id,
        )
        # you can use a different messages state key here, if your agent uses a different schema
        # e.g., "alice_messages" instead of "messages"
        messages = state["messages"]
        return Command(
            goto=agent_name,
            graph=Command.PARENT,
            # NOTE: this is a state update that will be applied to the swarm multi-agent graph (i.e., the PARENT graph)
            update={
                "messages": messages + [tool_message],
                "active_agent": agent_name,
                # optionally pass the task description to the next agent
                "task_description": task_description,
            },
        )

    return handoff_to_agent
```

> [!IMPORTANT]
> If you are implementing custom handoff tools that return `Command`, you need to ensure that:
> (1) your agent has a tool-calling node that can handle tools returning `Command` (like LangGraph's prebuilt [`ToolNode`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.ToolNode))
> (2) both the swarm graph and the next agent graph have the [state schema](https://langchain-ai.github.io/langgraph/concepts/low_level#schema) containing the keys you want to update in `Command.update`

### Customizing agent implementation
By default, individual agents are expected to communicate over a single `messages` key that is shared by all agents and the overall multi-agent swarm graph. This means that messages from **all** of the agents will be combined into a single, shared list of messages. This might not be desirable if you don't want to expose an agent's internal history of messages. To change this, you can customize the agent by taking the following steps:
1. use custom [state schema](https://langchain-ai.github.io/langgraph/concepts/low_level#schema) with a different key for messages, for example `alice_messages`
1. write a wrapper that converts the parent graph state to the child agent state and back (see this [how-to](https://langchain-ai.github.io/langgraph/how-tos/subgraph-transform-state/) guide)

```python
from typing_extensions import TypedDict, Annotated

from langchain_core.messages import AnyMessage
from langgraph.graph import StateGraph, add_messages
from langgraph_swarm import SwarmState

class AliceState(TypedDict):
    alice_messages: Annotated[list[AnyMessage], add_messages]

# see this guide to learn how you can implement a custom tool-calling agent
# https://langchain-ai.github.io/langgraph/how-tos/react-agent-from-scratch/
alice = (
    StateGraph(AliceState)
    .add_node("model", ...)
    .add_node("tools", ...)
    .add_edge(...)
    ...
    .compile()
)

# wrapper calling the agent
def call_alice(state: SwarmState):
    # you can put any input transformation from parent state -> agent state
    # for example, you can invoke "alice" with "task_description" populated by the LLM
    response = alice.invoke({"alice_messages": state["messages"]})
    # you can put any output transformation from agent state -> parent state
    return {"messages": response["alice_messages"]}

def call_bob(state: SwarmState):
    ...
```

Then, you can create the swarm manually in the following way:
```python
from langgraph_swarm import add_active_agent_router

workflow = (
    StateGraph(SwarmState)
    .add_node("Alice", call_alice, destinations=("Bob",))
    .add_node("Bob", call_bob, destinations=("Alice",))
)
# this is the router that enables us to keep track of the last active agent
workflow = add_active_agent_router(
    builder=workflow,
    route_to=["Alice", "Bob"],
    default_active_agent="Alice",
)

# compile the workflow
app = workflow.compile()
```
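The active-agent router wired in above is what lets the swarm resume with the last active agent. Conceptually, it reads the `active_agent` key from shared state and falls back to the default; a minimal plain-Python sketch of that idea (illustrative only; `route_active_agent` is a hypothetical name, not the library's implementation):

```python
# Conceptual sketch of active-agent routing: pick the destination node based on
# the "active_agent" key in shared state, defaulting on the first turn.

def route_active_agent(state: dict, route_to: list[str], default_active_agent: str) -> str:
    """Return the name of the agent node that should handle this turn."""
    agent = state.get("active_agent") or default_active_agent
    if agent not in route_to:
        raise ValueError(f"unknown agent: {agent!r}")
    return agent

# first turn: nothing recorded yet, so the default agent handles it
print(route_active_agent({}, ["Alice", "Bob"], "Alice"))
# after a handoff recorded "Bob", later turns resume with Bob
print(route_active_agent({"active_agent": "Bob"}, ["Alice", "Bob"], "Alice"))
```

This also explains why a checkpointer matters for multi-turn use: without persisted state, the `active_agent` key is lost between invocations and the router always falls back to the default agent.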