https://github.com/langchain-ai/langgraph-supervisor-py
JSON representation
- Host: GitHub
- URL: https://github.com/langchain-ai/langgraph-supervisor-py
- Owner: langchain-ai
- License: MIT
- Created: 2025-02-05T20:56:08.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-04-02T15:25:06.000Z (about 1 month ago)
- Last Synced: 2025-04-11T19:14:11.187Z (27 days ago)
- Language: Python
- Size: 608 KB
- Stars: 744
- Watchers: 12
- Forks: 111
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project:
- awesome-LangGraph (Official Resources / Pre-built Agents)
README
# 🤖 LangGraph Multi-Agent Supervisor
A Python library for creating hierarchical multi-agent systems using [LangGraph](https://github.com/langchain-ai/langgraph). Hierarchical systems are a type of [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent) architecture where specialized agents are coordinated by a central **supervisor** agent. The supervisor controls all communication flow and task delegation, making decisions about which agent to invoke based on the current context and task requirements.
## Features
- 🤖 **Create a supervisor agent** to orchestrate multiple specialized agents
- 🛠️ **Tool-based agent handoff mechanism** for communication between agents
- 📝 **Flexible message history management** for conversation control

This library is built on top of [LangGraph](https://github.com/langchain-ai/langgraph), a powerful framework for building agent applications, and comes with out-of-the-box support for [streaming](https://langchain-ai.github.io/langgraph/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).
## Installation
```bash
pip install langgraph-supervisor
```

> [!Note]
> LangGraph Supervisor requires Python >= 3.10

## Quickstart
Here's a simple example of a supervisor managing two specialized agents:

```bash
pip install langgraph-supervisor langchain-openai

export OPENAI_API_KEY=
```

```python
from langchain_openai import ChatOpenAI
from langgraph_supervisor import create_supervisor
from langgraph.prebuilt import create_react_agent

model = ChatOpenAI(model="gpt-4o")

# Create specialized agents
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

def web_search(query: str) -> str:
    """Search the web for information."""
    return (
        "Here are the headcounts for each of the FAANG companies in 2024:\n"
        "1. **Facebook (Meta)**: 67,317 employees.\n"
        "2. **Apple**: 164,000 employees.\n"
        "3. **Amazon**: 1,551,000 employees.\n"
        "4. **Netflix**: 14,000 employees.\n"
        "5. **Google (Alphabet)**: 181,269 employees."
    )

math_agent = create_react_agent(
    model=model,
    tools=[add, multiply],
    name="math_expert",
    prompt="You are a math expert. Always use one tool at a time."
)

research_agent = create_react_agent(
    model=model,
    tools=[web_search],
    name="research_expert",
    prompt="You are a world class researcher with access to web search. Do not do any math."
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt=(
        "You are a team supervisor managing a research expert and a math expert. "
        "For current events, use research_agent. "
        "For math problems, use math_agent."
    )
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "what's the combined headcount of the FAANG companies in 2024?"
        }
    ]
})
```

## Message History Management
You can control how agent messages are added to the overall conversation history of the multi-agent system:
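The difference between the two modes can be sketched with a toy helper (illustrative only — this is not the library's implementation): `full_history` merges every message an agent produced back into the shared conversation, while `last_message` merges only its final response.

```python
def select_agent_output(agent_messages: list[dict], output_mode: str) -> list[dict]:
    """Toy model of output_mode: decide which of an agent's messages
    get merged back into the supervisor's conversation history."""
    if output_mode == "full_history":
        return agent_messages       # everything: tool calls, tool results, final answer
    if output_mode == "last_message":
        return agent_messages[-1:]  # only the agent's final response
    raise ValueError(f"unknown output_mode: {output_mode!r}")

# one agent turn: a tool call, its result, and the final answer
turn = [
    {"role": "ai", "content": "calling web_search('FAANG headcount')"},
    {"role": "tool", "content": "headcounts: ..."},
    {"role": "ai", "content": "The combined headcount is roughly 1.98M."},
]

print(len(select_agent_output(turn, "full_history")))  # → 3
print(len(select_agent_output(turn, "last_message")))  # → 1
```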
Include full message history from an agent:

```python
workflow = create_supervisor(
    agents=[agent1, agent2],
    output_mode="full_history"
)
```

Include only the final agent response:

```python
workflow = create_supervisor(
    agents=[agent1, agent2],
    output_mode="last_message"
)
```

## Multi-level Hierarchies
You can create multi-level hierarchical systems by creating a supervisor that manages multiple supervisors.
```python
research_team = create_supervisor(
    [research_agent, math_agent],
    model=model,
    supervisor_name="research_supervisor"
).compile(name="research_team")

writing_team = create_supervisor(
    [writing_agent, publishing_agent],
    model=model,
    supervisor_name="writing_supervisor"
).compile(name="writing_team")

top_level_supervisor = create_supervisor(
    [research_team, writing_team],
    model=model,
    supervisor_name="top_level_supervisor"
).compile(name="top_level_supervisor")
```

## Adding Memory
You can add [short-term](https://langchain-ai.github.io/langgraph/how-tos/persistence/) and [long-term](https://langchain-ai.github.io/langgraph/how-tos/cross-thread-persistence/) [memory](https://langchain-ai.github.io/langgraph/concepts/memory/) to your supervisor multi-agent system. Since `create_supervisor()` returns an instance of `StateGraph` that needs to be compiled before use, you can directly pass a [checkpointer](https://langchain-ai.github.io/langgraph/reference/checkpoints/#langgraph.checkpoint.base.BaseCheckpointSaver) or a [store](https://langchain-ai.github.io/langgraph/reference/store/#langgraph.store.base.BaseStore) instance to the `.compile()` method:
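The mechanics are easiest to see in miniature: a checkpointer keys saved state by the `thread_id` in the run config, so invocations that share a thread continue one conversation while other threads stay isolated. A stdlib-only sketch of that idea (not LangGraph's actual checkpointer API):

```python
class ToyCheckpointer:
    """Toy stand-in for a checkpointer: stores message history per thread."""
    def __init__(self):
        self._threads: dict[str, list] = {}

    def load(self, thread_id: str) -> list:
        return self._threads.get(thread_id, [])

    def save(self, thread_id: str, messages: list) -> None:
        self._threads[thread_id] = list(messages)

def invoke(checkpointer: ToyCheckpointer, config: dict, new_messages: list) -> list:
    """Append new messages to the thread's saved history, as a compiled graph would."""
    thread_id = config["configurable"]["thread_id"]
    history = checkpointer.load(thread_id) + new_messages
    checkpointer.save(thread_id, history)
    return history

cp = ToyCheckpointer()
invoke(cp, {"configurable": {"thread_id": "1"}}, [{"role": "user", "content": "hi"}])
# a second call on the same thread sees the earlier message
result = invoke(cp, {"configurable": {"thread_id": "1"}}, [{"role": "user", "content": "again"}])
print(len(result))  # → 2
```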
```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.store.memory import InMemoryStore

checkpointer = InMemorySaver()
store = InMemoryStore()

model = ...
research_agent = ...
math_agent = ...

workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt="You are a team supervisor managing a research expert and a math expert.",
)

# Compile with checkpointer/store
app = workflow.compile(
    checkpointer=checkpointer,
    store=store
)
```

## How to customize
### Customizing handoff tools
By default, the supervisor uses handoff tools created with the prebuilt `create_handoff_tool`. You can also create your own, custom handoff tools. Here are some ideas on how you can modify the default implementation:
* change tool name and/or description
* add tool call arguments for the LLM to populate, for example a task description for the next agent
* change what data is passed to the subagent as part of the handoff: by default, `create_handoff_tool` passes the **full** message history (all of the messages generated in the supervisor up to this point), as well as a tool message indicating a successful handoff.

Here is an example of how to pass customized handoff tools to `create_supervisor`:
```python
from langgraph_supervisor import create_handoff_tool
workflow = create_supervisor(
    [research_agent, math_agent],
    tools=[
        create_handoff_tool(agent_name="math_expert", name="assign_to_math_expert", description="Assign task to math expert"),
        create_handoff_tool(agent_name="research_expert", name="assign_to_research_expert", description="Assign task to research expert")
    ],
    model=model,
)
```

Here is an example of what a custom handoff tool might look like:
```python
from typing import Annotated

from langchain_core.tools import tool, BaseTool, InjectedToolCallId
from langchain_core.messages import ToolMessage
from langgraph.types import Command
from langgraph.prebuilt import InjectedState

def create_custom_handoff_tool(*, agent_name: str, name: str | None, description: str | None) -> BaseTool:

    @tool(name, description=description)
    def handoff_to_agent(
        # you can add additional tool call arguments for the LLM to populate
        # for example, you can ask the LLM to populate a task description for the next agent
        task_description: Annotated[str, "Detailed description of what the next agent should do, including all of the relevant context."],
        # you can inject the state of the agent that is calling the tool
        state: Annotated[dict, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ):
        tool_message = ToolMessage(
            content=f"Successfully transferred to {agent_name}",
            name=name,
            tool_call_id=tool_call_id,
        )
        messages = state["messages"]
        return Command(
            goto=agent_name,
            graph=Command.PARENT,
            # NOTE: this is a state update that will be applied to the supervisor multi-agent graph (i.e., the PARENT graph)
            update={
                "messages": messages + [tool_message],
                "active_agent": agent_name,
                # optionally pass the task description to the next agent
                # NOTE: individual agents would need to have `task_description` in their state schema
                # and would need to implement logic for how to consume it
                "task_description": task_description,
            },
        )

    return handoff_to_agent
```

## Using Functional API
Here's a simple example of a supervisor managing two specialized agentic workflows created using Functional API:
```bash
pip install langgraph-supervisor langchain-openai

export OPENAI_API_KEY=
```

```python
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor
from langchain_openai import ChatOpenAI
from langgraph.func import entrypoint, task
from langgraph.graph import add_messages

model = ChatOpenAI(model="gpt-4o")

# Create specialized agents

# Functional API - Agent 1 (Joke Generator)
@task
def generate_joke(messages):
    """First LLM call to generate initial joke"""
    system_message = {
        "role": "system",
        "content": "Write a short joke"
    }
    msg = model.invoke(
        [system_message] + messages
    )
    return msg

@entrypoint()
def joke_agent(state):
    joke = generate_joke(state["messages"]).result()
    messages = add_messages(state["messages"], [joke])
    return {"messages": messages}

joke_agent.name = "joke_agent"

# Graph API - Agent 2 (Research Expert)
def web_search(query: str) -> str:
    """Search the web for information."""
    return (
        "Here are the headcounts for each of the FAANG companies in 2024:\n"
        "1. **Facebook (Meta)**: 67,317 employees.\n"
        "2. **Apple**: 164,000 employees.\n"
        "3. **Amazon**: 1,551,000 employees.\n"
        "4. **Netflix**: 14,000 employees.\n"
        "5. **Google (Alphabet)**: 181,269 employees."
    )

research_agent = create_react_agent(
    model=model,
    tools=[web_search],
    name="research_expert",
    prompt="You are a world class researcher with access to web search. Do not do any math."
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, joke_agent],
    model=model,
    prompt=(
        "You are a team supervisor managing a research expert and a joke expert. "
        "For current events, use research_agent. "
        "For any jokes, use joke_agent."
    )
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "Share a joke to relax and start vibe coding for my next project idea."
        }
    ]
})

for m in result["messages"]:
    m.pretty_print()
```