https://github.com/langchain-ai/langgraph-supervisor
- Host: GitHub
- URL: https://github.com/langchain-ai/langgraph-supervisor
- Owner: langchain-ai
- License: MIT
- Created: 2025-02-05T20:56:08.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2025-02-18T16:31:58.000Z (about 2 months ago)
- Last Synced: 2025-02-18T17:25:48.347Z (about 2 months ago)
- Language: Python
- Size: 466 KB
- Stars: 312
- Watchers: 7
- Forks: 28
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-generative-ai-data-scientist - LangGraph Supervisor - A Python library for creating hierarchical multi-agent systems using LangGraph. | [GitHub](https://github.com/langchain-ai/langgraph-supervisor) | (LangGraph Extensions)
README
# 🤖 LangGraph Multi-Agent Supervisor
A Python library for creating hierarchical multi-agent systems using [LangGraph](https://github.com/langchain-ai/langgraph). Hierarchical systems are a type of [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent) architecture where specialized agents are coordinated by a central **supervisor** agent. The supervisor controls all communication flow and task delegation, making decisions about which agent to invoke based on the current context and task requirements.
## Features
- 🤖 **Create a supervisor agent** to orchestrate multiple specialized agents
- 🛠️ **Tool-based agent handoff mechanism** for communication between agents
- 📝 **Flexible message history management** for conversation control

This library is built on top of [LangGraph](https://github.com/langchain-ai/langgraph), a powerful framework for building agent applications, and comes with out-of-the-box support for [streaming](https://langchain-ai.github.io/langgraph/how-tos/#streaming), [short-term and long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).
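The coordination pattern itself can be sketched without any dependencies. The snippet below is a toy, stdlib-only illustration of the supervisor idea — hard-coded routing stands in for the LLM's delegation decision, and the worker functions are hypothetical — not the library's implementation:

```python
# Toy sketch of the supervisor pattern: a central router inspects the task
# and hands off to a specialized worker. In the real library the routing
# decision is made by an LLM via tool calls; here it is hard-coded.

def math_worker(task: str) -> str:
    # Specialist that only handles "add <a> <b>" tasks.
    _, a, b = task.split()
    return str(float(a) + float(b))

def research_worker(task: str) -> str:
    # Specialist that pretends to do research.
    return f"research notes on: {task!r}"

def supervisor(task: str) -> str:
    # Stand-in for the supervisor agent's delegation decision.
    if task.startswith("add "):
        return math_worker(task)
    return research_worker(task)

print(supervisor("add 2 3"))           # 5.0
print(supervisor("FAANG headcounts"))  # research notes on: 'FAANG headcounts'
```

The key property mirrored here is that workers never talk to each other directly — all traffic flows through the supervisor.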
## Installation
```bash
pip install langgraph-supervisor
```

## Quickstart
Here's a simple example of a supervisor managing two specialized agents:

```bash
pip install langgraph-supervisor langchain-openai
export OPENAI_API_KEY=<your_api_key>
```

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

from langgraph_supervisor import create_supervisor

model = ChatOpenAI(model="gpt-4o")
# Create specialized agents

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b


def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b


def web_search(query: str) -> str:
    """Search the web for information."""
    return (
        "Here are the headcounts for each of the FAANG companies in 2024:\n"
        "1. **Facebook (Meta)**: 67,317 employees.\n"
        "2. **Apple**: 164,000 employees.\n"
        "3. **Amazon**: 1,551,000 employees.\n"
        "4. **Netflix**: 14,000 employees.\n"
        "5. **Google (Alphabet)**: 181,269 employees."
    )


math_agent = create_react_agent(
    model=model,
    tools=[add, multiply],
    name="math_expert",
    prompt="You are a math expert. Always use one tool at a time."
)

research_agent = create_react_agent(
    model=model,
    tools=[web_search],
    name="research_expert",
    prompt="You are a world class researcher with access to web search. Do not do any math."
)

# Create supervisor workflow
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt=(
        "You are a team supervisor managing a research expert and a math expert. "
        "For current events, use research_expert. "
        "For math problems, use math_expert."
    )
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [
        {
            "role": "user",
            "content": "what's the combined headcount of the FAANG companies in 2024?"
        }
    ]
})
```

## Message History Management
You can control how agent messages are added to the overall conversation history of the multi-agent system:
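Conceptually, the difference between the two modes looks like this (a stdlib-only toy; the hypothetical `agent_messages` list stands in for whatever the agent actually produced, not the library's internal representation):

```python
# Suppose an agent emitted these messages while handling a subtask:
agent_messages = [
    {"role": "ai", "content": "Calling web_search..."},
    {"role": "tool", "content": "...search results..."},
    {"role": "ai", "content": "Final answer: 1,977,586 employees."},
]

# output_mode="full_history": every agent message joins the shared history.
full_history = agent_messages

# output_mode="last_message": only the final response joins the history.
last_message = agent_messages[-1:]
```

`full_history` preserves the agent's tool calls for debugging and auditability; `last_message` keeps the supervisor's context small.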
Include full message history from an agent:

```python
workflow = create_supervisor(
    agents=[agent1, agent2],
    output_mode="full_history"
)
```

Include only the final agent response:

```python
workflow = create_supervisor(
    agents=[agent1, agent2],
    output_mode="last_message"
)
```

## Multi-level Hierarchies
You can create multi-level hierarchical systems by creating a supervisor that manages multiple supervisors.
```python
research_team = create_supervisor(
    [research_agent, math_agent],
    model=model,
).compile(name="research_team")

writing_team = create_supervisor(
    [writing_agent, publishing_agent],
    model=model,
).compile(name="writing_team")

top_level_supervisor = create_supervisor(
    [research_team, writing_team],
    model=model,
).compile(name="top_level_supervisor")
```

## Adding Memory
You can add [short-term](https://langchain-ai.github.io/langgraph/how-tos/persistence/) and [long-term](https://langchain-ai.github.io/langgraph/how-tos/cross-thread-persistence/) [memory](https://langchain-ai.github.io/langgraph/concepts/memory/) to your supervisor multi-agent system. Since `create_supervisor()` returns an instance of `StateGraph` that needs to be compiled before use, you can directly pass a [checkpointer](https://langchain-ai.github.io/langgraph/reference/checkpoints/#langgraph.checkpoint.base.BaseCheckpointSaver) or a [store](https://langchain-ai.github.io/langgraph/reference/store/#langgraph.store.base.BaseStore) instance to the `.compile()` method:
```python
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.store.memory import InMemoryStore

checkpointer = InMemorySaver()
store = InMemoryStore()

model = ...
research_agent = ...
math_agent = ...

workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt="You are a team supervisor managing a research expert and a math expert.",
)

# Compile with checkpointer/store
app = workflow.compile(
    checkpointer=checkpointer,
    store=store
)
```
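With a checkpointer attached, passing a `thread_id` in the run config scopes short-term memory to a single conversation thread. The sketch below assumes the compiled `app` from above; the `thread_id` value is an arbitrary example:

```python
# A thread_id in the config keys the checkpointer's saved state, so
# successive invocations on the same thread share conversation history.
config = {"configurable": {"thread_id": "conversation-1"}}

# Reuse the same config across turns to continue the conversation
# (requires an app compiled with a checkpointer, as above):
# app.invoke({"messages": [{"role": "user", "content": "hi, I'm Ada"}]}, config)
# app.invoke({"messages": [{"role": "user", "content": "what's my name?"}]}, config)
```

Using a different `thread_id` starts a fresh conversation with no shared short-term memory.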