<p align="center">
<a href="https://pypi.org/project/fast-agent-mcp/"><img src="https://img.shields.io/pypi/v/fast-agent-mcp?color=%2334D058&label=pypi" /></a>
<a href="#"><img src="https://github.com/evalstate/fast-agent/actions/workflows/main-checks.yml/badge.svg" /></a>
<a href="https://github.com/evalstate/fast-agent/issues"><img src="https://img.shields.io/github/issues-raw/evalstate/fast-agent" /></a>
<a href="https://discord.gg/xg5cJ7ndN6"><img src="https://img.shields.io/discord/1358470293990936787" alt="discord" /></a>
<img alt="Pepy Total Downloads" src="https://img.shields.io/pepy/dt/fast-agent-mcp?label=pypi%20%7C%20downloads"/>
<a href="https://github.com/evalstate/fast-agent-mcp/blob/main/LICENSE"><img src="https://img.shields.io/pypi/l/fast-agent-mcp" /></a>
</p>

## Start Here

> [!TIP]
> Please see https://fast-agent.ai for the latest documentation.

**`fast-agent`** is a flexible way to interact with LLMs, excellent for use as a Coding Agent, Development Toolkit, Evaluation or Workflow platform.

To start an interactive session with shell support, install [uv](https://astral.sh/uv) and run

```bash
uvx fast-agent-mcp@latest -x
```

To start coding with Hugging Face inference providers or use your OpenAI Codex plan:

```bash
# Code with Hugging Face Inference Providers
uvx fast-agent-mcp@latest --pack hf-dev

# Code with Codex (agents optimized for OpenAI)
uvx fast-agent-mcp@latest --pack codex
```

Enter a shell with `!`, or run shell commands directly, e.g. `! cd web && npm run build`.

Manage skills with the `/skills` command, and connect to MCP Servers with `/connect`. The default **`fast-agent`** registry contains skills that let you set up LSP, Agent and Tool Hooks, Compaction strategies, Automation and more.

```bash
# /connect supports stdio or streamable http (with OAuth)

# Start a STDIO server
/connect @modelcontextprotocol/server-everything

# Connect to a Streamable HTTP Server
/connect https://huggingface.co/mcp
```

It's recommended to install **`fast-agent`** to set up the shell aliases and other tooling.

```bash
# Install fast-agent
uv tool install -U fast-agent-mcp

# Run fast-agent with opus, shell support and subagent/smart mode
fast-agent --model opus -x --smart
```

Use local models with the generic provider, or automatically create the correct configuration for `llama.cpp`:

```bash
fast-agent model llamacpp
```

Any **`fast-agent`** setup or program can be used with any ACP client - the simplest way is to use `fast-agent-acp`:

```bash
# Run fast-agent inside Toad
toad acp "fast-agent-acp -x --model sonnet"
```

**`fast-agent`** enables you to create and interact with sophisticated multimodal Agents and Workflows in minutes. It is the first framework with complete, end-to-end tested MCP Feature support including Sampling and Elicitations.

`fast-agent` is CLI-first, with an optional prompt_toolkit-powered interactive terminal prompt (TUI-style input, completions and in-terminal menus); responses can stream live to the terminal via rich, without relying on full-screen curses UIs or external GUI overlays.

<!-- ![multi_model_trim](https://github.com/user-attachments/assets/c8bf7474-2c41-4ef3-8924-06e29907d7c6) -->

The simple declarative syntax lets you concentrate on composing your Prompts and MCP Servers to [build effective agents](https://www.anthropic.com/research/building-effective-agents).

Model support is comprehensive, with native support for Anthropic, OpenAI and Google providers as well as Azure, Ollama, Deepseek and dozens of others via TensorZero. Structured Outputs, PDF and Vision support are simple to use and well tested. Passthrough and Playback LLMs enable rapid development and testing of the Python glue-code in your applications.
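For example, here is a minimal sketch of cost-free testing, assuming an `agent.py` like the examples below; the internal `passthrough` model echoes messages back instead of calling a provider:

```bash
# exercise agent wiring and glue-code without an API key or LLM calls
uv run agent.py --model passthrough
```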
Recent features include:

- Agent Skills (SKILL.md)
- MCP-UI Support
- OpenAI Apps SDK (Skybridge)
- Shell Mode
- Advanced MCP Transport Diagnostics
- MCP Elicitations

<img width="800" alt="MCP Transport Diagnostics" src="https://github.com/user-attachments/assets/e26472de-58d9-4726-8bdd-01eb407414cf" />

`fast-agent` is the only tool that allows you to inspect Streamable HTTP Transport usage - a critical feature for ensuring reliable, compliant deployments. OAuth is supported with KeyRing storage for secrets. Use the `fast-agent auth` command to manage credentials.

> [!IMPORTANT]
>
> Documentation is included as a submodule. When cloning, use `--recurse-submodules` to get everything:
>
> ```bash
> git clone --recurse-submodules https://github.com/evalstate/fast-agent.git
> ```
>
> Or if you've already cloned:
>
> ```bash
> git submodule update --init --recursive
> ```
>
> The documentation source is also available at: https://github.com/evalstate/fast-agent-docs

### Agent Application Development

Prompts and configurations that define your Agent Applications are stored in simple files with minimal boilerplate, enabling straightforward management and version control.

Chat with individual Agents and Components before, during and after workflow execution to tune and diagnose your application. Agents can request human input to get additional context for task completion.

Simple model selection makes testing Model <-> MCP Server interaction painless. You can read more about the motivation behind this project [here](https://llmindset.co.uk/resources/fast-agent/).

![2025-03-23-fast-agent](https://github.com/user-attachments/assets/8f6dbb69-43e3-4633-8e12-5572e9614728)

## Get started:

Start by installing the [uv package manager](https://docs.astral.sh/uv/) for Python. Then:

```bash
uv pip install fast-agent-mcp          # install fast-agent!
fast-agent go                          # start an interactive session
fast-agent go --url https://hf.co/mcp  # with a remote MCP
fast-agent go --model=generic.qwen2.5  # use ollama qwen 2.5
fast-agent go --pack analyst --model haiku  # install/reuse a card pack and launch it
fast-agent scaffold                    # create an example agent and config files
uv run agent.py                        # run your first agent
uv run agent.py --model='o3-mini?reasoning=low'    # specify a model
uv run agent.py --transport http --port 8001  # expose as MCP server (server mode implied)
fast-agent quickstart workflow  # create "building effective agents" examples
```

`--server` remains available for backward compatibility but is deprecated; `--transport` now automatically switches an agent into server mode.

For packaged starter agents, use `fast-agent go --pack <name> --model <model>`. This installs the pack into the selected fast-agent environment if needed, then starts `go` normally. `--model` is a fallback for cards without an explicit model setting; a model declared directly in an AgentCard still wins.

Other quickstart examples include a Researcher Agent (with Evaluator-Optimizer workflow) and a Data Analysis Agent (similar to the ChatGPT experience), demonstrating MCP Roots support.

> [!TIP]
> Windows Users - a couple of configuration changes are needed for the Filesystem and Docker MCP Servers; the necessary changes are detailed within the configuration files.
### Basic Agents

Defining an agent is as simple as:

```python
@fast.agent(
  instruction="Given an object, respond only with an estimate of its size."
)
```

We can then send messages to the Agent:

```python
async with fast.run() as agent:
  moon_size = await agent("the moon")
  print(moon_size)
```

Or start an interactive chat with the Agent:

```python
async with fast.run() as agent:
  await agent.interactive()
```

Here is the complete `sizer.py` Agent application, with boilerplate code:

```python
import asyncio
from fast_agent import FastAgent

# Create the application
fast = FastAgent("Agent Example")

@fast.agent(
  instruction="Given an object, respond only with an estimate of its size."
)
async def main():
  async with fast.run() as agent:
    await agent.interactive()

if __name__ == "__main__":
    asyncio.run(main())
```

The Agent can then be run with `uv run sizer.py`.

Specify a model with the `--model` switch - for example `uv run sizer.py --model sonnet`.

Model strings also accept query overrides. For example:

- `uv run sizer.py --model "gpt-5?reasoning=low"`
- `uv run sizer.py --model "claude-sonnet-4-6?web_search=on"`
- `uv run sizer.py --model "claude-sonnet-4-5?context=1m"`

For Anthropic models, `?context=1m` is only needed for earlier Sonnet 4 / Sonnet 4.5 models that still require the explicit 1M context opt-in. Claude Sonnet 4.6 and Claude Opus 4.6 already use their long context window by default, so `?context=1m` is accepted for backward compatibility but is unnecessary there.

### Combining Agents and using MCP Servers

_To generate examples use `fast-agent quickstart workflow`. This example can be run with `uv run workflow/chaining.py`. fast-agent looks for configuration files in the current directory before checking parent directories recursively._

Agents can be chained to build a workflow, using MCP Servers defined in the `fastagent.config.yaml` file:

```python
@fast.agent(
    "url_fetcher",
    "Given a URL, provide a complete and comprehensive summary",
    servers=["fetch"], # Name of an MCP Server defined in fastagent.config.yaml
)
@fast.agent(
    "social_media",
    """
    Write a 280 character social media post for any given text.
    Respond only with the post, never use hashtags.
    """,
)
@fast.chain(
    name="post_writer",
    sequence=["url_fetcher", "social_media"],
)
async def main():
    async with fast.run() as agent:
        # using chain workflow
        await agent.post_writer("http://llmindset.co.uk")
```

All Agents and Workflows respond to `.send("message")` or `.prompt()` to begin a chat session.

Saved as `social.py`, we can now run this workflow from the command line with:

```bash
uv run social.py --agent post_writer --message "<url>"
```

Add the `--quiet` switch to disable progress and message display and return only the final response - useful for simple automations.
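For instance, a minimal automation sketch (the URL here is a placeholder):

```bash
# capture only the final response in a shell variable
POST=$(uv run social.py --agent post_writer --message "https://example.com" --quiet)
echo "$POST"
```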
### MAKER

MAKER ("Massively decomposed Agentic processes with K-voting Error Reduction") wraps a worker agent and samples it repeatedly until a response achieves a k-vote margin over all alternatives ("first-to-ahead-by-k" voting). This is useful for long chains of simple steps where rare errors would otherwise compound.

- Reference: [Solving a Million-Step LLM Task with Zero Errors](https://arxiv.org/abs/2511.09030)
- Credit: Lucid Programmer (PR author)

```python
@fast.agent(
  name="classifier",
  instruction="Reply with only: A, B, or C.",
)
@fast.maker(
  name="reliable_classifier",
  worker="classifier",
  k=3,
  max_samples=25,
  match_strategy="normalized",
  red_flag_max_length=16,
)
async def main():
  async with fast.run() as agent:
    await agent.reliable_classifier.send("Classify: ...")
```

### Agents As Tools

The Agents As Tools workflow takes a complex task, breaks it into subtasks, and calls other agents as tools based on the main agent instruction.

This pattern is inspired by the OpenAI Agents SDK [Agents as tools](https://openai.github.io/openai-agents-python/tools/#agents-as-tools) feature.

With child agents exposed as tools, you can implement routing, parallelization, and orchestrator-workers [decomposition](https://www.anthropic.com/engineering/building-effective-agents) directly in the instruction (and combine them). Multiple tool calls per turn are supported and executed in parallel.

Common usage patterns may combine:

- Routing: choose the right specialist tool(s) based on the user prompt.
- Parallelization: fan out over independent items/projects, then aggregate.
- Orchestrator-workers: break a task into scoped subtasks (often via a simple JSON plan), then coordinate execution.

```python
@fast.agent(
    name="NY-Project-Manager",
    instruction="Return NY time + timezone, plus a one-line project status.",
    servers=["time"],
)
@fast.agent(
    name="London-Project-Manager",
    instruction="Return London time + timezone, plus a one-line news update.",
    servers=["time"],
)
@fast.agent(
    name="PMO-orchestrator",
    instruction=(
        "Get reports. Always use one tool call per project/news. "  # parallelization
        "Responsibilities: NY projects: [OpenAI, Fast-Agent, Anthropic]. London news: [Economics, Art, Culture]. "  # routing
        "Aggregate results and add a one-line PMO summary."
    ),
    default=True,
    agents=["NY-Project-Manager", "London-Project-Manager"],  # orchestrator-workers
)
async def main() -> None:
    async with fast.run() as agent:
        await agent("Get PMO report. Projects: all. News: Art, Culture")
```

An extended example with all parameters is available in the repository as [`examples/workflows/agents_as_tools_extended.py`](examples/workflows/agents_as_tools_extended.py).

## MCP OAuth (v2.1)

For SSE and HTTP MCP servers, OAuth is enabled by default with minimal configuration. A local callback server is used to capture the authorization code, with a paste-URL fallback if the port is unavailable.

- Minimal per-server settings in `fastagent.config.yaml`:

```yaml
mcp:
  servers:
    myserver:
      transport: http # or sse
      url: http://localhost:8001/mcp # or /sse for SSE servers
      auth:
        oauth: true # default: true
        redirect_port: 3030 # default: 3030
        redirect_path: /callback # default: /callback
        # scope: "user"       # optional; if omitted, server defaults are used
```

- The OAuth client uses PKCE; tokens are never written to disk as plain files.
- Token persistence: by default, tokens are stored securely in your OS keychain via `keyring`. If a keychain is unavailable (e.g. a headless container), in-memory storage is used for the session.
- To force in-memory storage only for a server, set:

```yaml
mcp:
  servers:
    myserver:
      transport: http
      url: http://localhost:8001/mcp
      auth:
        oauth: true
        persist: memory
```

- To disable OAuth for a specific server, set `auth.oauth: false` for that server.
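For example, reusing the hypothetical `myserver` entry from above:

```yaml
mcp:
  servers:
    myserver:
      transport: http
      url: http://localhost:8001/mcp
      auth:
        oauth: false # skip the OAuth flow for this server
```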
## MCP Ping (optional)

The MCP ping utility can be enabled by either peer (client or server). See the [Ping overview](https://modelcontextprotocol.io/specification/2025-11-25/basic/utilities/ping#overview).

Client-side pinging is configured per server (default: 30s interval, 3 missed pings):

```yaml
mcp:
  servers:
    myserver:
      ping_interval_seconds: 30 # optional; <=0 disables
      max_missed_pings: 3 # optional; consecutive timeouts before marking failed
```

## Workflows

### Chain

The `chain` workflow offers a more declarative approach to calling Agents in sequence:

```python
@fast.chain(
  "post_writer",
   sequence=["url_fetcher","social_media"]
)

# we can then prompt it directly:
async with fast.run() as agent:
  await agent.post_writer()
```

This starts an interactive session, which produces a short social media post for a given URL. If a _chain_ is prompted, it returns to a chat with the last Agent in the chain. You can switch the agent to prompt by typing `@agent-name`.

Chains can be incorporated in other workflows, or contain other workflow elements (including other Chains). You can set an `instruction` to precisely describe its capabilities to other workflow steps if needed.

### Human Input

Agents can request Human Input to assist with a task or get additional context:

```python
@fast.agent(
    instruction="An AI agent that assists with basic tasks. Request Human Input when needed.",
    human_input=True,
)

await agent("print the next number in the sequence")
```

In the example `human_input.py`, the Agent will prompt the User for additional information to complete the task.

### Parallel

The Parallel Workflow sends the same message to multiple Agents simultaneously (`fan-out`), then uses the `fan-in` Agent to process the combined content.

```python
@fast.agent("translate_fr", "Translate the text to French")
@fast.agent("translate_de", "Translate the text to German")
@fast.agent("translate_es", "Translate the text to Spanish")

@fast.parallel(
  name="translate",
  fan_out=["translate_fr","translate_de","translate_es"]
)

@fast.chain(
  "post_writer",
   sequence=["url_fetcher","social_media","translate"]
)
```

If you don't specify a `fan-in` agent, the `parallel` returns the combined Agent results verbatim.

`parallel` is also useful for ensembling ideas from different LLMs.

When using `parallel` in other workflows, specify an `instruction` to describe its operation.

### Evaluator-Optimizer

Evaluator-Optimizers combine 2 agents: one to generate content (the `generator`), and the other to judge that content and provide actionable feedback (the `evaluator`). Messages are sent to the generator first, then the pair run in a loop until either the evaluator is satisfied with the quality, or the maximum number of refinements is reached. The final result from the Generator is returned.

If the Generator has `use_history` off, the previous iteration is returned when asking for improvements - otherwise conversational context is used.

```python
@fast.evaluator_optimizer(
  name="researcher",
  generator="web_searcher",
  evaluator="quality_assurance",
  min_rating="EXCELLENT",
  max_refinements=3
)

async with fast.run() as agent:
  await agent.researcher.send("produce a report on how to make the perfect espresso")
```

When used in a workflow, it returns the last `generator` message as the result.

See the `evaluator.py` workflow example, or `fast-agent quickstart researcher` for a more complete example.

### Router

Routers use an LLM to assess a message and route it to the most appropriate Agent. The routing prompt is automatically generated based on the Agent instructions and available Servers.

```python
@fast.router(
  name="route",
  agents=["agent1","agent2","agent3"]
)
```

Look at the `router.py` workflow for an example.

### Orchestrator

Given a complex task, the Orchestrator uses an LLM to generate a plan to divide the task amongst the available Agents. The planning and aggregation prompts are generated by the Orchestrator, which benefits from using more capable models. Plans can either be built once at the beginning (`plan_type="full"`) or iteratively (`plan_type="iterative"`).

```python
@fast.orchestrator(
  name="orchestrate",
  agents=["task1","task2","task3"]
)
```

See the `orchestrator.py` or `agent_build.py` workflow example.
## Agent Features

### Calling Agents

All definitions allow omitting the name and instruction arguments for brevity:

```python
@fast.agent("You are a helpful agent")          # Create an agent with a default name.
@fast.agent("greeter","Respond cheerfully!")    # Create an agent with the name "greeter"

moon_size = await agent("the moon")             # Call the default (first defined) agent with a message

result = await agent.greeter("Good morning!")   # Send a message to an agent by name using dot notation
result = await agent.greeter.send("Hello!")     # You can call 'send' explicitly

await agent.greeter()                           # If no message is specified, a chat session will open
await agent.greeter.prompt()                    # that can be made more explicit
await agent.greeter.prompt(default_prompt="OK") # and supports setting a default prompt

agent["greeter"].send("Good Evening!")          # Dictionary access is supported if preferred
```

### Defining Agents

#### Basic Agent

```python
@fast.agent(
  name="agent",                          # name of the agent
  instruction="You are a helpful Agent", # base instruction for the agent
  servers=["filesystem"],                # list of MCP Servers for the agent
  model="o3-mini?reasoning=high",        # specify a model for the agent
  use_history=True,                      # agent maintains chat history
  request_params=RequestParams(temperature=0.7), # additional RequestParams for the LLM
  human_input=True,                      # agent can request human input
)
```

#### Chain

```python
@fast.chain(
  name="chain",                          # name of the chain
  sequence=["agent1", "agent2", ...],    # list of agents in execution order
  instruction="instruction",             # instruction to describe the chain for other workflows
  cumulative=False,                      # whether to accumulate messages through the chain
  continue_with_final=True,              # open chat with agent at end of chain after prompting
)
```

#### Parallel

```python
@fast.parallel(
  name="parallel",                       # name of the parallel workflow
  fan_out=["agent1", "agent2"],          # list of agents to run in parallel
  fan_in="aggregator",                   # name of agent that combines results (optional)
  instruction="instruction",             # instruction to describe the parallel for other workflows
  include_request=True,                  # include original request in fan-in message
)
```

#### Evaluator-Optimizer

```python
@fast.evaluator_optimizer(
  name="researcher",                     # name of the workflow
  generator="web_searcher",              # name of the content generator agent
  evaluator="quality_assurance",         # name of the evaluator agent
  min_rating="GOOD",                     # minimum acceptable quality (EXCELLENT, GOOD, FAIR, POOR)
  max_refinements=3,                     # maximum number of refinement iterations
)
```

#### Router

```python
@fast.router(
  name="route",                          # name of the router
  agents=["agent1", "agent2", "agent3"], # list of agent names router can delegate to
  model="o3-mini?reasoning=high",        # specify routing model
  use_history=False,                     # whether router maintains conversation history
  human_input=False,                     # whether router can request human input
)
```

#### Orchestrator

```python
@fast.orchestrator(
  name="orchestrator",                   # name of the orchestrator
  instruction="instruction",             # base instruction for the orchestrator
  agents=["agent1", "agent2"],           # list of agent names this orchestrator can use
  model="o3-mini?reasoning=high",        # specify orchestrator planning model
  use_history=False,                     # no effect - the orchestrator does not maintain chat history
  human_input=False,                     # whether orchestrator can request human input
  plan_type="full",                      # planning approach: "full" or "iterative"
  plan_iterations=5,                     # maximum number of full plan attempts, or iterations
)
```

#### MAKER

```python
@fast.maker(
  name="maker",                           # name of the workflow
  worker="worker_agent",                  # worker agent name
  k=3,                                    # voting margin (first-to-ahead-by-k)
  max_samples=50,                         # maximum number of samples
  match_strategy="exact",                 # exact|normalized|structured
  red_flag_max_length=256,                # flag unusually long outputs
  instruction="instruction",              # optional instruction override
)
```

#### Agents As Tools

```python
@fast.agent(
  name="orchestrator",                    # orchestrator agent name
  instruction="instruction",              # orchestrator instruction (routing/decomposition/aggregation)
  agents=["agent1", "agent2"],            # exposed as tools: agent__agent1, agent__agent2
  max_parallel=128,                       # cap parallel child tool calls (OpenAI limit is 128)
  child_timeout_sec=600,                  # per-child timeout (seconds)
  max_display_instances=20,               # collapse progress display after top-N instances
)
```

### Function Tools

Register Python functions as tools directly in code — no MCP server or external file needed. Both sync and async functions are supported. The function name and docstring are used as the tool name and description by default, or you can override them with `name=` and `description=`.

**Per-agent tools (`@agent.tool`)** — scope a tool to a specific agent:

```python
@fast.agent(name="writer", instruction="You write things.")
async def writer(): ...

@writer.tool
def translate(text: str, language: str) -> str:
    """Translate text to the given language."""
    return f"[{language}] {text}"

@writer.tool(name="summarize", description="Produce a one-line summary")
def summarize(text: str) -> str:
    return f"Summary: {text[:80]}..."
```

**Global tools (`@fast.tool`)** — available to all agents that don't declare their own tools:

```python
@fast.tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"

@fast.agent(name="assistant", instruction="You are helpful.")
# assistant gets get_weather (global @fast.tool)
```

Agents with `@agent.tool` or `function_tools=` only see their own tools — globals are not injected. Use `function_tools=[]` to explicitly opt out of globals with no tools.
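For example, a minimal sketch (the agent name and instruction here are illustrative):

```python
@fast.agent(
    name="no_tools",            # illustrative name
    instruction="Answer directly without calling tools.",
    function_tools=[],          # opt out of globals; this agent sees no function tools
)
```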
### Multimodal Support

Add Resources to prompts using either the inbuilt `prompt-server` or MCP Types directly. Convenience classes make this simple, for example:

```python
summary: str = await agent.with_resource(
    "Summarise this PDF please",
    "mcp_server",
    "resource://fast-agent/sample.pdf",
)
```

#### MCP Tool Result Conversion

LLM APIs restrict the content types that can be returned as Tool Call/Function results via their Chat Completions APIs:

- OpenAI supports Text
- Anthropic supports Text and Image
- Google supports Text, Image, PDF, and Video (e.g., `video/mp4`).
  > **Note**: Inline video data is limited to 20MB. For larger files, use the File API. YouTube URLs are supported directly.

For MCP Tool Results, `ImageResources` and `EmbeddedResources` are converted to User Messages and added to the conversation.

### Prompts

MCP Prompts are supported with `apply_prompt(name, arguments)`, which always returns an Assistant Message. If the last message from the MCP Server is a 'User' message, it is sent to the LLM for processing. Prompts applied to the Agent's Context are retained - meaning that with `use_history=False`, Agents can act as finely tuned responders.
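For example, a minimal sketch - the prompt name and arguments here are hypothetical, and assume a connected MCP Server that exposes them:

```python
async with fast.run() as agent:
    # applies the server prompt and returns the resulting Assistant Message
    response = await agent.apply_prompt("topic_summary", {"topic": "espresso"})
    print(response)
```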
Prompts can also be applied from the interactive interface using the `/prompt` command.

### Sampling

Sampling LLMs are configured per Client/Server pair. Specify the model name in `fastagent.config.yaml` as follows:

```yaml
mcp:
  servers:
    sampling_resource:
      command: "uv"
      args: ["run", "sampling_resource_server.py"]
      sampling:
        model: "haiku"
```

### Secrets File

> [!TIP]
> fast-agent will look recursively for a `fastagent.secrets.yaml` file, so you only need to manage this at the root folder of your agent definitions.
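A minimal sketch of the file, assuming a per-provider `api_key` layout - check https://fast-agent.ai for the exact schema and supported providers:

```yaml
# fastagent.secrets.yaml - assumed layout, for illustration only
anthropic:
  api_key: <your-anthropic-api-key>
openai:
  api_key: <your-openai-api-key>
```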
### Interactive Shell

![fast-agent](https://github.com/user-attachments/assets/3e692103-bf97-489a-b519-2d0fee036369)

## Documentation

The documentation site is included as a submodule in `docs/`. To work with the docs locally:

```bash
# Install docs dependencies (first time only)
uv run scripts/docs.py install

# Generate reference docs from source code
uv run scripts/docs.py generate

# Run the dev server (http://127.0.0.1:8000)
uv run scripts/docs.py serve

# Or generate and serve in one command
uv run scripts/docs.py all
```

The generator extracts configuration field descriptions, model aliases, and API references directly from the source code to keep the documentation in sync.

## Project Notes

`fast-agent` builds on the [`mcp-agent`](https://github.com/lastmile-ai/mcp-agent) project by Sarmad Qadri.

### Contributing

Contributions and PRs are welcome - feel free to raise issues to discuss. Full guidelines for contributing and a roadmap are coming soon. Get in touch!