{"id":43213768,"url":"https://modelcontextprotocol.github.io/python-sdk/","last_synced_at":"2026-02-12T06:01:33.323Z","repository":{"id":264668147,"uuid":"862584018","full_name":"modelcontextprotocol/python-sdk","owner":"modelcontextprotocol","description":"The official Python SDK for Model Context Protocol servers and clients","archived":false,"fork":false,"pushed_at":"2026-02-08T13:20:34.000Z","size":5529,"stargazers_count":21568,"open_issues_count":366,"forks_count":3069,"subscribers_count":151,"default_branch":"main","last_synced_at":"2026-02-09T04:43:06.895Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://modelcontextprotocol.github.io/python-sdk/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/modelcontextprotocol.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-09-24T21:01:35.000Z","updated_at":"2026-02-09T03:57:43.000Z","dependencies_parsed_at":"2024-11-25T17:31:22.609Z","dependency_job_id":"f1b65138-5ad7-4638-989c-e320f60d990f","html_url":"https://github.com/modelcontextprotocol/python-sdk","commit_stats":null,"previous_names":["modelcontextprotocol/python-sdk"],"tags_count":59,"template":false,"template_full_name":null,"purl":"pkg:github/modelcontextprotocol/python-sdk","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelcontextprotocol%2Fpython-sdk","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelcontextprotocol%2Fpython-sdk/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelcontextprotocol%2Fpython-sdk/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelcontextprotocol%2Fpython-sdk/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/modelcontextprotocol","download_url":"https://codeload.github.com/modelcontextprotocol/python-sdk/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelcontextprotocol%2Fpython-sdk/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29360277,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-12T01:03:07.613Z","status":"online","status_checked_at":"2026-02-12T02:00:06.911Z","response_time":55,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2026-02-01T08:00:25.455Z","updated_at":"2026-02-12T06:01:33.316Z","avatar_url":"https://github.com/modelcontextprotocol.png","language":"Python","readme":"# MCP Python SDK\n\n\u003cdiv align=\"center\"\u003e\n\n\u003cstrong\u003ePython implementation of the Model Context Protocol (MCP)\u003c/strong\u003e\n\n[![PyPI][pypi-badge]][pypi-url]\n[![MIT licensed][mit-badge]][mit-url]\n[![Python Version][python-badge]][python-url]\n[![Documentation][docs-badge]][docs-url]\n[![Protocol][protocol-badge]][protocol-url]\n[![Specification][spec-badge]][spec-url]\n\n\u003c/div\u003e\n\n\u003e [!IMPORTANT]\n\u003e **This is the `main` branch which contains v2 of the SDK (currently in development, pre-alpha).**\n\u003e\n\u003e We anticipate a stable v2 release in Q1 2026. Until then, **v1.x remains the recommended version** for production use. v1.x will continue to receive bug fixes and security updates for at least 6 months after v2 ships to give people time to upgrade.\n\u003e\n\u003e For v1 documentation and code, see the [`v1.x` branch](https://github.com/modelcontextprotocol/python-sdk/tree/v1.x).\n\n\u003c!-- omit in toc --\u003e\n## Table of Contents\n\n- [MCP Python SDK](#mcp-python-sdk)\n  - [Overview](#overview)\n  - [Installation](#installation)\n    - [Adding MCP to your python project](#adding-mcp-to-your-python-project)\n    - [Running the standalone MCP development tools](#running-the-standalone-mcp-development-tools)\n  - [Quickstart](#quickstart)\n  - [What is MCP?](#what-is-mcp)\n  - [Core Concepts](#core-concepts)\n    - [Server](#server)\n    - [Resources](#resources)\n    - [Tools](#tools)\n      - [Structured Output](#structured-output)\n    - [Prompts](#prompts)\n    - [Images](#images)\n    - [Context](#context)\n      - [Getting Context in Functions](#getting-context-in-functions)\n      - [Context Properties and Methods](#context-properties-and-methods)\n    - [Completions](#completions)\n    - [Elicitation](#elicitation)\n    - [Sampling](#sampling)\n    - [Logging and Notifications](#logging-and-notifications)\n    - [Authentication](#authentication)\n    - [MCPServer Properties](#mcpserver-properties)\n    - [Session Properties and Methods](#session-properties-and-methods)\n    - [Request Context Properties](#request-context-properties)\n  - [Running Your Server](#running-your-server)\n    - [Development Mode](#development-mode)\n    - [Claude Desktop Integration](#claude-desktop-integration)\n    - [Direct Execution](#direct-execution)\n    - [Streamable HTTP Transport](#streamable-http-transport)\n      - [CORS Configuration for Browser-Based Clients](#cors-configuration-for-browser-based-clients)\n    - [Mounting to an Existing ASGI Server](#mounting-to-an-existing-asgi-server)\n      - [StreamableHTTP servers](#streamablehttp-servers)\n        - [Basic mounting](#basic-mounting)\n        - [Host-based routing](#host-based-routing)\n        - [Multiple servers with path configuration](#multiple-servers-with-path-configuration)\n        - [Path configuration at initialization](#path-configuration-at-initialization)\n      - [SSE servers](#sse-servers)\n  - [Advanced Usage](#advanced-usage)\n    - [Low-Level Server](#low-level-server)\n      - [Structured Output Support](#structured-output-support)\n    - [Pagination (Advanced)](#pagination-advanced)\n    - [Writing MCP Clients](#writing-mcp-clients)\n    - [Client Display Utilities](#client-display-utilities)\n    - [OAuth Authentication for Clients](#oauth-authentication-for-clients)\n    - [Parsing Tool Results](#parsing-tool-results)\n    - [MCP Primitives](#mcp-primitives)\n    - [Server Capabilities](#server-capabilities)\n  - [Documentation](#documentation)\n  - [Contributing](#contributing)\n  - [License](#license)\n\n[pypi-badge]: https://img.shields.io/pypi/v/mcp.svg\n[pypi-url]: https://pypi.org/project/mcp/\n[mit-badge]: https://img.shields.io/pypi/l/mcp.svg\n[mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE\n[python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg\n[python-url]: https://www.python.org/downloads/\n[docs-badge]: https://img.shields.io/badge/docs-python--sdk-blue.svg\n[docs-url]: https://modelcontextprotocol.github.io/python-sdk/\n[protocol-badge]: https://img.shields.io/badge/protocol-modelcontextprotocol.io-blue.svg\n[protocol-url]: https://modelcontextprotocol.io\n[spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg\n[spec-url]: https://modelcontextprotocol.io/specification/latest\n\n## Overview\n\nThe Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:\n\n- Build MCP clients that can connect to any MCP server\n- Create MCP servers that expose resources, prompts and tools\n- Use standard transports like stdio, SSE, and Streamable HTTP\n- Handle all MCP protocol messages and lifecycle events\n\n## Installation\n\n### Adding MCP to your python project\n\nWe recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects.\n\nIf you haven't created a uv-managed project yet, create one:\n\n   ```bash\n   uv init mcp-server-demo\n   cd mcp-server-demo\n   ```\n\n   Then add MCP to your project dependencies:\n\n   ```bash\n   uv add \"mcp[cli]\"\n   ```\n\nAlternatively, for projects using pip for dependencies:\n\n```bash\npip install \"mcp[cli]\"\n```\n\n### Running the standalone MCP development tools\n\nTo run the mcp command with uv:\n\n```bash\nuv run mcp\n```\n\n## Quickstart\n\nLet's create a simple MCP server that exposes a calculator tool and some data:\n\n\u003c!-- snippet-source examples/snippets/servers/mcpserver_quickstart.py --\u003e\n```python\n\"\"\"MCPServer quickstart example.\n\nRun from the repository root:\n    uv run examples/snippets/servers/mcpserver_quickstart.py\n\"\"\"\n\nfrom mcp.server.mcpserver import MCPServer\n\n# Create an MCP server\nmcp = MCPServer(\"Demo\")\n\n\n# Add an addition tool\n@mcp.tool()\ndef add(a: int, b: int) -\u003e int:\n    \"\"\"Add two numbers\"\"\"\n    return a + b\n\n\n# Add a dynamic greeting resource\n@mcp.resource(\"greeting://{name}\")\ndef get_greeting(name: str) -\u003e str:\n    \"\"\"Get a personalized greeting\"\"\"\n    return f\"Hello, {name}!\"\n\n\n# Add a prompt\n@mcp.prompt()\ndef greet_user(name: str, style: str = \"friendly\") -\u003e str:\n    \"\"\"Generate a greeting prompt\"\"\"\n    styles = {\n        \"friendly\": \"Please write a warm, friendly greeting\",\n        \"formal\": \"Please write a formal, professional greeting\",\n        \"casual\": \"Please write a casual, relaxed greeting\",\n    }\n\n    return f\"{styles.get(style, styles['friendly'])} for someone named {name}.\"\n\n\n# Run with streamable HTTP transport\nif __name__ == \"__main__\":\n    mcp.run(transport=\"streamable-http\", json_response=True)\n```\n\n_Full example: [examples/snippets/servers/mcpserver_quickstart.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/mcpserver_quickstart.py)_\n\u003c!-- /snippet-source --\u003e\n\nYou can install this server in [Claude Code](https://docs.claude.com/en/docs/claude-code/mcp) and interact with it right away. First, run the server:\n\n```bash\nuv run --with mcp examples/snippets/servers/mcpserver_quickstart.py\n```\n\nThen add it to Claude Code:\n\n```bash\nclaude mcp add --transport http my-server http://localhost:8000/mcp\n```\n\nAlternatively, you can test it with the MCP Inspector. Start the server as above, then in a separate terminal:\n\n```bash\nnpx -y @modelcontextprotocol/inspector\n```\n\nIn the inspector UI, connect to `http://localhost:8000/mcp`.\n\n## What is MCP?\n\nThe [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:\n\n- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)\n- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)\n- Define interaction patterns through **Prompts** (reusable templates for LLM interactions)\n- And more!\n\n## Core Concepts\n\n### Server\n\nThe MCPServer server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:\n\n\u003c!-- snippet-source examples/snippets/servers/lifespan_example.py --\u003e\n```python\n\"\"\"Example showing lifespan support for startup/shutdown with strong typing.\"\"\"\n\nfrom collections.abc import AsyncIterator\nfrom contextlib import asynccontextmanager\nfrom dataclasses import dataclass\n\nfrom mcp.server.mcpserver import Context, MCPServer\nfrom mcp.server.session import ServerSession\n\n\n# Mock database class for example\nclass Database:\n    \"\"\"Mock database class for example.\"\"\"\n\n    @classmethod\n    async def connect(cls) -\u003e \"Database\":\n        \"\"\"Connect to database.\"\"\"\n        return cls()\n\n    async def disconnect(self) -\u003e None:\n        \"\"\"Disconnect from database.\"\"\"\n        pass\n\n    def query(self) -\u003e str:\n        \"\"\"Execute a query.\"\"\"\n        return \"Query result\"\n\n\n@dataclass\nclass AppContext:\n    \"\"\"Application context with typed dependencies.\"\"\"\n\n    db: Database\n\n\n@asynccontextmanager\nasync def app_lifespan(server: MCPServer) -\u003e AsyncIterator[AppContext]:\n    \"\"\"Manage application lifecycle with type-safe context.\"\"\"\n    # Initialize on startup\n    db = await Database.connect()\n    try:\n        yield AppContext(db=db)\n    finally:\n        # Cleanup on shutdown\n        await db.disconnect()\n\n\n# Pass lifespan to server\nmcp = MCPServer(\"My App\", lifespan=app_lifespan)\n\n\n# Access type-safe lifespan context in tools\n@mcp.tool()\ndef query_db(ctx: Context[ServerSession, AppContext]) -\u003e str:\n    \"\"\"Tool that uses initialized resources.\"\"\"\n    db = ctx.request_context.lifespan_context.db\n    return db.query()\n```\n\n_Full example: [examples/snippets/servers/lifespan_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lifespan_example.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Resources\n\nResources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:\n\n\u003c!-- snippet-source examples/snippets/servers/basic_resource.py --\u003e\n```python\nfrom mcp.server.mcpserver import MCPServer\n\nmcp = MCPServer(name=\"Resource Example\")\n\n\n@mcp.resource(\"file://documents/{name}\")\ndef read_document(name: str) -\u003e str:\n    \"\"\"Read a document by name.\"\"\"\n    # This would normally read from disk\n    return f\"Content of {name}\"\n\n\n@mcp.resource(\"config://settings\")\ndef get_settings() -\u003e str:\n    \"\"\"Get application settings.\"\"\"\n    return \"\"\"{\n  \"theme\": \"dark\",\n  \"language\": \"en\",\n  \"debug\": false\n}\"\"\"\n```\n\n_Full example: [examples/snippets/servers/basic_resource.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_resource.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Tools\n\nTools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:\n\n\u003c!-- snippet-source examples/snippets/servers/basic_tool.py --\u003e\n```python\nfrom mcp.server.mcpserver import MCPServer\n\nmcp = MCPServer(name=\"Tool Example\")\n\n\n@mcp.tool()\ndef sum(a: int, b: int) -\u003e int:\n    \"\"\"Add two numbers together.\"\"\"\n    return a + b\n\n\n@mcp.tool()\ndef get_weather(city: str, unit: str = \"celsius\") -\u003e str:\n    \"\"\"Get weather for a city.\"\"\"\n    # This would normally call a weather API\n    return f\"Weather in {city}: 22degrees{unit[0].upper()}\"\n```\n\n_Full example: [examples/snippets/servers/basic_tool.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_tool.py)_\n\u003c!-- /snippet-source --\u003e\n\nTools can optionally receive a Context object by including a parameter with the `Context` type annotation. This context is automatically injected by the MCPServer framework and provides access to MCP capabilities:\n\n\u003c!-- snippet-source examples/snippets/servers/tool_progress.py --\u003e\n```python\nfrom mcp.server.mcpserver import Context, MCPServer\nfrom mcp.server.session import ServerSession\n\nmcp = MCPServer(name=\"Progress Example\")\n\n\n@mcp.tool()\nasync def long_running_task(task_name: str, ctx: Context[ServerSession, None], steps: int = 5) -\u003e str:\n    \"\"\"Execute a task with progress updates.\"\"\"\n    await ctx.info(f\"Starting: {task_name}\")\n\n    for i in range(steps):\n        progress = (i + 1) / steps\n        await ctx.report_progress(\n            progress=progress,\n            total=1.0,\n            message=f\"Step {i + 1}/{steps}\",\n        )\n        await ctx.debug(f\"Completed step {i + 1}\")\n\n    return f\"Task '{task_name}' completed\"\n```\n\n_Full example: [examples/snippets/servers/tool_progress.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/tool_progress.py)_\n\u003c!-- /snippet-source --\u003e\n\n#### Structured Output\n\nTools will return structured results by default, if their return type\nannotation is compatible. Otherwise, they will return unstructured results.\n\nStructured output supports these return types:\n\n- Pydantic models (BaseModel subclasses)\n- TypedDicts\n- Dataclasses and other classes with type hints\n- `dict[str, T]` (where T is any JSON-serializable type)\n- Primitive types (str, int, float, bool, bytes, None) - wrapped in `{\"result\": value}`\n- Generic types (list, tuple, Union, Optional, etc.) - wrapped in `{\"result\": value}`\n\nClasses without type hints cannot be serialized for structured output. Only\nclasses with properly annotated attributes will be converted to Pydantic models\nfor schema generation and validation.\n\nStructured results are automatically validated against the output schema\ngenerated from the annotation. This ensures the tool returns well-typed,\nvalidated data that clients can easily process.\n\n**Note:** For backward compatibility, unstructured results are also\nreturned. Unstructured results are provided for backward compatibility\nwith previous versions of the MCP specification, and are quirks-compatible\nwith previous versions of MCPServer in the current version of the SDK.\n\n**Note:** In cases where a tool function's return type annotation\ncauses the tool to be classified as structured _and this is undesirable_,\nthe  classification can be suppressed by passing `structured_output=False`\nto the `@tool` decorator.\n\n##### Advanced: Direct CallToolResult\n\nFor full control over tool responses including the `_meta` field (for passing data to client applications without exposing it to the model), you can return `CallToolResult` directly:\n\n\u003c!-- snippet-source examples/snippets/servers/direct_call_tool_result.py --\u003e\n```python\n\"\"\"Example showing direct CallToolResult return for advanced control.\"\"\"\n\nfrom typing import Annotated\n\nfrom pydantic import BaseModel\n\nfrom mcp.server.mcpserver import MCPServer\nfrom mcp.types import CallToolResult, TextContent\n\nmcp = MCPServer(\"CallToolResult Example\")\n\n\nclass ValidationModel(BaseModel):\n    \"\"\"Model for validating structured output.\"\"\"\n\n    status: str\n    data: dict[str, int]\n\n\n@mcp.tool()\ndef advanced_tool() -\u003e CallToolResult:\n    \"\"\"Return CallToolResult directly for full control including _meta field.\"\"\"\n    return CallToolResult(\n        content=[TextContent(type=\"text\", text=\"Response visible to the model\")],\n        _meta={\"hidden\": \"data for client applications only\"},\n    )\n\n\n@mcp.tool()\ndef validated_tool() -\u003e Annotated[CallToolResult, ValidationModel]:\n    \"\"\"Return CallToolResult with structured output validation.\"\"\"\n    return CallToolResult(\n        content=[TextContent(type=\"text\", text=\"Validated response\")],\n        structured_content={\"status\": \"success\", \"data\": {\"result\": 42}},\n        _meta={\"internal\": \"metadata\"},\n    )\n\n\n@mcp.tool()\ndef empty_result_tool() -\u003e CallToolResult:\n    \"\"\"For empty results, return CallToolResult with empty content.\"\"\"\n    return CallToolResult(content=[])\n```\n\n_Full example: [examples/snippets/servers/direct_call_tool_result.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/direct_call_tool_result.py)_\n\u003c!-- /snippet-source --\u003e\n\n**Important:** `CallToolResult` must always be returned (no `Optional` or `Union`). For empty results, use `CallToolResult(content=[])`. For optional simple types, use `str | None` without `CallToolResult`.\n\n\u003c!-- snippet-source examples/snippets/servers/structured_output.py --\u003e\n```python\n\"\"\"Example showing structured output with tools.\"\"\"\n\nfrom typing import TypedDict\n\nfrom pydantic import BaseModel, Field\n\nfrom mcp.server.mcpserver import MCPServer\n\nmcp = MCPServer(\"Structured Output Example\")\n\n\n# Using Pydantic models for rich structured data\nclass WeatherData(BaseModel):\n    \"\"\"Weather information structure.\"\"\"\n\n    temperature: float = Field(description=\"Temperature in Celsius\")\n    humidity: float = Field(description=\"Humidity percentage\")\n    condition: str\n    wind_speed: float\n\n\n@mcp.tool()\ndef get_weather(city: str) -\u003e WeatherData:\n    \"\"\"Get weather for a city - returns structured data.\"\"\"\n    # Simulated weather data\n    return WeatherData(\n        temperature=22.5,\n        humidity=45.0,\n        condition=\"sunny\",\n        wind_speed=5.2,\n    )\n\n\n# Using TypedDict for simpler structures\nclass LocationInfo(TypedDict):\n    latitude: float\n    longitude: float\n    name: str\n\n\n@mcp.tool()\ndef get_location(address: str) -\u003e LocationInfo:\n    \"\"\"Get location coordinates\"\"\"\n    return LocationInfo(latitude=51.5074, longitude=-0.1278, name=\"London, UK\")\n\n\n# Using dict[str, Any] for flexible schemas\n@mcp.tool()\ndef get_statistics(data_type: str) -\u003e dict[str, float]:\n    \"\"\"Get various statistics\"\"\"\n    return {\"mean\": 42.5, \"median\": 40.0, \"std_dev\": 5.2}\n\n\n# Ordinary classes with type hints work for structured output\nclass UserProfile:\n    name: str\n    age: int\n    email: str | None = None\n\n    def __init__(self, name: str, age: int, email: str | None = None):\n        self.name = name\n        self.age = age\n        self.email = email\n\n\n@mcp.tool()\ndef get_user(user_id: str) -\u003e UserProfile:\n    \"\"\"Get user profile - returns structured data\"\"\"\n    return UserProfile(name=\"Alice\", age=30, email=\"alice@example.com\")\n\n\n# Classes WITHOUT type hints cannot be used for structured output\nclass UntypedConfig:\n    def __init__(self, setting1, setting2):  # type: ignore[reportMissingParameterType]\n        self.setting1 = setting1\n        self.setting2 = setting2\n\n\n@mcp.tool()\ndef get_config() -\u003e UntypedConfig:\n    \"\"\"This returns unstructured output - no schema generated\"\"\"\n    return UntypedConfig(\"value1\", \"value2\")\n\n\n# Lists and other types are wrapped automatically\n@mcp.tool()\ndef list_cities() -\u003e list[str]:\n    \"\"\"Get a list of cities\"\"\"\n    return [\"London\", \"Paris\", \"Tokyo\"]\n    # Returns: {\"result\": [\"London\", \"Paris\", \"Tokyo\"]}\n\n\n@mcp.tool()\ndef get_temperature(city: str) -\u003e float:\n    \"\"\"Get temperature as a simple float\"\"\"\n    return 22.5\n    # Returns: {\"result\": 22.5}\n```\n\n_Full example: [examples/snippets/servers/structured_output.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/structured_output.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Prompts\n\nPrompts are reusable templates that help LLMs interact with your server effectively:\n\n\u003c!-- snippet-source examples/snippets/servers/basic_prompt.py --\u003e\n```python\nfrom mcp.server.mcpserver import MCPServer\nfrom mcp.server.mcpserver.prompts import base\n\nmcp = MCPServer(name=\"Prompt Example\")\n\n\n@mcp.prompt(title=\"Code Review\")\ndef review_code(code: str) -\u003e str:\n    return f\"Please review this code:\\n\\n{code}\"\n\n\n@mcp.prompt(title=\"Debug Assistant\")\ndef debug_error(error: str) -\u003e list[base.Message]:\n    return [\n        base.UserMessage(\"I'm seeing this error:\"),\n        base.UserMessage(error),\n        base.AssistantMessage(\"I'll help debug that. What have you tried so far?\"),\n    ]\n```\n\n_Full example: [examples/snippets/servers/basic_prompt.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_prompt.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Icons\n\nMCP servers can provide icons for UI display. Icons can be added to the server implementation, tools, resources, and prompts:\n\n```python\nfrom mcp.server.mcpserver import MCPServer, Icon\n\n# Create an icon from a file path or URL\nicon = Icon(\n    src=\"icon.png\",\n    mimeType=\"image/png\",\n    sizes=\"64x64\"\n)\n\n# Add icons to server\nmcp = MCPServer(\n    \"My Server\",\n    website_url=\"https://example.com\",\n    icons=[icon]\n)\n\n# Add icons to tools, resources, and prompts\n@mcp.tool(icons=[icon])\ndef my_tool():\n    \"\"\"Tool with an icon.\"\"\"\n    return \"result\"\n\n@mcp.resource(\"demo://resource\", icons=[icon])\ndef my_resource():\n    \"\"\"Resource with an icon.\"\"\"\n    return \"content\"\n```\n\n_Full example: [examples/mcpserver/icons_demo.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/mcpserver/icons_demo.py)_\n\n### Images\n\nMCPServer provides an `Image` class that automatically handles image data:\n\n\u003c!-- snippet-source examples/snippets/servers/images.py --\u003e\n```python\n\"\"\"Example showing image handling with MCPServer.\"\"\"\n\nfrom PIL import Image as PILImage\n\nfrom mcp.server.mcpserver import Image, MCPServer\n\nmcp = MCPServer(\"Image Example\")\n\n\n@mcp.tool()\ndef create_thumbnail(image_path: str) -\u003e Image:\n    \"\"\"Create a thumbnail from an image\"\"\"\n    img = PILImage.open(image_path)\n    img.thumbnail((100, 100))\n    return Image(data=img.tobytes(), format=\"png\")\n```\n\n_Full example: [examples/snippets/servers/images.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/images.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Context\n\nThe Context object is automatically injected into tool and resource functions that request it via type hints. It provides access to MCP capabilities like logging, progress reporting, resource reading, user interaction, and request metadata.\n\n#### Getting Context in Functions\n\nTo use context in a tool or resource function, add a parameter with the `Context` type annotation:\n\n```python\nfrom mcp.server.mcpserver import Context, MCPServer\n\nmcp = MCPServer(name=\"Context Example\")\n\n\n@mcp.tool()\nasync def my_tool(x: int, ctx: Context) -\u003e str:\n    \"\"\"Tool that uses context capabilities.\"\"\"\n    # The context parameter can have any name as long as it's type-annotated\n    return await process_with_context(x, ctx)\n```\n\n#### Context Properties and Methods\n\nThe Context object provides the following capabilities:\n\n- `ctx.request_id` - Unique ID for the current request\n- `ctx.client_id` - Client ID if available\n- `ctx.mcp_server` - Access to the MCPServer server instance (see [MCPServer Properties](#mcpserver-properties))\n- `ctx.session` - Access to the underlying session for advanced communication (see [Session Properties and Methods](#session-properties-and-methods))\n- `ctx.request_context` - Access to request-specific data and lifespan resources (see [Request Context Properties](#request-context-properties))\n- `await ctx.debug(message)` - Send debug log message\n- `await ctx.info(message)` - Send info log message\n- `await ctx.warning(message)` - Send warning log message\n- `await ctx.error(message)` - Send error log message\n- `await ctx.log(level, message, logger_name=None)` - Send log with custom level\n- `await ctx.report_progress(progress, total=None, message=None)` - Report operation progress\n- `await ctx.read_resource(uri)` - Read a resource by URI\n- `await ctx.elicit(message, schema)` - Request additional information from user with validation\n\n\u003c!-- snippet-source examples/snippets/servers/tool_progress.py --\u003e\n```python\nfrom mcp.server.mcpserver import Context, MCPServer\nfrom mcp.server.session import ServerSession\n\nmcp = MCPServer(name=\"Progress Example\")\n\n\n@mcp.tool()\nasync def long_running_task(task_name: str, ctx: Context[ServerSession, None], steps: int = 5) -\u003e str:\n    \"\"\"Execute a task with progress updates.\"\"\"\n    await ctx.info(f\"Starting: {task_name}\")\n\n    for i in range(steps):\n        progress = (i + 1) / steps\n        await ctx.report_progress(\n            progress=progress,\n            total=1.0,\n            message=f\"Step {i + 1}/{steps}\",\n        )\n        await ctx.debug(f\"Completed step {i + 1}\")\n\n    return f\"Task '{task_name}' completed\"\n```\n\n_Full example: [examples/snippets/servers/tool_progress.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/tool_progress.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Completions\n\nMCP supports providing completion suggestions for prompt arguments and resource template parameters. With the context parameter, servers can provide completions based on previously resolved values:\n\nClient usage:\n\n\u003c!-- snippet-source examples/snippets/clients/completion_client.py --\u003e\n```python\n\"\"\"cd to the `examples/snippets` directory and run:\nuv run completion-client\n\"\"\"\n\nimport asyncio\nimport os\n\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\nfrom mcp.types import PromptReference, ResourceTemplateReference\n\n# Create server parameters for stdio connection\nserver_params = StdioServerParameters(\n    command=\"uv\",  # Using uv to run the server\n    args=[\"run\", \"server\", \"completion\", \"stdio\"],  # Server with completion support\n    env={\"UV_INDEX\": os.environ.get(\"UV_INDEX\", \"\")},\n)\n\n\nasync def run():\n    \"\"\"Run the completion client example.\"\"\"\n    async with stdio_client(server_params) as (read, write):\n        async with ClientSession(read, write) as session:\n            # Initialize the connection\n            await session.initialize()\n\n            # List available resource templates\n            templates = await session.list_resource_templates()\n            print(\"Available resource templates:\")\n            for template in templates.resource_templates:\n                print(f\"  - {template.uri_template}\")\n\n            # List available prompts\n            prompts = await session.list_prompts()\n            print(\"\\nAvailable prompts:\")\n            for prompt in prompts.prompts:\n                print(f\"  - {prompt.name}\")\n\n            # Complete resource template arguments\n            if templates.resource_templates:\n                template = templates.resource_templates[0]\n                print(f\"\\nCompleting arguments for resource template: {template.uri_template}\")\n\n                # Complete without context\n                result = await session.complete(\n                    ref=ResourceTemplateReference(type=\"ref/resource\", uri=template.uri_template),\n                    argument={\"name\": \"owner\", \"value\": \"model\"},\n                )\n                print(f\"Completions for 'owner' starting with 'model': {result.completion.values}\")\n\n                # Complete with context - repo suggestions based on owner\n                result = await session.complete(\n                    ref=ResourceTemplateReference(type=\"ref/resource\", uri=template.uri_template),\n                    argument={\"name\": \"repo\", \"value\": \"\"},\n                    context_arguments={\"owner\": \"modelcontextprotocol\"},\n                )\n                print(f\"Completions for 'repo' with owner='modelcontextprotocol': {result.completion.values}\")\n\n            # Complete prompt arguments\n            if prompts.prompts:\n                prompt_name = prompts.prompts[0].name\n                print(f\"\\nCompleting arguments for prompt: {prompt_name}\")\n\n                result = await session.complete(\n                    ref=PromptReference(type=\"ref/prompt\", name=prompt_name),\n                    argument={\"name\": \"style\", \"value\": \"\"},\n                )\n                print(f\"Completions for 'style' argument: {result.completion.values}\")\n\n\ndef main():\n    \"\"\"Entry point for the completion client.\"\"\"\n    asyncio.run(run())\n\n\nif __name__ == \"__main__\":\n    main()\n```\n\n_Full example: [examples/snippets/clients/completion_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/completion_client.py)_\n\u003c!-- /snippet-source --\u003e\n### Elicitation\n\nRequest additional information from users. This example shows an Elicitation during a Tool Call:\n\n\u003c!-- snippet-source examples/snippets/servers/elicitation.py --\u003e\n```python\n\"\"\"Elicitation examples demonstrating form and URL mode elicitation.\n\nForm mode elicitation collects structured, non-sensitive data through a schema.\nURL mode elicitation directs users to external URLs for sensitive operations\nlike OAuth flows, credential collection, or payment processing.\n\"\"\"\n\nimport uuid\n\nfrom pydantic import BaseModel, Field\n\nfrom mcp.server.mcpserver import Context, MCPServer\nfrom mcp.server.session import ServerSession\nfrom mcp.shared.exceptions import UrlElicitationRequiredError\nfrom mcp.types import ElicitRequestURLParams\n\nmcp = MCPServer(name=\"Elicitation Example\")\n\n\nclass BookingPreferences(BaseModel):\n    \"\"\"Schema for collecting user preferences.\"\"\"\n\n    checkAlternative: bool = Field(description=\"Would you like to check another date?\")\n    alternativeDate: str = Field(\n        default=\"2024-12-26\",\n        description=\"Alternative date (YYYY-MM-DD)\",\n    )\n\n\n@mcp.tool()\nasync def book_table(date: str, time: str, party_size: int, ctx: Context[ServerSession, None]) -\u003e str:\n    \"\"\"Book a table with date availability check.\n\n    This demonstrates form mode elicitation for collecting non-sensitive user input.\n    \"\"\"\n    # Check if date is available\n    if date == \"2024-12-25\":\n        # Date unavailable - ask user for alternative\n        result = await ctx.elicit(\n            message=(f\"No tables available for {party_size} on {date}. Would you like to try another date?\"),\n            schema=BookingPreferences,\n        )\n\n        if result.action == \"accept\" and result.data:\n            if result.data.checkAlternative:\n                return f\"[SUCCESS] Booked for {result.data.alternativeDate}\"\n            return \"[CANCELLED] No booking made\"\n        return \"[CANCELLED] Booking cancelled\"\n\n    # Date available\n    return f\"[SUCCESS] Booked for {date} at {time}\"\n\n\n@mcp.tool()\nasync def secure_payment(amount: float, ctx: Context[ServerSession, None]) -\u003e str:\n    \"\"\"Process a secure payment requiring URL confirmation.\n\n    This demonstrates URL mode elicitation using ctx.elicit_url() for\n    operations that require out-of-band user interaction.\n    \"\"\"\n    elicitation_id = str(uuid.uuid4())\n\n    result = await ctx.elicit_url(\n        message=f\"Please confirm payment of ${amount:.2f}\",\n        url=f\"https://payments.example.com/confirm?amount={amount}\u0026id={elicitation_id}\",\n        elicitation_id=elicitation_id,\n    )\n\n    if result.action == \"accept\":\n        # In a real app, the payment confirmation would happen out-of-band\n        # and you'd verify the payment status from your backend\n        return f\"Payment of ${amount:.2f} initiated - check your browser to complete\"\n    elif result.action == \"decline\":\n        return \"Payment declined by user\"\n    return \"Payment cancelled\"\n\n\n@mcp.tool()\nasync def connect_service(service_name: str, ctx: Context[ServerSession, None]) -\u003e str:\n    \"\"\"Connect to a third-party service requiring OAuth authorization.\n\n    This demonstrates the \"throw error\" pattern using UrlElicitationRequiredError.\n    Use this pattern when the tool cannot proceed without user authorization.\n    \"\"\"\n    elicitation_id = str(uuid.uuid4())\n\n    # Raise UrlElicitationRequiredError to signal that the client must complete\n    # a URL elicitation before this request can be processed.\n    # The MCP framework will convert this to a -32042 error response.\n    raise UrlElicitationRequiredError(\n        [\n            ElicitRequestURLParams(\n                mode=\"url\",\n                message=f\"Authorization required to connect to {service_name}\",\n                url=f\"https://{service_name}.example.com/oauth/authorize?elicit={elicitation_id}\",\n                elicitation_id=elicitation_id,\n            )\n        ]\n    )\n```\n\n_Full example: [examples/snippets/servers/elicitation.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/elicitation.py)_\n\u003c!-- /snippet-source --\u003e\n\nElicitation schemas support default values for all field types. Default values are automatically included in the JSON schema sent to clients, allowing them to pre-populate forms.\n\nThe `elicit()` method returns an `ElicitationResult` with:\n\n- `action`: \"accept\", \"decline\", or \"cancel\"\n- `data`: The validated response (only when accepted)\n- `validation_error`: Any validation error message\n\n### Sampling\n\nTools can interact with LLMs through sampling (generating text):\n\n\u003c!-- snippet-source examples/snippets/servers/sampling.py --\u003e\n```python\nfrom mcp.server.mcpserver import Context, MCPServer\nfrom mcp.server.session import ServerSession\nfrom mcp.types import SamplingMessage, TextContent\n\nmcp = MCPServer(name=\"Sampling Example\")\n\n\n@mcp.tool()\nasync def generate_poem(topic: str, ctx: Context[ServerSession, None]) -\u003e str:\n    \"\"\"Generate a poem using LLM sampling.\"\"\"\n    prompt = f\"Write a short poem about {topic}\"\n\n    result = await ctx.session.create_message(\n        messages=[\n            SamplingMessage(\n                role=\"user\",\n                content=TextContent(type=\"text\", text=prompt),\n            )\n        ],\n        max_tokens=100,\n    )\n\n    # Since we're not passing tools param, result.content is single content\n    if result.content.type == \"text\":\n        return result.content.text\n    return str(result.content)\n```\n\n_Full example: [examples/snippets/servers/sampling.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/sampling.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Logging and Notifications\n\nTools can send logs and notifications through the context:\n\n\u003c!-- snippet-source examples/snippets/servers/notifications.py --\u003e\n```python\nfrom mcp.server.mcpserver import Context, MCPServer\nfrom mcp.server.session import ServerSession\n\nmcp = MCPServer(name=\"Notifications Example\")\n\n\n@mcp.tool()\nasync def process_data(data: str, ctx: Context[ServerSession, None]) -\u003e str:\n    \"\"\"Process data with logging.\"\"\"\n    # Different log levels\n    await ctx.debug(f\"Debug: Processing '{data}'\")\n    await ctx.info(\"Info: Starting processing\")\n    await ctx.warning(\"Warning: This is experimental\")\n    await ctx.error(\"Error: (This is just a demo)\")\n\n    # Notify about resource changes\n    await ctx.session.send_resource_list_changed()\n\n    return f\"Processed: {data}\"\n```\n\n_Full example: [examples/snippets/servers/notifications.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/notifications.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Authentication\n\nAuthentication can be used by servers that want to expose tools accessing protected resources.\n\n`mcp.server.auth` implements OAuth 2.1 resource server functionality, where MCP servers act as Resource Servers (RS) that validate tokens issued by separate Authorization Servers (AS). This follows the [MCP authorization specification](https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization) and implements RFC 9728 (Protected Resource Metadata) for AS discovery.\n\nMCP servers can use authentication by providing an implementation of the `TokenVerifier` protocol:\n\n\u003c!-- snippet-source examples/snippets/servers/oauth_server.py --\u003e\n```python\n\"\"\"Run from the repository root:\nuv run examples/snippets/servers/oauth_server.py\n\"\"\"\n\nfrom pydantic import AnyHttpUrl\n\nfrom mcp.server.auth.provider import AccessToken, TokenVerifier\nfrom mcp.server.auth.settings import AuthSettings\nfrom mcp.server.mcpserver import MCPServer\n\n\nclass SimpleTokenVerifier(TokenVerifier):\n    \"\"\"Simple token verifier for demonstration.\"\"\"\n\n    async def verify_token(self, token: str) -\u003e AccessToken | None:\n        pass  # This is where you would implement actual token validation\n\n\n# Create MCPServer instance as a Resource Server\nmcp = MCPServer(\n    \"Weather Service\",\n    # Token verifier for authentication\n    token_verifier=SimpleTokenVerifier(),\n    # Auth settings for RFC 9728 Protected Resource Metadata\n    auth=AuthSettings(\n        issuer_url=AnyHttpUrl(\"https://auth.example.com\"),  # Authorization Server URL\n        resource_server_url=AnyHttpUrl(\"http://localhost:3001\"),  # This server's URL\n        required_scopes=[\"user\"],\n    ),\n)\n\n\n@mcp.tool()\nasync def get_weather(city: str = \"London\") -\u003e dict[str, str]:\n    \"\"\"Get weather data for a city\"\"\"\n    return {\n        \"city\": city,\n        \"temperature\": \"22\",\n        \"condition\": \"Partly cloudy\",\n        \"humidity\": \"65%\",\n    }\n\n\nif __name__ == \"__main__\":\n    mcp.run(transport=\"streamable-http\", json_response=True)\n```\n\n_Full example: [examples/snippets/servers/oauth_server.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/oauth_server.py)_\n\u003c!-- /snippet-source --\u003e\n\nFor a complete example with separate Authorization Server and Resource Server implementations, see [`examples/servers/simple-auth/`](examples/servers/simple-auth/).\n\n**Architecture:**\n\n- **Authorization Server (AS)**: Handles OAuth flows, user authentication, and token issuance\n- **Resource Server (RS)**: Your MCP server that validates tokens and serves protected resources\n- **Client**: Discovers AS through RFC 9728, obtains tokens, and uses them with the MCP server\n\nSee [TokenVerifier](src/mcp/server/auth/provider.py) for more details on implementing token validation.\n\n### MCPServer Properties\n\nThe MCPServer server instance accessible via `ctx.mcp_server` provides access to server configuration and metadata:\n\n- `ctx.mcp_server.name` - The server's name as defined during initialization\n- `ctx.mcp_server.instructions` - Server instructions/description provided to clients\n- `ctx.mcp_server.website_url` - Optional website URL for the server\n- `ctx.mcp_server.icons` - Optional list of icons for UI display\n- `ctx.mcp_server.settings` - Complete server configuration object containing:\n  - `debug` - Debug mode flag\n  - `log_level` - Current logging level\n  - `host` and `port` - Server network configuration\n  - `sse_path`, `streamable_http_path` - Transport paths\n  - `stateless_http` - Whether the server operates in stateless mode\n  - And other configuration options\n\n```python\n@mcp.tool()\ndef server_info(ctx: Context) -\u003e dict:\n    \"\"\"Get information about the current server.\"\"\"\n    return {\n        \"name\": ctx.mcp_server.name,\n        \"instructions\": ctx.mcp_server.instructions,\n        \"debug_mode\": ctx.mcp_server.settings.debug,\n        \"log_level\": ctx.mcp_server.settings.log_level,\n        \"host\": ctx.mcp_server.settings.host,\n        \"port\": ctx.mcp_server.settings.port,\n    }\n```\n\n### Session Properties and Methods\n\nThe session object accessible via `ctx.session` provides advanced control over client communication:\n\n- `ctx.session.client_params` - Client initialization parameters and declared capabilities\n- `await ctx.session.send_log_message(level, data, logger)` - Send log messages with full control\n- `await ctx.session.create_message(messages, max_tokens)` - Request LLM sampling/completion\n- `await ctx.session.send_progress_notification(token, progress, total, message)` - Direct progress updates\n- `await ctx.session.send_resource_updated(uri)` - Notify clients that a specific resource changed\n- `await ctx.session.send_resource_list_changed()` - Notify clients that the resource list changed\n- `await ctx.session.send_tool_list_changed()` - Notify clients that the tool list changed\n- `await ctx.session.send_prompt_list_changed()` - Notify clients that the prompt list changed\n\n```python\n@mcp.tool()\nasync def notify_data_update(resource_uri: str, ctx: Context) -\u003e str:\n    \"\"\"Update data and notify clients of the change.\"\"\"\n    # Perform data update logic here\n\n    # Notify clients that this specific resource changed\n    await ctx.session.send_resource_updated(AnyUrl(resource_uri))\n\n    # If this affects the overall resource list, notify about that too\n    await ctx.session.send_resource_list_changed()\n\n    return f\"Updated {resource_uri} and notified clients\"\n```\n\n### Request Context Properties\n\nThe request context accessible via `ctx.request_context` contains request-specific information and resources:\n\n- `ctx.request_context.lifespan_context` - Access to resources initialized during server startup\n  - Database connections, configuration objects, shared services\n  - Type-safe access to resources defined in your server's lifespan function\n- `ctx.request_context.meta` - Request metadata from the client including:\n  - `progressToken` - Token for progress notifications\n  - Other client-provided metadata\n- `ctx.request_context.request` - The original MCP request object for advanced processing\n- `ctx.request_context.request_id` - Unique identifier for this request\n\n```python\n# Example with typed lifespan context\n@dataclass\nclass AppContext:\n    db: Database\n    config: AppConfig\n\n@mcp.tool()\ndef query_with_config(query: str, ctx: Context) -\u003e str:\n    \"\"\"Execute a query using shared database and configuration.\"\"\"\n    # Access typed lifespan context\n    app_ctx: AppContext = ctx.request_context.lifespan_context\n\n    # Use shared resources\n    connection = app_ctx.db\n    settings = app_ctx.config\n\n    # Execute query with configuration\n    result = connection.execute(query, timeout=settings.query_timeout)\n    return str(result)\n```\n\n_Full lifespan example: [examples/snippets/servers/lifespan_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lifespan_example.py)_\n\n## Running Your Server\n\n### Development Mode\n\nThe fastest way to test and debug your server is with the MCP Inspector:\n\n```bash\nuv run mcp dev server.py\n\n# Add dependencies\nuv run mcp dev server.py --with pandas --with numpy\n\n# Mount local code\nuv run mcp dev server.py --with-editable .\n```\n\n### Claude Desktop Integration\n\nOnce your server is ready, install it in Claude Desktop:\n\n```bash\nuv run mcp install server.py\n\n# Custom name\nuv run mcp install server.py --name \"My Analytics Server\"\n\n# Environment variables\nuv run mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...\nuv run mcp install server.py -f .env\n```\n\n### Direct Execution\n\nFor advanced scenarios like custom deployments:\n\n\u003c!-- snippet-source examples/snippets/servers/direct_execution.py --\u003e\n```python\n\"\"\"Example showing direct execution of an MCP server.\n\nThis is the simplest way to run an MCP server directly.\ncd to the `examples/snippets` directory and run:\n    uv run direct-execution-server\n    or\n    python servers/direct_execution.py\n\"\"\"\n\nfrom mcp.server.mcpserver import MCPServer\n\nmcp = MCPServer(\"My App\")\n\n\n@mcp.tool()\ndef hello(name: str = \"World\") -\u003e str:\n    \"\"\"Say hello to someone.\"\"\"\n    return f\"Hello, {name}!\"\n\n\ndef main():\n    \"\"\"Entry point for the direct execution server.\"\"\"\n    mcp.run()\n\n\nif __name__ == \"__main__\":\n    main()\n```\n\n_Full example: [examples/snippets/servers/direct_execution.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/direct_execution.py)_\n\u003c!-- /snippet-source --\u003e\n\nRun it with:\n\n```bash\npython servers/direct_execution.py\n# or\nuv run mcp run servers/direct_execution.py\n```\n\nNote that `uv run mcp run` or `uv run mcp dev` only supports server using MCPServer and not the low-level server variant.\n\n### Streamable HTTP Transport\n\n\u003e **Note**: Streamable HTTP transport is the recommended transport for production deployments. Use `stateless_http=True` and `json_response=True` for optimal scalability.\n\n\u003c!-- snippet-source examples/snippets/servers/streamable_config.py --\u003e\n```python\n\"\"\"Run from the repository root:\nuv run examples/snippets/servers/streamable_config.py\n\"\"\"\n\nfrom mcp.server.mcpserver import MCPServer\n\nmcp = MCPServer(\"StatelessServer\")\n\n\n# Add a simple tool to demonstrate the server\n@mcp.tool()\ndef greet(name: str = \"World\") -\u003e str:\n    \"\"\"Greet someone by name.\"\"\"\n    return f\"Hello, {name}!\"\n\n\n# Run server with streamable_http transport\n# Transport-specific options (stateless_http, json_response) are passed to run()\nif __name__ == \"__main__\":\n    # Stateless server with JSON responses (recommended)\n    mcp.run(transport=\"streamable-http\", stateless_http=True, json_response=True)\n\n    # Other configuration options:\n    # Stateless server with SSE streaming responses\n    # mcp.run(transport=\"streamable-http\", stateless_http=True)\n\n    # Stateful server with session persistence\n    # mcp.run(transport=\"streamable-http\")\n```\n\n_Full example: [examples/snippets/servers/streamable_config.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_config.py)_\n\u003c!-- /snippet-source --\u003e\n\nYou can mount multiple MCPServer servers in a Starlette application:\n\n\u003c!-- snippet-source examples/snippets/servers/streamable_starlette_mount.py --\u003e\n```python\n\"\"\"Run from the repository root:\nuvicorn examples.snippets.servers.streamable_starlette_mount:app --reload\n\"\"\"\n\nimport contextlib\n\nfrom starlette.applications import Starlette\nfrom starlette.routing import Mount\n\nfrom mcp.server.mcpserver import MCPServer\n\n# Create the Echo server\necho_mcp = MCPServer(name=\"EchoServer\")\n\n\n@echo_mcp.tool()\ndef echo(message: str) -\u003e str:\n    \"\"\"A simple echo tool\"\"\"\n    return f\"Echo: {message}\"\n\n\n# Create the Math server\nmath_mcp = MCPServer(name=\"MathServer\")\n\n\n@math_mcp.tool()\ndef add_two(n: int) -\u003e int:\n    \"\"\"Tool to add two to the input\"\"\"\n    return n + 2\n\n\n# Create a combined lifespan to manage both session managers\n@contextlib.asynccontextmanager\nasync def lifespan(app: Starlette):\n    async with contextlib.AsyncExitStack() as stack:\n        await stack.enter_async_context(echo_mcp.session_manager.run())\n        await stack.enter_async_context(math_mcp.session_manager.run())\n        yield\n\n\n# Create the Starlette app and mount the MCP servers\napp = Starlette(\n    routes=[\n        Mount(\"/echo\", echo_mcp.streamable_http_app(stateless_http=True, json_response=True)),\n        Mount(\"/math\", math_mcp.streamable_http_app(stateless_http=True, json_response=True)),\n    ],\n    lifespan=lifespan,\n)\n\n# Note: Clients connect to http://localhost:8000/echo/mcp and http://localhost:8000/math/mcp\n# To mount at the root of each path (e.g., /echo instead of /echo/mcp):\n# echo_mcp.streamable_http_app(streamable_http_path=\"/\", stateless_http=True, json_response=True)\n# math_mcp.streamable_http_app(streamable_http_path=\"/\", stateless_http=True, json_response=True)\n```\n\n_Full example: [examples/snippets/servers/streamable_starlette_mount.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_starlette_mount.py)_\n\u003c!-- /snippet-source --\u003e\n\nFor low level server with Streamable HTTP implementations, see:\n\n- Stateful server: [`examples/servers/simple-streamablehttp/`](examples/servers/simple-streamablehttp/)\n- Stateless server: [`examples/servers/simple-streamablehttp-stateless/`](examples/servers/simple-streamablehttp-stateless/)\n\nThe streamable HTTP transport supports:\n\n- Stateful and stateless operation modes\n- Resumability with event stores\n- JSON or SSE response formats\n- Better scalability for multi-node deployments\n\n#### CORS Configuration for Browser-Based Clients\n\nIf you'd like your server to be accessible by browser-based MCP clients, you'll need to configure CORS headers. The `Mcp-Session-Id` header must be exposed for browser clients to access it:\n\n```python\nfrom starlette.applications import Starlette\nfrom starlette.middleware.cors import CORSMiddleware\n\n# Create your Starlette app first\nstarlette_app = Starlette(routes=[...])\n\n# Then wrap it with CORS middleware\nstarlette_app = CORSMiddleware(\n    starlette_app,\n    allow_origins=[\"*\"],  # Configure appropriately for production\n    allow_methods=[\"GET\", \"POST\", \"DELETE\"],  # MCP streamable HTTP methods\n    expose_headers=[\"Mcp-Session-Id\"],\n)\n```\n\nThis configuration is necessary because:\n\n- The MCP streamable HTTP transport uses the `Mcp-Session-Id` header for session management\n- Browsers restrict access to response headers unless explicitly exposed via CORS\n- Without this configuration, browser-based clients won't be able to read the session ID from initialization responses\n\n### Mounting to an Existing ASGI Server\n\nBy default, SSE servers are mounted at `/sse` and Streamable HTTP servers are mounted at `/mcp`. You can customize these paths using the methods described below.\n\nFor more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes).\n\n#### StreamableHTTP servers\n\nYou can mount the StreamableHTTP server to an existing ASGI server using the `streamable_http_app` method. This allows you to integrate the StreamableHTTP server with other ASGI applications.\n\n##### Basic mounting\n\n\u003c!-- snippet-source examples/snippets/servers/streamable_http_basic_mounting.py --\u003e\n```python\n\"\"\"Basic example showing how to mount StreamableHTTP server in Starlette.\n\nRun from the repository root:\n    uvicorn examples.snippets.servers.streamable_http_basic_mounting:app --reload\n\"\"\"\n\nimport contextlib\n\nfrom starlette.applications import Starlette\nfrom starlette.routing import Mount\n\nfrom mcp.server.mcpserver import MCPServer\n\n# Create MCP server\nmcp = MCPServer(\"My App\")\n\n\n@mcp.tool()\ndef hello() -\u003e str:\n    \"\"\"A simple hello tool\"\"\"\n    return \"Hello from MCP!\"\n\n\n# Create a lifespan context manager to run the session manager\n@contextlib.asynccontextmanager\nasync def lifespan(app: Starlette):\n    async with mcp.session_manager.run():\n        yield\n\n\n# Mount the StreamableHTTP server to the existing ASGI server\n# Transport-specific options are passed to streamable_http_app()\napp = Starlette(\n    routes=[\n        Mount(\"/\", app=mcp.streamable_http_app(json_response=True)),\n    ],\n    lifespan=lifespan,\n)\n```\n\n_Full example: [examples/snippets/servers/streamable_http_basic_mounting.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_http_basic_mounting.py)_\n\u003c!-- /snippet-source --\u003e\n\n##### Host-based routing\n\n\u003c!-- snippet-source examples/snippets/servers/streamable_http_host_mounting.py --\u003e\n```python\n\"\"\"Example showing how to mount StreamableHTTP server using Host-based routing.\n\nRun from the repository root:\n    uvicorn examples.snippets.servers.streamable_http_host_mounting:app --reload\n\"\"\"\n\nimport contextlib\n\nfrom starlette.applications import Starlette\nfrom starlette.routing import Host\n\nfrom mcp.server.mcpserver import MCPServer\n\n# Create MCP server\nmcp = MCPServer(\"MCP Host App\")\n\n\n@mcp.tool()\ndef domain_info() -\u003e str:\n    \"\"\"Get domain-specific information\"\"\"\n    return \"This is served from mcp.acme.corp\"\n\n\n# Create a lifespan context manager to run the session manager\n@contextlib.asynccontextmanager\nasync def lifespan(app: Starlette):\n    async with mcp.session_manager.run():\n        yield\n\n\n# Mount using Host-based routing\n# Transport-specific options are passed to streamable_http_app()\napp = Starlette(\n    routes=[\n        Host(\"mcp.acme.corp\", app=mcp.streamable_http_app(json_response=True)),\n    ],\n    lifespan=lifespan,\n)\n```\n\n_Full example: [examples/snippets/servers/streamable_http_host_mounting.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_http_host_mounting.py)_\n\u003c!-- /snippet-source --\u003e\n\n##### Multiple servers with path configuration\n\n\u003c!-- snippet-source examples/snippets/servers/streamable_http_multiple_servers.py --\u003e\n```python\n\"\"\"Example showing how to mount multiple StreamableHTTP servers with path configuration.\n\nRun from the repository root:\n    uvicorn examples.snippets.servers.streamable_http_multiple_servers:app --reload\n\"\"\"\n\nimport contextlib\n\nfrom starlette.applications import Starlette\nfrom starlette.routing import Mount\n\nfrom mcp.server.mcpserver import MCPServer\n\n# Create multiple MCP servers\napi_mcp = MCPServer(\"API Server\")\nchat_mcp = MCPServer(\"Chat Server\")\n\n\n@api_mcp.tool()\ndef api_status() -\u003e str:\n    \"\"\"Get API status\"\"\"\n    return \"API is running\"\n\n\n@chat_mcp.tool()\ndef send_message(message: str) -\u003e str:\n    \"\"\"Send a chat message\"\"\"\n    return f\"Message sent: {message}\"\n\n\n# Create a combined lifespan to manage both session managers\n@contextlib.asynccontextmanager\nasync def lifespan(app: Starlette):\n    async with contextlib.AsyncExitStack() as stack:\n        await stack.enter_async_context(api_mcp.session_manager.run())\n        await stack.enter_async_context(chat_mcp.session_manager.run())\n        yield\n\n\n# Mount the servers with transport-specific options passed to streamable_http_app()\n# streamable_http_path=\"/\" means endpoints will be at /api and /chat instead of /api/mcp and /chat/mcp\napp = Starlette(\n    routes=[\n        Mount(\"/api\", app=api_mcp.streamable_http_app(json_response=True, streamable_http_path=\"/\")),\n        Mount(\"/chat\", app=chat_mcp.streamable_http_app(json_response=True, streamable_http_path=\"/\")),\n    ],\n    lifespan=lifespan,\n)\n```\n\n_Full example: [examples/snippets/servers/streamable_http_multiple_servers.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_http_multiple_servers.py)_\n\u003c!-- /snippet-source --\u003e\n\n##### Path configuration at initialization\n\n\u003c!-- snippet-source examples/snippets/servers/streamable_http_path_config.py --\u003e\n```python\n\"\"\"Example showing path configuration when mounting MCPServer.\n\nRun from the repository root:\n    uvicorn examples.snippets.servers.streamable_http_path_config:app --reload\n\"\"\"\n\nfrom starlette.applications import Starlette\nfrom starlette.routing import Mount\n\nfrom mcp.server.mcpserver import MCPServer\n\n# Create a simple MCPServer server\nmcp_at_root = MCPServer(\"My Server\")\n\n\n@mcp_at_root.tool()\ndef process_data(data: str) -\u003e str:\n    \"\"\"Process some data\"\"\"\n    return f\"Processed: {data}\"\n\n\n# Mount at /process with streamable_http_path=\"/\" so the endpoint is /process (not /process/mcp)\n# Transport-specific options like json_response are passed to streamable_http_app()\napp = Starlette(\n    routes=[\n        Mount(\n            \"/process\",\n            app=mcp_at_root.streamable_http_app(json_response=True, streamable_http_path=\"/\"),\n        ),\n    ]\n)\n```\n\n_Full example: [examples/snippets/servers/streamable_http_path_config.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_http_path_config.py)_\n\u003c!-- /snippet-source --\u003e\n\n#### SSE servers\n\n\u003e **Note**: SSE transport is being superseded by [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http).\n\nYou can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications.\n\n```python\nfrom starlette.applications import Starlette\nfrom starlette.routing import Mount, Host\nfrom mcp.server.mcpserver import MCPServer\n\n\nmcp = MCPServer(\"My App\")\n\n# Mount the SSE server to the existing ASGI server\napp = Starlette(\n    routes=[\n        Mount('/', app=mcp.sse_app()),\n    ]\n)\n\n# or dynamically mount as host\napp.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))\n```\n\nYou can also mount multiple MCP servers at different sub-paths. The SSE transport automatically detects the mount path via ASGI's `root_path` mechanism, so message endpoints are correctly routed:\n\n```python\nfrom starlette.applications import Starlette\nfrom starlette.routing import Mount\nfrom mcp.server.mcpserver import MCPServer\n\n# Create multiple MCP servers\ngithub_mcp = MCPServer(\"GitHub API\")\nbrowser_mcp = MCPServer(\"Browser\")\nsearch_mcp = MCPServer(\"Search\")\n\n# Mount each server at its own sub-path\n# The SSE transport automatically uses ASGI's root_path to construct\n# the correct message endpoint (e.g., /github/messages/, /browser/messages/)\napp = Starlette(\n    routes=[\n        Mount(\"/github\", app=github_mcp.sse_app()),\n        Mount(\"/browser\", app=browser_mcp.sse_app()),\n        Mount(\"/search\", app=search_mcp.sse_app()),\n    ]\n)\n```\n\nFor more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes).\n\n## Advanced Usage\n\n### Low-Level Server\n\nFor more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:\n\n\u003c!-- snippet-source examples/snippets/servers/lowlevel/lifespan.py --\u003e\n```python\n\"\"\"Run from the repository root:\nuv run examples/snippets/servers/lowlevel/lifespan.py\n\"\"\"\n\nfrom collections.abc import AsyncIterator\nfrom contextlib import asynccontextmanager\nfrom typing import Any\n\nimport mcp.server.stdio\nfrom mcp import types\nfrom mcp.server.lowlevel import NotificationOptions, Server\nfrom mcp.server.models import InitializationOptions\n\n\n# Mock database class for example\nclass Database:\n    \"\"\"Mock database class for example.\"\"\"\n\n    @classmethod\n    async def connect(cls) -\u003e \"Database\":\n        \"\"\"Connect to database.\"\"\"\n        print(\"Database connected\")\n        return cls()\n\n    async def disconnect(self) -\u003e None:\n        \"\"\"Disconnect from database.\"\"\"\n        print(\"Database disconnected\")\n\n    async def query(self, query_str: str) -\u003e list[dict[str, str]]:\n        \"\"\"Execute a query.\"\"\"\n        # Simulate database query\n        return [{\"id\": \"1\", \"name\": \"Example\", \"query\": query_str}]\n\n\n@asynccontextmanager\nasync def server_lifespan(_server: Server) -\u003e AsyncIterator[dict[str, Any]]:\n    \"\"\"Manage server startup and shutdown lifecycle.\"\"\"\n    # Initialize resources on startup\n    db = await Database.connect()\n    try:\n        yield {\"db\": db}\n    finally:\n        # Clean up on shutdown\n        await db.disconnect()\n\n\n# Pass lifespan to server\nserver = Server(\"example-server\", lifespan=server_lifespan)\n\n\n@server.list_tools()\nasync def handle_list_tools() -\u003e list[types.Tool]:\n    \"\"\"List available tools.\"\"\"\n    return [\n        types.Tool(\n            name=\"query_db\",\n            description=\"Query the database\",\n            input_schema={\n                \"type\": \"object\",\n                \"properties\": {\"query\": {\"type\": \"string\", \"description\": \"SQL query to execute\"}},\n                \"required\": [\"query\"],\n            },\n        )\n    ]\n\n\n@server.call_tool()\nasync def query_db(name: str, arguments: dict[str, Any]) -\u003e list[types.TextContent]:\n    \"\"\"Handle database query tool call.\"\"\"\n    if name != \"query_db\":\n        raise ValueError(f\"Unknown tool: {name}\")\n\n    # Access lifespan context\n    ctx = server.request_context\n    db = ctx.lifespan_context[\"db\"]\n\n    # Execute query\n    results = await db.query(arguments[\"query\"])\n\n    return [types.TextContent(type=\"text\", text=f\"Query results: {results}\")]\n\n\nasync def run():\n    \"\"\"Run the server with lifespan management.\"\"\"\n    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):\n        await server.run(\n            read_stream,\n            write_stream,\n            InitializationOptions(\n                server_name=\"example-server\",\n                server_version=\"0.1.0\",\n                capabilities=server.get_capabilities(\n                    notification_options=NotificationOptions(),\n                    experimental_capabilities={},\n                ),\n            ),\n        )\n\n\nif __name__ == \"__main__\":\n    import asyncio\n\n    asyncio.run(run())\n```\n\n_Full example: [examples/snippets/servers/lowlevel/lifespan.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/lifespan.py)_\n\u003c!-- /snippet-source --\u003e\n\nThe lifespan API provides:\n\n- A way to initialize resources when the server starts and clean them up when it stops\n- Access to initialized resources through the request context in handlers\n- Type-safe context passing between lifespan and request handlers\n\n\u003c!-- snippet-source examples/snippets/servers/lowlevel/basic.py --\u003e\n```python\n\"\"\"Run from the repository root:\nuv run examples/snippets/servers/lowlevel/basic.py\n\"\"\"\n\nimport asyncio\n\nimport mcp.server.stdio\nfrom mcp import types\nfrom mcp.server.lowlevel import NotificationOptions, Server\nfrom mcp.server.models import InitializationOptions\n\n# Create a server instance\nserver = Server(\"example-server\")\n\n\n@server.list_prompts()\nasync def handle_list_prompts() -\u003e list[types.Prompt]:\n    \"\"\"List available prompts.\"\"\"\n    return [\n        types.Prompt(\n            name=\"example-prompt\",\n            description=\"An example prompt template\",\n            arguments=[types.PromptArgument(name=\"arg1\", description=\"Example argument\", required=True)],\n        )\n    ]\n\n\n@server.get_prompt()\nasync def handle_get_prompt(name: str, arguments: dict[str, str] | None) -\u003e types.GetPromptResult:\n    \"\"\"Get a specific prompt by name.\"\"\"\n    if name != \"example-prompt\":\n        raise ValueError(f\"Unknown prompt: {name}\")\n\n    arg1_value = (arguments or {}).get(\"arg1\", \"default\")\n\n    return types.GetPromptResult(\n        description=\"Example prompt\",\n        messages=[\n            types.PromptMessage(\n                role=\"user\",\n                content=types.TextContent(type=\"text\", text=f\"Example prompt text with argument: {arg1_value}\"),\n            )\n        ],\n    )\n\n\nasync def run():\n    \"\"\"Run the basic low-level server.\"\"\"\n    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):\n        await server.run(\n            read_stream,\n            write_stream,\n            InitializationOptions(\n                server_name=\"example\",\n                server_version=\"0.1.0\",\n                capabilities=server.get_capabilities(\n                    notification_options=NotificationOptions(),\n                    experimental_capabilities={},\n                ),\n            ),\n        )\n\n\nif __name__ == \"__main__\":\n    asyncio.run(run())\n```\n\n_Full example: [examples/snippets/servers/lowlevel/basic.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/basic.py)_\n\u003c!-- /snippet-source --\u003e\n\nCaution: The `uv run mcp run` and `uv run mcp dev` tool doesn't support low-level server.\n\n#### Structured Output Support\n\nThe low-level server supports structured output for tools, allowing you to return both human-readable content and machine-readable structured data. Tools can define an `outputSchema` to validate their structured output:\n\n\u003c!-- snippet-source examples/snippets/servers/lowlevel/structured_output.py --\u003e\n```python\n\"\"\"Run from the repository root:\nuv run examples/snippets/servers/lowlevel/structured_output.py\n\"\"\"\n\nimport asyncio\nfrom typing import Any\n\nimport mcp.server.stdio\nfrom mcp import types\nfrom mcp.server.lowlevel import NotificationOptions, Server\nfrom mcp.server.models import InitializationOptions\n\nserver = Server(\"example-server\")\n\n\n@server.list_tools()\nasync def list_tools() -\u003e list[types.Tool]:\n    \"\"\"List available tools with structured output schemas.\"\"\"\n    return [\n        types.Tool(\n            name=\"get_weather\",\n            description=\"Get current weather for a city\",\n            input_schema={\n                \"type\": \"object\",\n                \"properties\": {\"city\": {\"type\": \"string\", \"description\": \"City name\"}},\n                \"required\": [\"city\"],\n            },\n            output_schema={\n                \"type\": \"object\",\n                \"properties\": {\n                    \"temperature\": {\"type\": \"number\", \"description\": \"Temperature in Celsius\"},\n                    \"condition\": {\"type\": \"string\", \"description\": \"Weather condition\"},\n                    \"humidity\": {\"type\": \"number\", \"description\": \"Humidity percentage\"},\n                    \"city\": {\"type\": \"string\", \"description\": \"City name\"},\n                },\n                \"required\": [\"temperature\", \"condition\", \"humidity\", \"city\"],\n            },\n        )\n    ]\n\n\n@server.call_tool()\nasync def call_tool(name: str, arguments: dict[str, Any]) -\u003e dict[str, Any]:\n    \"\"\"Handle tool calls with structured output.\"\"\"\n    if name == \"get_weather\":\n        city = arguments[\"city\"]\n\n        # Simulated weather data - in production, call a weather API\n        weather_data = {\n            \"temperature\": 22.5,\n            \"condition\": \"partly cloudy\",\n            \"humidity\": 65,\n            \"city\": city,  # Include the requested city\n        }\n\n        # low-level server will validate structured output against the tool's\n        # output schema, and additionally serialize it into a TextContent block\n        # for backwards compatibility with pre-2025-06-18 clients.\n        return weather_data\n    else:\n        raise ValueError(f\"Unknown tool: {name}\")\n\n\nasync def run():\n    \"\"\"Run the structured output server.\"\"\"\n    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):\n        await server.run(\n            read_stream,\n            write_stream,\n            InitializationOptions(\n                server_name=\"structured-output-example\",\n                server_version=\"0.1.0\",\n                capabilities=server.get_capabilities(\n                    notification_options=NotificationOptions(),\n                    experimental_capabilities={},\n                ),\n            ),\n        )\n\n\nif __name__ == \"__main__\":\n    asyncio.run(run())\n```\n\n_Full example: [examples/snippets/servers/lowlevel/structured_output.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/structured_output.py)_\n\u003c!-- /snippet-source --\u003e\n\nTools can return data in four ways:\n\n1. **Content only**: Return a list of content blocks (default behavior before spec revision 2025-06-18)\n2. **Structured data only**: Return a dictionary that will be serialized to JSON (Introduced in spec revision 2025-06-18)\n3. **Both**: Return a tuple of (content, structured_data) preferred option to use for backwards compatibility\n4. **Direct CallToolResult**: Return `CallToolResult` directly for full control (including `_meta` field)\n\nWhen an `outputSchema` is defined, the server automatically validates the structured output against the schema. This ensures type safety and helps catch errors early.\n\n##### Returning CallToolResult Directly\n\nFor full control over the response including the `_meta` field (for passing data to client applications without exposing it to the model), return `CallToolResult` directly:\n\n\u003c!-- snippet-source examples/snippets/servers/lowlevel/direct_call_tool_result.py --\u003e\n```python\n\"\"\"Run from the repository root:\nuv run examples/snippets/servers/lowlevel/direct_call_tool_result.py\n\"\"\"\n\nimport asyncio\nfrom typing import Any\n\nimport mcp.server.stdio\nfrom mcp import types\nfrom mcp.server.lowlevel import NotificationOptions, Server\nfrom mcp.server.models import InitializationOptions\n\nserver = Server(\"example-server\")\n\n\n@server.list_tools()\nasync def list_tools() -\u003e list[types.Tool]:\n    \"\"\"List available tools.\"\"\"\n    return [\n        types.Tool(\n            name=\"advanced_tool\",\n            description=\"Tool with full control including _meta field\",\n            input_schema={\n                \"type\": \"object\",\n                \"properties\": {\"message\": {\"type\": \"string\"}},\n                \"required\": [\"message\"],\n            },\n        )\n    ]\n\n\n@server.call_tool()\nasync def handle_call_tool(name: str, arguments: dict[str, Any]) -\u003e types.CallToolResult:\n    \"\"\"Handle tool calls by returning CallToolResult directly.\"\"\"\n    if name == \"advanced_tool\":\n        message = str(arguments.get(\"message\", \"\"))\n        return types.CallToolResult(\n            content=[types.TextContent(type=\"text\", text=f\"Processed: {message}\")],\n            structured_content={\"result\": \"success\", \"message\": message},\n            _meta={\"hidden\": \"data for client applications only\"},\n        )\n\n    raise ValueError(f\"Unknown tool: {name}\")\n\n\nasync def run():\n    \"\"\"Run the server.\"\"\"\n    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):\n        await server.run(\n            read_stream,\n            write_stream,\n            InitializationOptions(\n                server_name=\"example\",\n                server_version=\"0.1.0\",\n                capabilities=server.get_capabilities(\n                    notification_options=NotificationOptions(),\n                    experimental_capabilities={},\n                ),\n            ),\n        )\n\n\nif __name__ == \"__main__\":\n    asyncio.run(run())\n```\n\n_Full example: [examples/snippets/servers/lowlevel/direct_call_tool_result.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/direct_call_tool_result.py)_\n\u003c!-- /snippet-source --\u003e\n\n**Note:** When returning `CallToolResult`, you bypass the automatic content/structured conversion. You must construct the complete response yourself.\n\n### Pagination (Advanced)\n\nFor servers that need to handle large datasets, the low-level server provides paginated versions of list operations. This is an optional optimization - most servers won't need pagination unless they're dealing with hundreds or thousands of items.\n\n#### Server-side Implementation\n\n\u003c!-- snippet-source examples/snippets/servers/pagination_example.py --\u003e\n```python\n\"\"\"Example of implementing pagination with MCP server decorators.\"\"\"\n\nfrom mcp import types\nfrom mcp.server.lowlevel import Server\n\n# Initialize the server\nserver = Server(\"paginated-server\")\n\n# Sample data to paginate\nITEMS = [f\"Item {i}\" for i in range(1, 101)]  # 100 items\n\n\n@server.list_resources()\nasync def list_resources_paginated(request: types.ListResourcesRequest) -\u003e types.ListResourcesResult:\n    \"\"\"List resources with pagination support.\"\"\"\n    page_size = 10\n\n    # Extract cursor from request params\n    cursor = request.params.cursor if request.params is not None else None\n\n    # Parse cursor to get offset\n    start = 0 if cursor is None else int(cursor)\n    end = start + page_size\n\n    # Get page of resources\n    page_items = [\n        types.Resource(uri=f\"resource://items/{item}\", name=item, description=f\"Description for {item}\")\n        for item in ITEMS[start:end]\n    ]\n\n    # Determine next cursor\n    next_cursor = str(end) if end \u003c len(ITEMS) else None\n\n    return types.ListResourcesResult(resources=page_items, next_cursor=next_cursor)\n```\n\n_Full example: [examples/snippets/servers/pagination_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/pagination_example.py)_\n\u003c!-- /snippet-source --\u003e\n\n#### Client-side Consumption\n\n\u003c!-- snippet-source examples/snippets/clients/pagination_client.py --\u003e\n```python\n\"\"\"Example of consuming paginated MCP endpoints from a client.\"\"\"\n\nimport asyncio\n\nfrom mcp.client.session import ClientSession\nfrom mcp.client.stdio import StdioServerParameters, stdio_client\nfrom mcp.types import PaginatedRequestParams, Resource\n\n\nasync def list_all_resources() -\u003e None:\n    \"\"\"Fetch all resources using pagination.\"\"\"\n    async with stdio_client(StdioServerParameters(command=\"uv\", args=[\"run\", \"mcp-simple-pagination\"])) as (\n        read,\n        write,\n    ):\n        async with ClientSession(read, write) as session:\n            await session.initialize()\n\n            all_resources: list[Resource] = []\n            cursor = None\n\n            while True:\n                # Fetch a page of resources\n                result = await session.list_resources(params=PaginatedRequestParams(cursor=cursor))\n                all_resources.extend(result.resources)\n\n                print(f\"Fetched {len(result.resources)} resources\")\n\n                # Check if there are more pages\n                if result.next_cursor:\n                    cursor = result.next_cursor\n                else:\n                    break\n\n            print(f\"Total resources: {len(all_resources)}\")\n\n\nif __name__ == \"__main__\":\n    asyncio.run(list_all_resources())\n```\n\n_Full example: [examples/snippets/clients/pagination_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/pagination_client.py)_\n\u003c!-- /snippet-source --\u003e\n\n#### Key Points\n\n- **Cursors are opaque strings** - the server defines the format (numeric offsets, timestamps, etc.)\n- **Return `nextCursor=None`** when there are no more pages\n- **Backward compatible** - clients that don't support pagination will still work (they'll just get the first page)\n- **Flexible page sizes** - Each endpoint can define its own page size based on data characteristics\n\nSee the [simple-pagination example](examples/servers/simple-pagination) for a complete implementation.\n\n### Writing MCP Clients\n\nThe SDK provides a high-level client interface for connecting to MCP servers using various [transports](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports):\n\n\u003c!-- snippet-source examples/snippets/clients/stdio_client.py --\u003e\n```python\n\"\"\"cd to the `examples/snippets/clients` directory and run:\nuv run client\n\"\"\"\n\nimport asyncio\nimport os\n\nfrom mcp import ClientSession, StdioServerParameters, types\nfrom mcp.client.context import ClientRequestContext\nfrom mcp.client.stdio import stdio_client\n\n# Create server parameters for stdio connection\nserver_params = StdioServerParameters(\n    command=\"uv\",  # Using uv to run the server\n    args=[\"run\", \"server\", \"mcpserver_quickstart\", \"stdio\"],  # We're already in snippets dir\n    env={\"UV_INDEX\": os.environ.get(\"UV_INDEX\", \"\")},\n)\n\n\n# Optional: create a sampling callback\nasync def handle_sampling_message(\n    context: ClientRequestContext, params: types.CreateMessageRequestParams\n) -\u003e types.CreateMessageResult:\n    print(f\"Sampling request: {params.messages}\")\n    return types.CreateMessageResult(\n        role=\"assistant\",\n        content=types.TextContent(\n            type=\"text\",\n            text=\"Hello, world! from model\",\n        ),\n        model=\"gpt-3.5-turbo\",\n        stop_reason=\"endTurn\",\n    )\n\n\nasync def run():\n    async with stdio_client(server_params) as (read, write):\n        async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:\n            # Initialize the connection\n            await session.initialize()\n\n            # List available prompts\n            prompts = await session.list_prompts()\n            print(f\"Available prompts: {[p.name for p in prompts.prompts]}\")\n\n            # Get a prompt (greet_user prompt from mcpserver_quickstart)\n            if prompts.prompts:\n                prompt = await session.get_prompt(\"greet_user\", arguments={\"name\": \"Alice\", \"style\": \"friendly\"})\n                print(f\"Prompt result: {prompt.messages[0].content}\")\n\n            # List available resources\n            resources = await session.list_resources()\n            print(f\"Available resources: {[r.uri for r in resources.resources]}\")\n\n            # List available tools\n            tools = await session.list_tools()\n            print(f\"Available tools: {[t.name for t in tools.tools]}\")\n\n            # Read a resource (greeting resource from mcpserver_quickstart)\n            resource_content = await session.read_resource(\"greeting://World\")\n            content_block = resource_content.contents[0]\n            if isinstance(content_block, types.TextContent):\n                print(f\"Resource content: {content_block.text}\")\n\n            # Call a tool (add tool from mcpserver_quickstart)\n            result = await session.call_tool(\"add\", arguments={\"a\": 5, \"b\": 3})\n            result_unstructured = result.content[0]\n            if isinstance(result_unstructured, types.TextContent):\n                print(f\"Tool result: {result_unstructured.text}\")\n            result_structured = result.structured_content\n            print(f\"Structured tool result: {result_structured}\")\n\n\ndef main():\n    \"\"\"Entry point for the client script.\"\"\"\n    asyncio.run(run())\n\n\nif __name__ == \"__main__\":\n    main()\n```\n\n_Full example: [examples/snippets/clients/stdio_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/stdio_client.py)_\n\u003c!-- /snippet-source --\u003e\n\nClients can also connect using [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http):\n\n\u003c!-- snippet-source examples/snippets/clients/streamable_basic.py --\u003e\n```python\n\"\"\"Run from the repository root:\nuv run examples/snippets/clients/streamable_basic.py\n\"\"\"\n\nimport asyncio\n\nfrom mcp import ClientSession\nfrom mcp.client.streamable_http import streamable_http_client\n\n\nasync def main():\n    # Connect to a streamable HTTP server\n    async with streamable_http_client(\"http://localhost:8000/mcp\") as (\n        read_stream,\n        write_stream,\n        _,\n    ):\n        # Create a session using the client streams\n        async with ClientSession(read_stream, write_stream) as session:\n            # Initialize the connection\n            await session.initialize()\n            # List available tools\n            tools = await session.list_tools()\n            print(f\"Available tools: {[tool.name for tool in tools.tools]}\")\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n_Full example: [examples/snippets/clients/streamable_basic.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/streamable_basic.py)_\n\u003c!-- /snippet-source --\u003e\n\n### Client Display Utilities\n\nWhen building MCP clients, the SDK provides utilities to help display human-readable names for tools, resources, and prompts:\n\n\u003c!-- snippet-source examples/snippets/clients/display_utilities.py --\u003e\n```python\n\"\"\"cd to the `examples/snippets` directory and run:\nuv run display-utilities-client\n\"\"\"\n\nimport asyncio\nimport os\n\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\nfrom mcp.shared.metadata_utils import get_display_name\n\n# Create server parameters for stdio connection\nserver_params = StdioServerParameters(\n    command=\"uv\",  # Using uv to run the server\n    args=[\"run\", \"server\", \"mcpserver_quickstart\", \"stdio\"],\n    env={\"UV_INDEX\": os.environ.get(\"UV_INDEX\", \"\")},\n)\n\n\nasync def display_tools(session: ClientSession):\n    \"\"\"Display available tools with human-readable names\"\"\"\n    tools_response = await session.list_tools()\n\n    for tool in tools_response.tools:\n        # get_display_name() returns the title if available, otherwise the name\n        display_name = get_display_name(tool)\n        print(f\"Tool: {display_name}\")\n        if tool.description:\n            print(f\"   {tool.description}\")\n\n\nasync def display_resources(session: ClientSession):\n    \"\"\"Display available resources with human-readable names\"\"\"\n    resources_response = await session.list_resources()\n\n    for resource in resources_response.resources:\n        display_name = get_display_name(resource)\n        print(f\"Resource: {display_name} ({resource.uri})\")\n\n    templates_response = await session.list_resource_templates()\n    for template in templates_response.resource_templates:\n        display_name = get_display_name(template)\n        print(f\"Resource Template: {display_name}\")\n\n\nasync def run():\n    \"\"\"Run the display utilities example.\"\"\"\n    async with stdio_client(server_params) as (read, write):\n        async with ClientSession(read, write) as session:\n            # Initialize the connection\n            await session.initialize()\n\n            print(\"=== Available Tools ===\")\n            await display_tools(session)\n\n            print(\"\\n=== Available Resources ===\")\n            await display_resources(session)\n\n\ndef main():\n    \"\"\"Entry point for the display utilities client.\"\"\"\n    asyncio.run(run())\n\n\nif __name__ == \"__main__\":\n    main()\n```\n\n_Full example: [examples/snippets/clients/display_utilities.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/display_utilities.py)_\n\u003c!-- /snippet-source --\u003e\n\nThe `get_display_name()` function implements the proper precedence rules for displaying names:\n\n- For tools: `title` \u003e `annotations.title` \u003e `name`\n- For other objects: `title` \u003e `name`\n\nThis ensures your client UI shows the most user-friendly names that servers provide.\n\n### OAuth Authentication for Clients\n\nThe SDK includes [authorization support](https://modelcontextprotocol.io/specification/2025-03-26/basic/authorization) for connecting to protected MCP servers:\n\n\u003c!-- snippet-source examples/snippets/clients/oauth_client.py --\u003e\n```python\n\"\"\"Before running, specify running MCP RS server URL.\nTo spin up RS server locally, see\n    examples/servers/simple-auth/README.md\n\ncd to the `examples/snippets` directory and run:\n    uv run oauth-client\n\"\"\"\n\nimport asyncio\nfrom urllib.parse import parse_qs, urlparse\n\nimport httpx\nfrom pydantic import AnyUrl\n\nfrom mcp import ClientSession\nfrom mcp.client.auth import OAuthClientProvider, TokenStorage\nfrom mcp.client.streamable_http import streamable_http_client\nfrom mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken\n\n\nclass InMemoryTokenStorage(TokenStorage):\n    \"\"\"Demo In-memory token storage implementation.\"\"\"\n\n    def __init__(self):\n        self.tokens: OAuthToken | None = None\n        self.client_info: OAuthClientInformationFull | None = None\n\n    async def get_tokens(self) -\u003e OAuthToken | None:\n        \"\"\"Get stored tokens.\"\"\"\n        return self.tokens\n\n    async def set_tokens(self, tokens: OAuthToken) -\u003e None:\n        \"\"\"Store tokens.\"\"\"\n        self.tokens = tokens\n\n    async def get_client_info(self) -\u003e OAuthClientInformationFull | None:\n        \"\"\"Get stored client information.\"\"\"\n        return self.client_info\n\n    async def set_client_info(self, client_info: OAuthClientInformationFull) -\u003e None:\n        \"\"\"Store client information.\"\"\"\n        self.client_info = client_info\n\n\nasync def handle_redirect(auth_url: str) -\u003e None:\n    print(f\"Visit: {auth_url}\")\n\n\nasync def handle_callback() -\u003e tuple[str, str | None]:\n    callback_url = input(\"Paste callback URL: \")\n    params = parse_qs(urlparse(callback_url).query)\n    return params[\"code\"][0], params.get(\"state\", [None])[0]\n\n\nasync def main():\n    \"\"\"Run the OAuth client example.\"\"\"\n    oauth_auth = OAuthClientProvider(\n        server_url=\"http://localhost:8001\",\n        client_metadata=OAuthClientMetadata(\n            client_name=\"Example MCP Client\",\n            redirect_uris=[AnyUrl(\"http://localhost:3000/callback\")],\n            grant_types=[\"authorization_code\", \"refresh_token\"],\n            response_types=[\"code\"],\n            scope=\"user\",\n        ),\n        storage=InMemoryTokenStorage(),\n        redirect_handler=handle_redirect,\n        callback_handler=handle_callback,\n    )\n\n    async with httpx.AsyncClient(auth=oauth_auth, follow_redirects=True) as custom_client:\n        async with streamable_http_client(\"http://localhost:8001/mcp\", http_client=custom_client) as (read, write, _):\n            async with ClientSession(read, write) as session:\n                await session.initialize()\n\n                tools = await session.list_tools()\n                print(f\"Available tools: {[tool.name for tool in tools.tools]}\")\n\n                resources = await session.list_resources()\n                print(f\"Available resources: {[r.uri for r in resources.resources]}\")\n\n\ndef run():\n    asyncio.run(main())\n\n\nif __name__ == \"__main__\":\n    run()\n```\n\n_Full example: [examples/snippets/clients/oauth_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/oauth_client.py)_\n\u003c!-- /snippet-source --\u003e\n\nFor a complete working example, see [`examples/clients/simple-auth-client/`](examples/clients/simple-auth-client/).\n\n### Parsing Tool Results\n\nWhen calling tools through MCP, the `CallToolResult` object contains the tool's response in a structured format. Understanding how to parse this result is essential for properly handling tool outputs.\n\n```python\n\"\"\"examples/snippets/clients/parsing_tool_results.py\"\"\"\n\nimport asyncio\n\nfrom mcp import ClientSession, StdioServerParameters, types\nfrom mcp.client.stdio import stdio_client\n\n\nasync def parse_tool_results():\n    \"\"\"Demonstrates how to parse different types of content in CallToolResult.\"\"\"\n    server_params = StdioServerParameters(\n        command=\"python\", args=[\"path/to/mcp_server.py\"]\n    )\n\n    async with stdio_client(server_params) as (read, write):\n        async with ClientSession(read, write) as session:\n            await session.initialize()\n\n            # Example 1: Parsing text content\n            result = await session.call_tool(\"get_data\", {\"format\": \"text\"})\n            for content in result.content:\n                if isinstance(content, types.TextContent):\n                    print(f\"Text: {content.text}\")\n\n            # Example 2: Parsing structured content from JSON tools\n            result = await session.call_tool(\"get_user\", {\"id\": \"123\"})\n            if hasattr(result, \"structuredContent\") and result.structuredContent:\n                # Access structured data directly\n                user_data = result.structuredContent\n                print(f\"User: {user_data.get('name')}, Age: {user_data.get('age')}\")\n\n            # Example 3: Parsing embedded resources\n            result = await session.call_tool(\"read_config\", {})\n            for content in result.content:\n                if isinstance(content, types.EmbeddedResource):\n                    resource = content.resource\n                    if isinstance(resource, types.TextResourceContents):\n                        print(f\"Config from {resource.uri}: {resource.text}\")\n                    elif isinstance(resource, types.BlobResourceContents):\n                        print(f\"Binary data from {resource.uri}\")\n\n            # Example 4: Parsing image content\n            result = await session.call_tool(\"generate_chart\", {\"data\": [1, 2, 3]})\n            for content in result.content:\n                if isinstance(content, types.ImageContent):\n                    print(f\"Image ({content.mimeType}): {len(content.data)} bytes\")\n\n            # Example 5: Handling errors\n            result = await session.call_tool(\"failing_tool\", {})\n            if result.isError:\n                print(\"Tool execution failed!\")\n                for content in result.content:\n                    if isinstance(content, types.TextContent):\n                        print(f\"Error: {content.text}\")\n\n\nasync def main():\n    await parse_tool_results()\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n### MCP Primitives\n\nThe MCP protocol defines three core primitives that servers can implement:\n\n| Primitive | Control               | Description                                         | Example Use                  |\n|-----------|-----------------------|-----------------------------------------------------|------------------------------|\n| Prompts   | User-controlled       | Interactive templates invoked by user choice        | Slash commands, menu options |\n| Resources | Application-controlled| Contextual data managed by the client application   | File contents, API responses |\n| Tools     | Model-controlled      | Functions exposed to the LLM to take actions        | API calls, data updates      |\n\n### Server Capabilities\n\nMCP servers declare capabilities during initialization:\n\n| Capability   | Feature Flag                 | Description                        |\n|--------------|------------------------------|------------------------------------|\n| `prompts`    | `listChanged`                | Prompt template management         |\n| `resources`  | `subscribe`\u003cbr/\u003e`listChanged`| Resource exposure and updates      |\n| `tools`      | `listChanged`                | Tool discovery and execution       |\n| `logging`    | -                            | Server logging configuration       |\n| `completions`| -                            | Argument completion suggestions    |\n\n## Documentation\n\n- [API Reference](https://modelcontextprotocol.github.io/python-sdk/api/)\n- [Experimental Features (Tasks)](https://modelcontextprotocol.github.io/python-sdk/experimental/tasks/)\n- [Model Context Protocol documentation](https://modelcontextprotocol.io)\n- [Model Context Protocol specification](https://modelcontextprotocol.io/specification/latest)\n- [Officially supported servers](https://github.com/modelcontextprotocol/servers)\n\n## Contributing\n\nWe are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.\n\n## License\n\nThis project is licensed under the MIT License - see the LICENSE file for details.\n","funding_links":[],"categories":["LLMs and ChatGPT"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/modelcontextprotocol.github.io%2Fpython-sdk%2F","html_url":"https://awesome.ecosyste.ms/projects/modelcontextprotocol.github.io%2Fpython-sdk%2F","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/modelcontextprotocol.github.io%2Fpython-sdk%2F/lists"}