# MCP Swift SDK

Official Swift SDK for the [Model Context Protocol][mcp] (MCP).

## Overview

The Model Context Protocol (MCP) defines a standardized way
for applications to communicate with AI and ML models.
This Swift SDK implements both client and server components
according to the [2025-03-26][mcp-spec-2025-03-26] (latest) version
of the MCP specification.

## Requirements

- Swift 6.0+ (Xcode 16+)

See the [Platform Availability](#platform-availability) section below
for platform-specific requirements.

## Installation

### Swift Package Manager

Add the following to your `Package.swift` file:

```swift
dependencies: [
    .package(url: "https://github.com/modelcontextprotocol/swift-sdk.git", from: "0.10.0")
]
```

Then add the dependency to your target:

```swift
.target(
    name: "YourTarget",
    dependencies: [
        .product(name: "MCP", package: "swift-sdk")
    ]
)
```
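Because the SDK is pre-1.0 and minor releases may contain breaking changes (see the Changelog section), you may prefer a narrower version requirement than `from:`. A sketch using SwiftPM's `.upToNextMinor(from:)` (the version number is illustrative):

```swift
dependencies: [
    // Accepts 0.10.x patch releases but not 0.11.0 or later,
    // shielding you from pre-1.0 breaking minor versions
    .package(
        url: "https://github.com/modelcontextprotocol/swift-sdk.git",
        .upToNextMinor(from: "0.10.0")
    )
]
```

The trade-off is that you must bump the requirement manually to pick up new features.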
## Client Usage

The client component allows your application to connect to MCP servers.

### Basic Client Setup

```swift
import MCP

// Initialize the client
let client = Client(name: "MyApp", version: "1.0.0")

// Create a transport and connect
let transport = StdioTransport()
let result = try await client.connect(transport: transport)

// Check server capabilities
if result.capabilities.tools != nil {
    // Server supports tools; the presence of the `tools` capability object
    // implies that tools can be listed and called
}
```

> [!NOTE]
> The `Client.connect(transport:)` method returns the initialization result.
> This return value is discardable,
> so you can ignore it if you don't need to check server capabilities.

### Transport Options for Clients

#### Stdio Transport

For local subprocess communication:

```swift
// Create a stdio transport (simplest option)
let transport = StdioTransport()
try await client.connect(transport: transport)
```

#### HTTP Transport

For remote server communication:

```swift
// Create a streaming HTTP transport
let transport = HTTPClientTransport(
    endpoint: URL(string: "http://localhost:8080")!,
    streaming: true  // Enable Server-Sent Events for real-time updates
)
try await client.connect(transport: transport)
```
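Because both transports go through the same `connect(transport:)` call, the choice between them can be made at runtime. A minimal sketch (the `--http` flag and endpoint URL are illustrative, not part of the SDK):

```swift
import Foundation
import MCP

// Pick a transport based on a hypothetical command-line flag
let transport: Transport
if CommandLine.arguments.contains("--http") {
    transport = HTTPClientTransport(
        endpoint: URL(string: "http://localhost:8080")!,
        streaming: true
    )
} else {
    transport = StdioTransport()
}

let client = Client(name: "MyApp", version: "1.0.0")
try await client.connect(transport: transport)
```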
### Tools

Tools represent functions that can be called by the client:

```swift
// List available tools
let (tools, cursor) = try await client.listTools()
print("Available tools: \(tools.map { $0.name }.joined(separator: ", "))")

// Call a tool with arguments
let (content, isError) = try await client.callTool(
    name: "image-generator",
    arguments: [
        "prompt": "A serene mountain landscape at sunset",
        "style": "photorealistic",
        "width": 1024,
        "height": 768
    ]
)

// Handle tool content
for item in content {
    switch item {
    case .text(let text):
        print("Generated text: \(text)")
    case .image(let data, let mimeType, let metadata):
        if let width = metadata?["width"] as? Int,
           let height = metadata?["height"] as? Int {
            print("Generated \(width)x\(height) image of type \(mimeType)")
            // Save or display the image data
        }
    case .audio(let data, let mimeType):
        print("Received audio data of type \(mimeType)")
    case .resource(let uri, let mimeType, let text):
        print("Received resource from \(uri) of type \(mimeType)")
        if let text = text {
            print("Resource text: \(text)")
        }
    }
}
```

### Resources

Resources represent data that can be accessed and potentially subscribed to:

```swift
// List available resources
let (resources, nextCursor) = try await client.listResources()
print("Available resources: \(resources.map { $0.uri }.joined(separator: ", "))")

// Read a resource
let contents = try await client.readResource(uri: "resource://example")
print("Resource content: \(contents)")

// Subscribe to resource updates if supported
// (`result` is the value returned by `client.connect(transport:)`)
if result.capabilities.resources.subscribe {
    try await client.subscribeToResource(uri: "resource://example")

    // Register notification handler
    await client.onNotification(ResourceUpdatedNotification.self) { message in
        let uri = message.params.uri
        print("Resource \(uri) updated with new content")

        // Fetch the updated resource content
        let updatedContents = try await client.readResource(uri: uri)
        print("Updated resource content received")
    }
}
```

### Prompts

Prompts represent templated conversation starters:

```swift
// List available prompts
let (prompts, nextCursor) = try await client.listPrompts()
print("Available prompts: \(prompts.map { $0.name }.joined(separator: ", "))")

// Get a prompt with arguments
let (description, messages) = try await client.getPrompt(
    name: "customer-service",
    arguments: [
        "customerName": "Alice",
        "orderNumber": "ORD-12345",
        "issue": "delivery delay"
    ]
)

// Use the prompt messages in your application
print("Prompt description: \(description)")
for message in messages {
    if case .text(text: let text) = message.content {
        print("\(message.role): \(text)")
    }
}
```
### Sampling

Sampling allows servers to request LLM completions through the client,
enabling agentic behaviors while maintaining human-in-the-loop control.
Clients register a handler to process incoming sampling requests from servers.

> [!TIP]
> Sampling requests flow from **server to client**,
> not client to server.
> This enables servers to request AI assistance
> while clients maintain control over model access and user approval.

```swift
// Register a sampling handler in the client
await client.withSamplingHandler { parameters in
    // Review the sampling request (human-in-the-loop step 1)
    print("Server requests completion for: \(parameters.messages)")

    // Optionally modify the request based on user input
    var messages = parameters.messages
    if let systemPrompt = parameters.systemPrompt {
        print("System prompt: \(systemPrompt)")
    }

    // Sample from your LLM (this is where you'd call your AI service)
    let completion = try await callYourLLMService(
        messages: messages,
        maxTokens: parameters.maxTokens,
        temperature: parameters.temperature
    )

    // Review the completion (human-in-the-loop step 2)
    print("LLM generated: \(completion)")
    // User can approve, modify, or reject the completion here

    // Return the result to the server
    return CreateSamplingMessage.Result(
        model: "your-model-name",
        stopReason: .endTurn,
        role: .assistant,
        content: .text(completion)
    )
}
```

The sampling flow follows these steps:

```mermaid
sequenceDiagram
    participant S as MCP Server
    participant C as MCP Client
    participant U as User/Human
    participant L as LLM Service

    Note over S,L: Server-initiated sampling request
    S->>C: sampling/createMessage request
    Note right of S: Server needs AI assistance<br/>for decision or content

    Note over C,U: Human-in-the-loop review #1
    C->>U: Show sampling request
    U->>U: Review & optionally modify<br/>messages, system prompt
    U->>C: Approve request

    Note over C,L: Client handles LLM interaction
    C->>L: Send messages to LLM
    L->>C: Return completion

    Note over C,U: Human-in-the-loop review #2
    C->>U: Show LLM completion
    U->>U: Review & optionally modify<br/>or reject completion
    U->>C: Approve completion

    Note over C,S: Return result to server
    C->>S: sampling/createMessage response
    Note left of C: Contains model used,<br/>stop reason, final content

    Note over S: Server continues with<br/>AI-assisted result
```

This human-in-the-loop design ensures that users
maintain control over what the LLM sees and generates,
even when servers initiate the requests.

### Error Handling

Handle common client errors:

```swift
do {
    try await client.connect(transport: transport)
    // Success
} catch let error as MCPError {
    print("MCP Error: \(error.localizedDescription)")
} catch {
    print("Unexpected error: \(error)")
}
```

### Advanced Client Features

#### Strict vs Non-Strict Configuration

Configure client behavior for capability checking:

```swift
// Strict configuration - fail fast if a capability is missing
let strictClient = Client(
    name: "StrictClient",
    version: "1.0.0",
    configuration: .strict
)

// With strict configuration, calling a method for an unsupported capability
// will throw an error immediately without sending a request
do {
    // This will throw an error if resources.list capability is not available
    let resources = try await strictClient.listResources()
} catch let error as MCPError {
    print("Capability not available: \(error.localizedDescription)")
}

// Default (non-strict) configuration - attempt the request anyway
let client = Client(
    name: "FlexibleClient",
    version: "1.0.0",
    configuration: .default
)

// With default configuration, the client will attempt the request
// even if the capability wasn't advertised by the server
do {
    let resources = try await client.listResources()
} catch let error as MCPError {
    // Still handle the error if the server rejects the request
    print("Server rejected request: \(error.localizedDescription)")
}
```

#### Request Batching

Improve performance by sending multiple requests in a single batch:

```swift
// Array to hold tool call tasks
var toolTasks: [Task<CallTool.Result, Swift.Error>] = []

// Send a batch of requests
try await client.withBatch { batch in
    // Add multiple tool calls to the batch
    for i in 0..<10 {
        toolTasks.append(
            try await batch.addRequest(
                CallTool.request(.init(name: "square", arguments: ["n": Value(i)]))
            )
        )
    }
}

// Process results after the batch is sent
print("Processing \(toolTasks.count) tool results...")
for (index, task) in toolTasks.enumerated() {
    do {
        let result = try await task.value
        print("\(index): \(result.content)")
    } catch {
        print("\(index) failed: \(error)")
    }
}
```

You can also batch different types of requests:
```swift
// Declare task variables
var pingTask: Task<Ping.Result, Error>?
var promptTask: Task<GetPrompt.Result, Error>?

// Send a batch with different request types
try await client.withBatch { batch in
    pingTask = try await batch.addRequest(Ping.request())
    promptTask = try await batch.addRequest(
        GetPrompt.request(.init(name: "greeting"))
    )
}

// Process individual results
do {
    if let pingTask = pingTask {
        try await pingTask.value
        print("Ping successful")
    }

    if let promptTask = promptTask {
        let promptResult = try await promptTask.value
        print("Prompt: \(promptResult.description ?? "None")")
    }
} catch {
    print("Error processing batch results: \(error)")
}
```

> [!NOTE]
> `Server` automatically handles batch requests from MCP clients.

## Server Usage

The server component allows your application to host model capabilities and respond to client requests.

### Basic Server Setup

```swift
import MCP

// Create a server with given capabilities
let server = Server(
    name: "MyModelServer",
    version: "1.0.0",
    capabilities: .init(
        prompts: .init(listChanged: true),
        resources: .init(subscribe: true, listChanged: true),
        tools: .init(listChanged: true)
    )
)

// Create transport and start server
let transport = StdioTransport()
try await server.start(transport: transport)

// Now register handlers for the capabilities you've enabled
```
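Putting these pieces together, a complete minimal stdio server looks roughly like this. This is a sketch assembling only calls shown elsewhere in this README; the `echo` tool and its behavior are illustrative:

```swift
import MCP

// A server exposing a single illustrative "echo" tool
let server = Server(
    name: "EchoServer",
    version: "1.0.0",
    capabilities: .init(tools: .init(listChanged: false))
)

let transport = StdioTransport()
try await server.start(transport: transport)

// Advertise the tool
await server.withMethodHandler(ListTools.self) { _ in
    .init(tools: [Tool(name: "echo", description: "Echo the input back")])
}

// Handle calls to it
await server.withMethodHandler(CallTool.self) { params in
    let text = params.arguments?["text"]?.stringValue ?? ""
    return .init(content: [.text(text)], isError: false)
}

// Keep the process alive; see "Graceful Shutdown" below for a
// production-ready alternative using Swift Service Lifecycle
try await Task.sleep(for: .seconds(86_400 * 365))
```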
### Tools

Register tool handlers to respond to client tool calls:

```swift
// Register a tool list handler
await server.withMethodHandler(ListTools.self) { _ in
    let tools = [
        Tool(
            name: "weather",
            description: "Get current weather for a location",
            inputSchema: .object([
                "properties": .object([
                    "location": .string("City name or coordinates"),
                    "units": .string("Units of measurement, e.g., metric, imperial")
                ])
            ])
        ),
        Tool(
            name: "calculator",
            description: "Perform calculations",
            inputSchema: .object([
                "properties": .object([
                    "expression": .string("Mathematical expression to evaluate")
                ])
            ])
        )
    ]
    return .init(tools: tools)
}

// Register a tool call handler
await server.withMethodHandler(CallTool.self) { params in
    switch params.name {
    case "weather":
        let location = params.arguments?["location"]?.stringValue ?? "Unknown"
        let units = params.arguments?["units"]?.stringValue ?? "metric"
        let weatherData = getWeatherData(location: location, units: units) // Your implementation
        return .init(
            content: [.text("Weather for \(location): \(weatherData.temperature)°, \(weatherData.conditions)")],
            isError: false
        )

    case "calculator":
        if let expression = params.arguments?["expression"]?.stringValue {
            let result = evaluateExpression(expression) // Your implementation
            return .init(content: [.text("\(result)")], isError: false)
        } else {
            return .init(content: [.text("Missing expression parameter")], isError: true)
        }

    default:
        return .init(content: [.text("Unknown tool")], isError: true)
    }
}
```

### Resources

Implement resource handlers for data access:

```swift
// Register a resource list handler
await server.withMethodHandler(ListResources.self) { params in
    let resources = [
        Resource(
            name: "Knowledge Base Articles",
            uri: "resource://knowledge-base/articles",
            description: "Collection of support articles and documentation"
        ),
        Resource(
            name: "System Status",
            uri: "resource://system/status",
            description: "Current system operational status"
        )
    ]
    return .init(resources: resources, nextCursor: nil)
}

// Register a resource read handler
await server.withMethodHandler(ReadResource.self) { params in
    switch params.uri {
    case "resource://knowledge-base/articles":
        return .init(contents: [Resource.Content.text("# Knowledge Base\n\nThis is the content of the knowledge base...", uri: params.uri)])

    case "resource://system/status":
        let status = getCurrentSystemStatus() // Your implementation
        let statusJson = """
            {
                "status": "\(status.overall)",
                "components": {
                    "database": "\(status.database)",
                    "api": "\(status.api)",
                    "model": "\(status.model)"
                },
                "lastUpdated": "\(status.timestamp)"
            }
            """
        return .init(contents: [Resource.Content.text(statusJson, uri: params.uri, mimeType: "application/json")])

    default:
        throw MCPError.invalidParams("Unknown resource URI: \(params.uri)")
    }
}

// Register a resource subscribe handler
await server.withMethodHandler(ResourceSubscribe.self) { params in
    // Store the subscription for later notifications.
    // Tracking client identity in multi-client scenarios is the server
    // application's responsibility, potentially using information from the
    // initialize handshake if the server handles one client post-init.
    // addSubscription(clientID: /* some_client_identifier */, uri: params.uri)
    print("Client subscribed to \(params.uri); the server must track this subscription itself.")
    return .init()
}
```

### Prompts

Implement prompt handlers:

```swift
// Register a prompt list handler
await server.withMethodHandler(ListPrompts.self) { params in
    let prompts = [
        Prompt(
            name: "interview",
            description: "Job interview conversation starter",
            arguments: [
                .init(name: "position", description: "Job position", required: true),
                .init(name: "company", description: "Company name", required: true),
                .init(name: "interviewee", description: "Candidate name")
            ]
        ),
        Prompt(
            name: "customer-support",
            description: "Customer support conversation starter",
            arguments: [
                .init(name: "issue", description: "Customer issue", required: true),
                .init(name: "product", description: "Product name", required: true)
            ]
        )
    ]
    return .init(prompts: prompts, nextCursor: nil)
}

// Register a prompt get handler
await server.withMethodHandler(GetPrompt.self) { params in
    switch params.name {
    case "interview":
        let position = params.arguments?["position"]?.stringValue ?? "Software Engineer"
        let company = params.arguments?["company"]?.stringValue ?? "Acme Corp"
        let interviewee = params.arguments?["interviewee"]?.stringValue ?? "Candidate"

        let description = "Job interview for \(position) position at \(company)"
        let messages: [Prompt.Message] = [
            .user("You are an interviewer for the \(position) position at \(company)."),
            .user("Hello, I'm \(interviewee) and I'm here for the \(position) interview."),
            .assistant("Hi \(interviewee), welcome to \(company)! I'd like to start by asking about your background and experience.")
        ]

        return .init(description: description, messages: messages)

    case "customer-support":
        // Similar implementation, using the arguments declared above
        let issue = params.arguments?["issue"]?.stringValue ?? "an issue"
        let product = params.arguments?["product"]?.stringValue ?? "the product"
        let messages: [Prompt.Message] = [
            .user("You are a support agent helping a customer who has \(issue) with \(product)."),
            .user("Hi, I need help with \(issue).")
        ]
        return .init(description: "Customer support conversation about \(product)", messages: messages)

    default:
        throw MCPError.invalidParams("Unknown prompt name: \(params.name)")
    }
}
```

### Sampling

Servers can request LLM completions from clients through sampling. This enables agentic behaviors where servers can ask for AI assistance while maintaining human oversight.

> [!NOTE]
> The current implementation provides the correct API design for sampling, but requires bidirectional communication support in the transport layer. This feature will be fully functional when bidirectional transport support is added.

```swift
// Enable sampling capability in server
let server = Server(
    name: "MyModelServer",
    version: "1.0.0",
    capabilities: .init(
        sampling: .init(),  // Enable sampling capability
        tools: .init(listChanged: true)
    )
)

// Request sampling from the client (conceptual - requires bidirectional transport)
do {
    let result = try await server.requestSampling(
        messages: [
            .user("Analyze this data and suggest next steps")
        ],
        systemPrompt: "You are a helpful data analyst",
        temperature: 0.7,
        maxTokens: 150
    )

    // Use the LLM completion in your server logic
    print("LLM suggested: \(result.content)")

} catch {
    print("Sampling request failed: \(error)")
}
```

Sampling enables powerful agentic workflows:

- **Decision-making**: Ask the LLM to choose between options
- **Content generation**: Request drafts for user approval
- **Data analysis**: Get AI insights on complex data
- **Multi-step reasoning**: Chain AI completions with tool calls
#### Initialize Hook

Control client connections with an initialize hook:

```swift
// Start the server with an initialize hook
try await server.start(transport: transport) { clientInfo, clientCapabilities in
    // Validate client info
    guard clientInfo.name != "BlockedClient" else {
        throw MCPError.invalidRequest("This client is not allowed")
    }

    // You can also inspect client capabilities
    if clientCapabilities.sampling == nil {
        print("Client does not support sampling")
    }

    // Perform any server-side setup based on client info
    print("Client \(clientInfo.name) v\(clientInfo.version) connected")

    // If the hook completes without throwing, initialization succeeds
}
```

### Graceful Shutdown

We recommend using
[Swift Service Lifecycle](https://github.com/swift-server/swift-service-lifecycle)
for managing startup and shutdown of services.

First, add the dependency to your `Package.swift`:

```swift
.package(url: "https://github.com/swift-server/swift-service-lifecycle.git", from: "2.3.0"),
```

Then implement the MCP server as a `Service`:

```swift
import MCP
import ServiceLifecycle
import Logging

struct MCPService: Service {
    let server: Server
    let transport: Transport

    init(server: Server, transport: Transport) {
        self.server = server
        self.transport = transport
    }

    func run() async throws {
        // Start the server
        try await server.start(transport: transport)

        // Keep running until external cancellation
        // (`Duration` has no calendar units, so spell the interval out in seconds)
        try await Task.sleep(for: .seconds(86_400 * 365 * 100))  // Effectively forever
    }

    func shutdown() async throws {
        // Gracefully shutdown the server
        await server.stop()
    }
}
```
Then use it in your application:

```swift
import MCP
import ServiceLifecycle
import Logging

let logger = Logger(label: "com.example.mcp-server")

// Create the MCP server
let server = Server(
    name: "MyModelServer",
    version: "1.0.0",
    capabilities: .init(
        prompts: .init(listChanged: true),
        resources: .init(subscribe: true, listChanged: true),
        tools: .init(listChanged: true)
    )
)

// Add handlers directly to the server
await server.withMethodHandler(ListTools.self) { _ in
    // Your implementation
    return .init(tools: [
        Tool(name: "example", description: "An example tool")
    ])
}

await server.withMethodHandler(CallTool.self) { params in
    // Your implementation
    return .init(content: [.text("Tool result")], isError: false)
}

// Create MCP service and other services
let transport = StdioTransport(logger: logger)
let mcpService = MCPService(server: server, transport: transport)
let databaseService = DatabaseService() // Your other services

// Create service group with signal handling
let serviceGroup = ServiceGroup(
    services: [mcpService, databaseService],
    configuration: .init(
        gracefulShutdownSignals: [.sigterm, .sigint]
    ),
    logger: logger
)

// Run the service group - this blocks until shutdown
try await serviceGroup.run()
```

This approach has several benefits:

- **Signal handling**:
  Automatically traps SIGINT and SIGTERM and triggers graceful shutdown
- **Graceful shutdown**:
  Properly shuts down your MCP server and other services
- **Timeout-based shutdown**:
  Configurable shutdown timeouts to prevent hanging processes
- **Advanced service management**:
  [`ServiceLifecycle`](https://swiftpackageindex.com/swift-server/swift-service-lifecycle/documentation/servicelifecycle)
  also supports service dependencies, conditional services,
  and other useful features.

## Transports

MCP's transport layer handles communication between clients and servers.
The Swift SDK provides multiple built-in transports:

| Transport | Description | Platforms | Best for |
|-----------|-------------|-----------|----------|
| [`StdioTransport`](/Sources/MCP/Base/Transports/StdioTransport.swift) | Implements [stdio transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#stdio) using standard input/output streams | Apple platforms, Linux with glibc | Local subprocesses, CLI tools |
| [`HTTPClientTransport`](/Sources/MCP/Base/Transports/HTTPClientTransport.swift) | Implements [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http) using Foundation's URL Loading System | All platforms with Foundation | Remote servers, web applications |
| [`InMemoryTransport`](/Sources/MCP/Base/Transports/InMemoryTransport.swift) | Custom in-memory transport for direct communication within the same process | All platforms | Testing, debugging, same-process client-server communication |
| [`NetworkTransport`](/Sources/MCP/Base/Transports/NetworkTransport.swift) | Custom transport using Apple's Network framework for TCP/UDP connections | Apple platforms only | Low-level networking, custom protocols |
### Custom Transport Implementation

You can implement a custom transport by conforming to the `Transport` protocol:

```swift
import MCP
import Foundation
import Logging

public actor MyCustomTransport: Transport {
    public nonisolated let logger: Logger
    private var isConnected = false
    private let messageStream: AsyncThrowingStream<Data, any Swift.Error>
    private let messageContinuation: AsyncThrowingStream<Data, any Swift.Error>.Continuation

    public init(logger: Logger? = nil) {
        self.logger = logger ?? Logger(label: "my.custom.transport")

        var continuation: AsyncThrowingStream<Data, any Swift.Error>.Continuation!
        self.messageStream = AsyncThrowingStream { continuation = $0 }
        self.messageContinuation = continuation
    }

    public func connect() async throws {
        // Implement your connection logic
        isConnected = true
    }

    public func disconnect() async {
        // Implement your disconnection logic
        isConnected = false
        messageContinuation.finish()
    }

    public func send(_ data: Data) async throws {
        // Implement your message sending logic
    }

    public func receive() -> AsyncThrowingStream<Data, any Swift.Error> {
        return messageStream
    }
}
```

## Platform Availability

The Swift SDK has the following platform requirements:

| Platform | Minimum Version |
|----------|----------------|
| macOS | 13.0+ |
| iOS / Mac Catalyst | 16.0+ |
| watchOS | 9.0+ |
| tvOS | 16.0+ |
| visionOS | 1.0+ |
| Linux | Distributions with `glibc` or `musl`, including Ubuntu, Debian, Fedora, and Alpine Linux |

While the core library works on any platform supporting Swift 6
(including Linux and Windows),
running a client or server requires a compatible transport.

We're working to add [Windows support](https://github.com/modelcontextprotocol/swift-sdk/pull/64).

## Debugging and Logging

Enable logging to help troubleshoot issues:

```swift
import Logging
import MCP

// Configure the logging backend
LoggingSystem.bootstrap { label in
    var handler = StreamLogHandler.standardOutput(label: label)
    handler.logLevel = .debug
    return handler
}

// Create a logger
let logger = Logger(label: "com.example.mcp")

// Pass it to the client/server
let client = Client(name: "MyApp", version: "1.0.0")

// Pass it to the transport
let transport = StdioTransport(logger: logger)
```
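One caveat when combining logging with `StdioTransport`: standard output carries the JSON-RPC protocol messages, so log output written to stdout can corrupt the stream. Routing logs to standard error avoids this. A sketch using swift-log's built-in `StreamLogHandler.standardError`:

```swift
import Logging
import MCP

// Send log output to stderr so stdout stays reserved
// for JSON-RPC protocol messages
LoggingSystem.bootstrap { label in
    var handler = StreamLogHandler.standardError(label: label)
    handler.logLevel = .debug
    return handler
}

let logger = Logger(label: "com.example.mcp")
let transport = StdioTransport(logger: logger)
```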
## Additional Resources

- [MCP Specification](https://modelcontextprotocol.io/specification/2025-03-26/)
- [Protocol Documentation](https://modelcontextprotocol.io)
- [GitHub Repository](https://github.com/modelcontextprotocol/swift-sdk)

## Changelog

This project follows [Semantic Versioning](https://semver.org/).
For pre-1.0 releases,
minor version increments (0.X.0) may contain breaking changes.

For details about changes in each release,
see the [GitHub Releases page](https://github.com/modelcontextprotocol/swift-sdk/releases).

## License

This project is licensed under the MIT License.

[mcp]: https://modelcontextprotocol.io
[mcp-spec-2025-03-26]: https://modelcontextprotocol.io/specification/2025-03-26