{"id":28458087,"url":"https://github.com/getgrinta/swift-llm","last_synced_at":"2025-07-02T05:31:27.064Z","repository":{"id":295262101,"uuid":"988513780","full_name":"getgrinta/swift-llm","owner":"getgrinta","description":"Modern Swift LLM SDK with support for AI tools","archived":false,"fork":false,"pushed_at":"2025-05-25T13:47:27.000Z","size":7860,"stargazers_count":2,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-06-07T00:09:30.405Z","etag":null,"topics":["ai","ai-tools","llm","sdk","swift"],"latest_commit_sha":null,"homepage":"","language":"Swift","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/getgrinta.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-05-22T16:53:22.000Z","updated_at":"2025-05-23T10:38:55.000Z","dependencies_parsed_at":"2025-05-24T15:22:59.394Z","dependency_job_id":"3cf9eaaa-8bbd-44e1-8a97-a128b89374fe","html_url":"https://github.com/getgrinta/swift-llm","commit_stats":null,"previous_names":["getgrinta/swift-llm"],"tags_count":12,"template":false,"template_full_name":null,"purl":"pkg:github/getgrinta/swift-llm","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/getgrinta%2Fswift-llm","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/getgrinta%2Fswift-llm/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/getgrinta%2Fswift-llm/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/getgrinta%2Fswift-llm/manifests","owner_u
rl":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/getgrinta","download_url":"https://codeload.github.com/getgrinta/swift-llm/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/getgrinta%2Fswift-llm/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":263081264,"owners_count":23410839,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","ai-tools","llm","sdk","swift"],"created_at":"2025-06-07T00:09:35.335Z","updated_at":"2025-07-02T05:31:27.046Z","avatar_url":"https://github.com/getgrinta.png","language":"Swift","readme":" # swift-llm \n\n**Simple Swift library for interacting with Large Language Models (LLMs), featuring support for streaming responses and tool integration.**\n\n[![Swift Tests](https://github.com/getgrinta/swift-llm/actions/workflows/swift-test.yml/badge.svg)](https://github.com/getgrinta/swift-llm/actions/workflows/swift-test.yml)\n[![Swift Version](https://img.shields.io/badge/Swift-6.0+-orange.svg)](https://swift.org)\n[![Platform](https://img.shields.io/badge/platform-iOS%2013+-blue.svg)](https://developer.apple.com/ios/)\n[![SwiftPM compatible](https://img.shields.io/badge/SwiftPM-compatible-brightgreen.svg)](https://swift.org/package-manager/)\n[![GitHub release](https://img.shields.io/github/v/release/getgrinta/swift-llm.svg)](https://GitHub.com/getgrinta/swift-llm/releases/)\n[![License](https://img.shields.io/badge/License-Apache%202.0-lightgrey.svg)](LICENSE)\n\n![swift-llm hero image](public/library.gif)\n\n`swift-llm` provides a modern, async/await-based 
interface to communicate with LLM APIs, making it easy to integrate advanced AI capabilities into your Swift applications.\n\n## Features\n\n- **Core LLM Interaction**: Send requests and receive responses from LLMs using `ChatMessage` arrays for rich conversational context.\n- **Streaming Support**: Handle real-time streaming of LLM responses for a more interactive user experience (`AsyncThrowingStream`).\n- **Multi-Standard SSE Parsing**: Supports Server-Sent Events (SSE) streams conforming to both OpenAI (`openAi`) and Vercel AI SDK (`vercel`) standards via the `SSETokenizer`.\n- **Tool Integration**: Empower your LLM to use predefined tools to perform actions in parallel and gather information, enabling more complex and capable assistants (`TooledLLMClient`).\n- **Customizable Requests**: Modify outgoing `URLRequest` objects using a `requestTransformer` closure, allowing for custom headers, body modifications, or different endpoints for stream/non-stream calls.\n- **Typed Models**: Clear, `Codable` Swift structures for requests, responses, and tool definitions.\n- **Modern Swift**: Built with Swift 6.0+, leveraging modern concurrency features.\n- **Easy Integration**: Designed as a Swift Package Manager library.\n\n## Requirements\n\n- Swift 6.0 or later\n- iOS 13.0 or later\n\n## Installation\n\nAdd `swift-llm` as a dependency to your `Package.swift` file:\n\n```swift\n// swift-tools-version:6.0\nimport PackageDescription\n\nlet package = Package(\n    name: \"YourProjectName\",\n    platforms: [.iOS(.v13)],\n    dependencies: [\n        .package(url: \"https://github.com/getgrinta/swift-llm.git\", from: \"0.1.4\")\n    ],\n    targets: [\n        .target(\n            name: \"YourProjectTarget\",\n            dependencies: [\n                // Product name assumed to match `import SwiftLLM`; check the package manifest.\n                .product(name: \"SwiftLLM\", package: \"swift-llm\")\n            ]\n        )\n    ]\n)\n```\n\n## Usage\n\n### 1. 
Basic Chat\n\n```swift\nimport SwiftLLM\n\nlet endpoint = \"your_llm_api_endpoint\"\nlet modelName = \"your_model_name\"\nlet bearerToken = \"your_api_key\"\n\n// Construct messages - an array of ChatMessage objects\n// Assuming ChatMessage(role: .user, content: \"...\")\nlet messages = [ChatMessage(role: .user, content: \"What is the capital of France?\")]\n\n// Initialize the LLMClient with your endpoint, model, and API key\n// Default standard is .openAi. For Vercel, specify: SSETokenizer.Standard.vercel\nlet llmClient = LLMClient(endpoint: endpoint, model: modelName, apiKey: bearerToken)\n\nTask {\n    do {\n        print(\"Sending request...\")\n        // Send messages and optionally set temperature (e.g., 0.7 for some creativity)\n        let response = try await llmClient.send(messages: messages, temperature: 0.7)\n        print(\"LLM Response: \\(response.message)\")\n    } catch {\n        print(\"Error during non-streaming chat: \\(error.localizedDescription)\")\n    }\n}\n```\n\n### 2. Streaming\n\n```swift\nimport SwiftLLM\n\nlet endpoint = \"your_llm_api_endpoint\"\nlet modelName = \"your_model_name\"\nlet bearerToken = \"your_api_key\"\n\n// Initialize the LLMClient with your endpoint, model, and API key\nlet llmClient = LLMClient(endpoint: endpoint, model: modelName, apiKey: bearerToken)\n\nlet messages = [ChatMessage(role: .user, content: \"Tell me a short story, stream it part by part.\")]\n\nTask {\n    do {\n        // Stream messages and optionally set temperature\n        let stream = try await llmClient.stream(messages: messages, temperature: 0.7)\n\n        print(\"Streaming response:\")\n        for try await chunk in stream {\n            print(chunk.message, terminator: \"\") // Append chunks as they arrive\n        }\n        print() // Newline after stream finishes\n    } catch {\n        print(\"Error during streaming chat: \\(error.localizedDescription)\")\n    }\n}\n```\n\n### 3. 
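Multi-Turn Conversations\n\nBecause `send` takes the full `ChatMessage` array, you can carry conversation history across turns by appending each reply before the next request. A minimal sketch, assuming the same `endpoint`, `modelName`, and `bearerToken` placeholders as above and that the assistant's reply text is available via `.message` on the returned `ChatOutput`:\n\n```swift\nimport SwiftLLM\n\nlet llmClient = LLMClient(endpoint: endpoint, model: modelName, apiKey: bearerToken)\n\n// Build up history across turns; roles include .system, .user, and .assistant.\nvar history: [ChatMessage] = [\n    ChatMessage(role: .system, content: \"You are a concise geography assistant.\"),\n    ChatMessage(role: .user, content: \"What is the capital of France?\")\n]\n\nTask {\n    do {\n        let first = try await llmClient.send(messages: history)\n        // Record the assistant's reply so the follow-up has context.\n        history.append(ChatMessage(role: .assistant, content: first.message))\n\n        // The follow-up question relies on the previous exchange.\n        history.append(ChatMessage(role: .user, content: \"What is its population?\"))\n        let second = try await llmClient.send(messages: history)\n        print(second.message)\n    } catch {\n        print(\"Error during multi-turn chat: \\(error.localizedDescription)\")\n    }\n}\n```\n\n### 4. 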
Using `TooledLLMClient`\n\nFirst, define your tools:\n\n```swift\nimport SwiftLLM\n\n// Example Tool: Get Current Weather\nlet weatherTool = LLMTool(\n    name: \"getCurrentWeather\",\n    description: \"Gets the current weather for a given location. Arguments should be a plain string with the location name.\",\n    execute: { argumentsAsLocationString in\n        // In a real scenario, you might parse 'arguments' if it's structured (e.g., XML/JSON)\n        // For this example, assume 'argumentsAsLocationString' is the location directly.\n        print(\"Tool 'getCurrentWeather' called with arguments: \\(argumentsAsLocationString)\")\n        \n        // Simulate API call or logic\n        if argumentsAsLocationString.lowercased().contains(\"paris\") {\n            return \"The weather in Paris is sunny, 25°C.\"\n        } else if argumentsAsLocationString.lowercased().contains(\"london\") {\n            return \"It's currently cloudy with a chance of rain in London, 18°C.\"\n        } else {\n            return \"Sorry, I don't know the weather for \\(argumentsAsLocationString).\"\n        }\n    }\n)\n\nlet tools = [weatherTool]\n```\n\nThen, use `TooledLLMClient`:\n\n```swift\nimport SwiftLLM\n\nlet endpoint = \"your_llm_api_endpoint_supporting_tools\" // Ensure this endpoint supports tool use\nlet modelName = \"your_tool_capable_model_name\"\nlet bearerToken = \"your_api_key\"\nlet userInput = \"What's the weather like in Paris today?\"\n\n// Initialize the LLMClient first (used by TooledLLMClient)\n// The LLMClient uses ChatMessage and supports temperature,\n// though TooledLLMClient might manage this internally for its specific flow.\nlet llmClient = LLMClient(endpoint: endpoint, model: modelName, apiKey: bearerToken)\nlet tooledClient = TooledLLMClient(llmClient: llmClient) // Pass the LLMClient instance\n\nTask {\n    do {\n        print(\"User Input: \\(userInput)\")\n        let stream = try await tooledClient.processWithTools(\n            
userInput: userInput,\n            tools: tools\n        )\n\n        print(\"\\nFinal LLM Response (after potential tool use):\")\n        var fullResponse = \"\"\n        for try await chunk in stream {\n            print(chunk.message, terminator: \"\")\n            fullResponse += chunk.message\n        }\n       \n        print(\"\\n--- Full Assembled Response ---\")\n        print(fullResponse)\n    } catch let error as TooledLLMClientError {\n        print(\"TooledLLMClientError: \\(error)\")\n    } catch {\n        print(\"An unexpected error occurred: \\(error.localizedDescription)\")\n    }\n}\n```\n\n## Core Components\n\n### `LLMClient`\nThe primary client for all interactions with an LLM, supporting both non-streaming (single request/response) and streaming (continuous updates) communication. It handles the direct network requests to the LLM API. Interactions are based on `ChatMessage` arrays, allowing for conversational history to be passed to the LLM. It also supports a `temperature` parameter to control response randomness.\n\n**`ChatMessage` Structure (Conceptual):**\nYour `ChatMessage` objects would typically include a `role` (e.g., `.user`, `.assistant`, `.system`) and `content` (the text of the message).\n\n**Initializer:**\n`public init(standard: SSETokenizer.Standard = .openAi, endpoint: String, model: String, apiKey: String, sessionConfiguration: URLSessionConfiguration = .default, requestTransformer: (@Sendable (URLRequest, _ isStream: Bool) -\u003e URLRequest)? = nil)`\n\n- `standard`: (Optional) The SSE parsing standard to use. Defaults to `.openAi`. Can be set to `.vercel` for Vercel AI SDK compatibility.\n- `endpoint`: The base URL for the LLM API.\n- `model`: The identifier for the LLM model to be used.\n- `apiKey`: Your API key for authentication.\n- `sessionConfiguration`: (Optional) A `URLSessionConfiguration` for the underlying network session. 
Defaults to `.default`.\n- `requestTransformer`: (Optional) A closure that allows you to modify the `URLRequest` before it's sent.\n\n**Example of `requestTransformer`:**\n```swift\nlet transformer: @Sendable (URLRequest, Bool) -\u003e URLRequest = { request, isStream in\n    var mutableRequest = request\n    // Add a custom header\n    mutableRequest.setValue(\"my-custom-value\", forHTTPHeaderField: \"X-Custom-Header\")\n    \n    // Potentially change endpoint based on stream type\n    if isStream {\n        // mutableRequest.url = URL(string: \"your_streaming_specific_endpoint\")\n    } else {\n        // mutableRequest.url = URL(string: \"your_non_streaming_specific_endpoint\")\n    }\n    return mutableRequest\n}\n\nlet llmClient = LLMClient(\n    standard: .openAi,\n    endpoint: vercelEndpoint,\n    model: vercelModelName,\n    apiKey: vercelBearerToken,\n    requestTransformer: transformer\n)\n```\n\n**Key methods:**\n- `public func send(messages: [ChatMessage], temperature: Double? = nil) async throws -\u003e ChatOutput` (for non-streaming requests)\n- `public func stream(messages: [ChatMessage], temperature: Double? = nil) -\u003e AsyncThrowingStream\u003cChatOutput, Error\u003e` (for setting up a streaming connection)\n\n### `TooledLLMClient`\nManages interactions with an LLM that can utilize a predefined set of tools. It orchestrates a multi-pass conversation:\n1.  Sends user input (often initially as a string, which it converts to `ChatMessage` for the LLM) and tool descriptions to the LLM.\n2.  Parses the LLM's decision and executes the identified tools.\n3.  
Sends the tool execution results back to the LLM (as `ChatMessage` objects) to generate a final, user-facing response.\n\n**Initializer:**\n`public init(llmClient: LLMClient)`\n\n**Key method:**\n- `public func processWithTools(userInput: String, tools: [LLMTool]) async throws -\u003e AsyncThrowingStream\u003cChatOutput, Error\u003e`\n\n**Note on Tool Argument Formatting:**\n\nThe `TooledLLMClient` includes a default prompt that instructs the LLM to provide arguments like `{\"toolsToUse\": [{\"name\": \"tool_name\", \"arguments\": \"\u003ctool_specific_xml_args_or_empty_string\u003e\"}]}`. The `arguments` field from this JSON is what gets passed to your tool's `execute` closure. You'll need to:\n1.  Ensure the LLM you use can follow this JSON instruction for its `tool_calls` response.\n2.  Adapt your tool's `execute` closure to parse the `arguments` string as needed (e.g., if it's plain text, XML, or a JSON string itself). The example above simplifies this for clarity.\n\n## Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request or open an Issue if you find a bug or have a feature request.\n\n## License\n\n`swift-llm` is released under the Apache License 2.0. See [LICENSE](LICENSE) file for details.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgetgrinta%2Fswift-llm","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgetgrinta%2Fswift-llm","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgetgrinta%2Fswift-llm/lists"}