{"id":26706058,"url":"https://github.com/loopwork-ai/ollama-swift","last_synced_at":"2025-04-09T05:07:58.914Z","repository":{"id":259027981,"uuid":"752672224","full_name":"loopwork-ai/ollama-swift","owner":"loopwork-ai","description":"A Swift client library for interacting with Ollama. Supports structured outputs, tool use, and vision models.","archived":false,"fork":false,"pushed_at":"2025-03-21T22:03:24.000Z","size":88,"stargazers_count":330,"open_issues_count":2,"forks_count":15,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-03-21T23:19:19.968Z","etag":null,"topics":["ollama","swift"],"latest_commit_sha":null,"homepage":"https://ollama.com","language":"Swift","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/loopwork-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-02-04T13:45:01.000Z","updated_at":"2025-03-21T22:03:21.000Z","dependencies_parsed_at":"2025-03-21T23:19:25.822Z","dependency_job_id":"4168d21f-8d72-4ff5-9123-76b5ba34e8f9","html_url":"https://github.com/loopwork-ai/ollama-swift","commit_stats":null,"previous_names":["mattt/ollama-swift","loopwork-ai/ollama-swift"],"tags_count":4,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/loopwork-ai%2Follama-swift","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/loopwork-ai%2Follama-swift/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/loopwork-ai%2Follama-swift/releases","manifests_url":"https://repos.ecosyste.ms/api/v1
/hosts/GitHub/repositories/loopwork-ai%2Follama-swift/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/loopwork-ai","download_url":"https://codeload.github.com/loopwork-ai/ollama-swift/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247980836,"owners_count":21027808,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ollama","swift"],"created_at":"2025-03-27T06:00:44.994Z","updated_at":"2025-04-09T05:07:58.891Z","avatar_url":"https://github.com/loopwork-ai.png","language":"Swift","readme":"# Ollama Swift Client\n\nA Swift client library for interacting with the\n[Ollama API](https://github.com/ollama/ollama/blob/main/docs/api.md).\n\n## Requirements\n\n- Swift 5.7+\n- macOS 13+\n- [Ollama](https://ollama.com)\n\n## Installation\n\n### Swift Package Manager\n\nAdd the following to your `Package.swift` file:\n\n```swift\n.package(url: \"https://github.com/loopwork-ai/ollama-swift.git\", from: \"1.3.0\")\n```\n\n## Usage\n\n\u003e [!NOTE]\n\u003e The tests and example code for this library use the\n\u003e [llama3.2](https://ollama.com/library/llama3.2) model.\n\u003e Run the following command to download the model to run them yourself:\n\u003e\n\u003e ```\n\u003e ollama pull llama3.2\n\u003e ```\n\n### Initializing the client\n\n```swift\nimport Ollama\n\n// Use the default client (http://localhost:11434)\nlet client = Client.default\n\n// Or create a custom client\nlet customClient = Client(host: URL(string: \"http://your-ollama-host:11434\")!, userAgent: \"MyApp/1.0\")\n```\n\n### 
Generating text\n\nGenerate text using a specified model:\n\n```swift\ndo {\n    let response = try await client.generate(\n        model: \"llama3.2\",\n        prompt: \"Tell me a joke about Swift programming.\",\n        options: [\n            \"temperature\": 0.7,\n            \"max_tokens\": 100\n        ]\n    )\n    print(response.response)\n} catch {\n    print(\"Error: \\(error)\")\n}\n```\n\n### Chatting with a model\n\nGenerate a chat completion:\n\n```swift\ndo {\n    let response = try await client.chat(\n        model: \"llama3.2\",\n        messages: [\n            .system(\"You are a helpful assistant.\"),\n            .user(\"In which city is Apple Inc. located?\")\n        ]\n    )\n    print(response.message.content)\n} catch {\n    print(\"Error: \\(error)\")\n}\n```\n\n### Using Structured Outputs\n\nYou can request structured outputs from models by specifying a format. \nPass `\"json\"` to get back a JSON string,\nor specify a full [JSON Schema](https://json-schema.org):\n\n```swift\n// Simple JSON format\nlet response = try await client.chat(\n    model: \"llama3.2\",\n    messages: [.user(\"List 3 colors.\")],\n    format: \"json\"\n)\n\n// Using JSON schema for more control\nlet schema: Value = [\n    \"type\": \"object\",\n    \"properties\": [\n        \"colors\": [\n            \"type\": \"array\",\n            \"items\": [\n                \"type\": \"object\",\n                \"properties\": [\n                    \"name\": [\"type\": \"string\"],\n                    \"hex\": [\"type\": \"string\"]\n                ],\n                \"required\": [\"name\", \"hex\"]\n            ]\n        ]\n    ],\n    \"required\": [\"colors\"]\n]\n\nlet response = try await client.chat(\n    model: \"llama3.2\",\n    messages: [.user(\"List 3 colors with their hex codes.\")],\n    format: schema\n)\n\n// The response will be a JSON object matching the schema:\n// {\n//   \"colors\": [\n//     {\"name\": \"papayawhip\", \"hex\": 
\"#FFEFD5\"},\n//     {\"name\": \"indigo\", \"hex\": \"#4B0082\"},\n//     {\"name\": \"navy\", \"hex\": \"#000080\"}\n//   ]\n// }\n```\n\nThe format parameter works with both `chat` and `generate` methods.\n\n### Using Tools\n\nOllama supports tool calling with models,\nallowing models to perform complex tasks or interact with external services.\n\n\u003e [!NOTE]\n\u003e Tool support requires a [compatible model](https://ollama.com/search?c=tools),\n\u003e such as llama3.2.\n\n#### Creating a Tool\n\nDefine a tool by specifying its name, description, parameters, and implementation:\n\n```swift\nstruct WeatherInput: Codable {\n    let city: String\n}\n\nstruct WeatherOutput: Codable {\n    let temperature: Double\n    let conditions: String\n}\n\nlet weatherTool = Tool\u003cWeatherInput, WeatherOutput\u003e(\n    name: \"get_current_weather\",\n    description: \"\"\"\n    Get the current weather for a city, \n    with conditions (\"sunny\", \"cloudy\", etc.)\n    and temperature in °C.\n    \"\"\",\n    parameters: [\n        \"city\": [\n            \"type\": \"string\",\n            \"description\": \"The city to get weather for\"\n        ]\n    ],\n    required: [\"city\"]\n) { input async throws -\u003e WeatherOutput in\n    // Implement weather lookup logic here\n    return WeatherOutput(temperature: 18.5, conditions: \"cloudy\")\n}\n```\n\n\u003e [!IMPORTANT]\n\u003e In version 1.3.0 and later, \n\u003e the `parameters` argument should contain only the properties object, \n\u003e not the full JSON schema of the tool. \n\u003e \n\u003e For backward compatibility, \n\u003e passing a full schema in the `parameters` argument \n\u003e (with `\"type\"`, `\"properties\"`, and `\"required\"` fields) \n\u003e is still supported but deprecated and will emit a warning in debug builds.\n\u003e\n\u003e \u003cdetails\u003e\n\u003e \u003csummary\u003eClick to see code examples of old vs. 
new format\u003c/summary\u003e\n\u003e\n\u003e ```swift\n\u003e // ✅ New format\n\u003e let weatherTool = Tool\u003cWeatherInput, WeatherOutput\u003e(\n\u003e     name: \"get_current_weather\",\n\u003e     description: \"Get the current weather for a city\",\n\u003e     parameters: [\n\u003e         \"city\": [\n\u003e             \"type\": \"string\",\n\u003e             \"description\": \"The city to get weather for\"\n\u003e         ]\n\u003e     ],\n\u003e     required: [\"city\"]\n\u003e ) { /* implementation */ }\n\u003e\n\u003e // ❌ Deprecated format (still works but not recommended)\n\u003e let weatherTool = Tool\u003cWeatherInput, WeatherOutput\u003e(\n\u003e     name: \"get_current_weather\",\n\u003e     description: \"Get the current weather for a city\",\n\u003e     parameters: [\n\u003e         \"type\": \"object\",\n\u003e         \"properties\": [\n\u003e             \"city\": [\n\u003e                 \"type\": \"string\",\n\u003e                 \"description\": \"The city to get weather for\"\n\u003e             ]\n\u003e         ],\n\u003e         \"required\": [\"city\"]\n\u003e     ]\n\u003e ) { /* implementation */ }\n\u003e ```\n\u003e \u003c/details\u003e\n\n#### Using Tools in Chat\n\nProvide tools to the model during chat:\n\n```swift\nlet messages: [Chat.Message] = [\n    .system(\"You are a helpful assistant that can check the weather.\"),\n    .user(\"What's the weather like in Portland?\")\n]\n\nlet response = try await client.chat(\n    model: \"llama3.1\",\n    messages: messages,\n    tools: [weatherTool]\n)\n\n// Handle tool calls in the response\nif let toolCalls = response.message.toolCalls {\n    for toolCall in toolCalls {\n        print(\"Tool called: \\(toolCall.function.name)\")\n        print(\"Arguments: \\(toolCall.function.arguments)\")\n    }\n}\n```\n\n#### Multi-turn Tool Conversations\n\nTools can be used in multi-turn conversations, where the model can use tool results to provide more detailed responses:\n\n```swift\nvar 
messages: [Chat.Message] = [\n    .system(\"You are a helpful assistant that can convert colors.\"),\n    .user(\"What's the hex code for yellow?\")\n]\n\n// First turn - model calls the tool\nlet response1 = try await client.chat(\n    model: \"llama3.1\",\n    messages: messages,\n    tools: [rgbToHexTool]\n)\n\nenum ToolError: Error {\n    case invalidParameters\n}\n\n// Add tool response to conversation\nif let toolCall = response1.message.toolCalls?.first {\n    // Parse the tool arguments\n    guard let args = toolCall.function.arguments,\n          let red = Double(args[\"red\"], strict: false),\n          let green = Double(args[\"green\"], strict: false),\n          let blue = Double(args[\"blue\"], strict: false)\n    else {\n        throw ToolError.invalidParameters\n    }\n    \n    let input = HexColorInput(\n        red: red,\n        green: green,\n        blue: blue\n    )\n    \n    // Execute the tool with the input\n    let hexColor = try await rgbToHexTool(input)\n    \n    // Add the tool result to the conversation\n    messages.append(.tool(hexColor))\n}\n\n// Continue conversation with tool result\nmessages.append(.user(\"What other colors are similar?\"))\nlet response2 = try await client.chat(\n    model: \"llama3.1\",\n    messages: messages,\n    tools: [rgbToHexTool]\n)\n```\n\n### Generating embeddings\n\nGenerate embeddings for a given text:\n\n```swift\ndo {\n    let embeddings = try await client.embed(\n        model: \"llama3.2\",\n        input: \"Here is an article about llamas...\"\n    )\n    print(\"Embeddings: \\(embeddings)\")\n} catch {\n    print(\"Error: \\(error)\")\n}\n```\n\n### Managing models\n\n#### Listing models\n\nList available models:\n\n```swift\ndo {\n    let models = try await client.listModels()\n    for model in models {\n        print(\"Model: \\(model.name), Modified: \\(model.modifiedAt)\")\n    }\n} catch {\n    print(\"Error: \\(error)\")\n}\n```\n\n#### Retrieving model information\n\nGet detailed information about a specific 
model:\n\n```swift\ndo {\n    let modelInfo = try await client.showModel(\"llama3.2\")\n    print(\"Modelfile: \\(modelInfo.modelfile)\")\n    print(\"Parameters: \\(modelInfo.parameters)\")\n    print(\"Template: \\(modelInfo.template)\")\n} catch {\n    print(\"Error: \\(error)\")\n}\n```\n\n#### Pulling a model\n\nDownload a model from the Ollama library:\n\n```swift\ndo {\n    let success = try await client.pullModel(\"llama3.2\")\n    if success {\n        print(\"Model successfully pulled\")\n    } else {\n        print(\"Failed to pull model\")\n    }\n} catch {\n    print(\"Error: \\(error)\")\n}\n```\n\n#### Pushing a model\n\nUpload a model to the Ollama library:\n\n```swift\ndo {\n    let success = try await client.pushModel(\"mynamespace/mymodel:latest\")\n    if success {\n        print(\"Model successfully pushed\")\n    } else {\n        print(\"Failed to push model\")\n    }\n} catch {\n    print(\"Error: \\(error)\")\n}\n```\n","funding_links":[],"categories":["Swift"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Floopwork-ai%2Follama-swift","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Floopwork-ai%2Follama-swift","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Floopwork-ai%2Follama-swift/lists"}