{"id":47351538,"url":"https://github.com/zendev-sh/goai","last_synced_at":"2026-04-12T11:07:03.145Z","repository":{"id":345116862,"uuid":"1184728602","full_name":"zendev-sh/goai","owner":"zendev-sh","description":"Go AI SDK, the Go way.  One unified API across 21+ providers. Streaming, structured output, MCP support, stdlib only. Go AI SDK for AI applications inspired by Vercel AI SDK.","archived":false,"fork":false,"pushed_at":"2026-04-03T00:39:40.000Z","size":16567,"stargazers_count":19,"open_issues_count":1,"forks_count":4,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-03T00:40:42.983Z","etag":null,"topics":["ai","ai-agents","ai-sdk","ai-tools","anthropic","azure-op","bedro","cerebra","gemini","go-ai-sdk","llm","minimax","openai","openrouter","sdk-go","vercel","vertex-ai"],"latest_commit_sha":null,"homepage":"https://goai.sh/","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zendev-sh.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":"ROADMAP.md","authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2026-03-17T21:57:16.000Z","updated_at":"2026-04-02T09:22:03.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/zendev-sh/goai","commit_stats":null,"previous_names":["zendev-sh/goai"],"tags_count":16,"template":false,"template_full_name":null,"purl":"pkg:github/zendev-sh/goai","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zendev-sh%2Fgoai","tags_url":"https://repos.e
cosyste.ms/api/v1/hosts/GitHub/repositories/zendev-sh%2Fgoai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zendev-sh%2Fgoai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zendev-sh%2Fgoai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zendev-sh","download_url":"https://codeload.github.com/zendev-sh/goai/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zendev-sh%2Fgoai/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31398792,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-04T10:20:44.708Z","status":"ssl_error","status_checked_at":"2026-04-04T10:20:06.846Z","response_time":60,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","ai-agents","ai-sdk","ai-tools","anthropic","azure-op","bedro","cerebra","gemini","go-ai-sdk","llm","minimax","openai","openrouter","sdk-go","vercel","vertex-ai"],"created_at":"2026-03-18T00:04:58.358Z","updated_at":"2026-04-12T11:07:03.132Z","avatar_url":"https://github.com/zendev-sh.png","language":"Go","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"goai.png\" alt=\"GoAI\" width=\"400\"\u003e\n\u003c/p\u003e\n\n\u003ch1 align=\"center\"\u003eGoAI\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\u003cem\u003eAI SDK, the Go 
way.\u003c/em\u003e\u003c/p\u003e\n\u003cp align=\"center\"\u003eGo SDK for building AI applications. One SDK, 22+ providers, MCP support.\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"bench/RESULTS.md\"\u003e\u003cimg src=\"https://img.shields.io/badge/streaming-1.1x_faster-brightgreen\" alt=\"Streaming\"\u003e\u003c/a\u003e\n  \u003ca href=\"bench/RESULTS.md\"\u003e\u003cimg src=\"https://img.shields.io/badge/cold_start-24x_faster-brightgreen\" alt=\"Cold Start\"\u003e\u003c/a\u003e\n  \u003ca href=\"bench/RESULTS.md\"\u003e\u003cimg src=\"https://img.shields.io/badge/memory-3.1x_less-brightgreen\" alt=\"Memory\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\u003cstrong\u003e1.1x faster streaming\u003c/strong\u003e, \u003cstrong\u003e24x faster cold start\u003c/strong\u003e, \u003cstrong\u003e3.1x less memory\u003c/strong\u003e vs Vercel AI SDK (\u003ca href=\"bench/RESULTS.md\"\u003ebenchmarks\u003c/a\u003e)\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://goai.sh\"\u003eWebsite\u003c/a\u003e \u0026middot;\n  \u003ca href=\"https://goai.sh/getting-started/installation\"\u003eDocs\u003c/a\u003e \u0026middot;\n  \u003ca href=\"https://goai.sh/architecture\"\u003eArchitecture\u003c/a\u003e \u0026middot;\n  \u003ca href=\"https://goai.sh/providers/\"\u003eProviders\u003c/a\u003e \u0026middot;\n  \u003ca href=\"https://goai.sh/examples\"\u003eExamples\u003c/a\u003e\n\u003c/p\u003e\n\n---\n\nInspired by the [Vercel AI SDK](https://sdk.vercel.ai). The same clean abstractions, idiomatically adapted for Go with generics, interfaces, and functional options.\n\n## What's New\n\n\u003e **v0.6.0** - OpenTelemetry tracing + metrics, context propagation via RequestInfo.Ctx, Langfuse data race fix. [Changelog →](https://github.com/zendev-sh/goai/releases)\n\u003e\n\u003e **v0.5.8** - RunPod provider, Bedrock embeddings, and docs accuracy improvements. 
[Changelog →](https://github.com/zendev-sh/goai/releases)\n\u003e\n\u003e **v0.5.1** - MCP (Model Context Protocol) client plus MiniMax provider support. [Docs →](https://goai.sh/concepts/mcp)\n\n## Features\n\n- **7 core functions**: `GenerateText`, `StreamText`, `GenerateObject[T]`, `StreamObject[T]`, `Embed`, `EmbedMany`, `GenerateImage`\n- **22+ providers**: OpenAI, Anthropic, Google, Bedrock, Azure, Vertex, Mistral, xAI, Groq, Cohere, DeepSeek, MiniMax, Fireworks, Together, DeepInfra, OpenRouter, Perplexity, Cerebras, Ollama, vLLM, RunPod, + generic OpenAI-compatible\n- **Auto tool loop**: Define tools with `Execute` handlers, set `MaxSteps` for `GenerateText` and `StreamText`\n- **Structured output**: `GenerateObject[T]` auto-generates JSON Schema from Go types via reflection\n- **Streaming**: Real-time text and partial object streaming via channels\n- **Dynamic auth**: `TokenSource` interface for OAuth, rotating keys, cloud IAM, with `CachedTokenSource` for TTL-based caching\n- **Prompt caching**: Automatic cache control for supported providers (Anthropic, Bedrock, MiniMax)\n- **Citations/sources**: Grounding and inline citations from xAI, Perplexity, Google, OpenAI\n- **Web search**: Built-in web search tools for OpenAI, Anthropic, Google, Groq. Model decides when to search\n- **Code execution**: Server-side Python sandboxes via OpenAI, Anthropic, Google. 
No local setup\n- **Computer use**: Anthropic computer, bash, text editor tools for autonomous desktop interaction\n- **20 provider-defined tools**: Web fetch, file search, image generation, X search, and more - [full list](#provider-defined-tools)\n- **MCP client**: Connect to any MCP server (stdio, HTTP, SSE), auto-convert tools for use with GoAI\n- **Observability**: Built-in Langfuse and OpenTelemetry (OTel) integrations for tracing generations, tools, and multi-step loops\n- **9 lifecycle hooks**: Observability (`OnRequest`, `OnResponse`, `OnToolCallStart`, `OnToolCall`, `OnStepFinish`, `OnFinish`) and interceptor (`OnBeforeToolExecute`, `OnAfterToolExecute`, `OnBeforeStep`) hooks for permission gates, secret scanning, output transformation, and loop control\n- **Retry/backoff**: Automatic retry with exponential backoff on retryable HTTP errors (429/5xx)\n- **Minimal dependencies**: Core depends on `golang.org/x/oauth2` + one indirect (`cloud.google.com/go/compute/metadata`). Optional `observability/otel` submodule uses separate `go.mod` with OTel SDK.\n\n## Performance vs Vercel AI SDK\n\n| Metric               | GoAI   | Vercel AI SDK | Improvement |\n| -------------------- | ------ | ------------- | ----------- |\n| Streaming throughput | 1.46ms | 1.62ms        | 1.1x faster |\n| Cold start           | 569us  | 13.9ms        | 24x faster  |\n| Memory (1 stream)    | 220KB  | 676KB         | 3.1x less   |\n| GenerateText         | 56us   | 79us          | 1.4x faster |\n\n\u003e Mock HTTP server, identical SSE fixtures, Apple M2. [Full report](bench/RESULTS.md)\n\n## Install\n\n```bash\ngo get github.com/zendev-sh/goai@latest\n```\n\nRequires Go 1.25+.\n\n## Quick Start\n\nMost hosted providers auto-resolve API keys from environment variables. 
Local/custom providers may require explicit options:\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com/zendev-sh/goai\"\n\t\"github.com/zendev-sh/goai/provider/openai\"\n)\n\nfunc main() {\n\t// Reads OPENAI_API_KEY from environment automatically.\n\tmodel := openai.Chat(\"gpt-4o\")\n\n\tresult, err := goai.GenerateText(context.Background(), model,\n\t\tgoai.WithPrompt(\"What is the capital of France?\"),\n\t)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Println(result.Text)\n}\n```\n\n## Streaming\n\n```go\nctx := context.Background()\n\nstream, err := goai.StreamText(ctx, model,\n\tgoai.WithSystem(\"You are a helpful assistant.\"),\n\tgoai.WithPrompt(\"Write a haiku about Go.\"),\n)\nif err != nil {\n\tlog.Fatal(err)\n}\n\nfor text := range stream.TextStream() {\n\tfmt.Print(text)\n}\n\nresult := stream.Result()\nif err := stream.Err(); err != nil {\n\tlog.Fatal(err)\n}\nfmt.Printf(\"\\nTokens: %d in, %d out\\n\",\n\tresult.TotalUsage.InputTokens, result.TotalUsage.OutputTokens)\n```\n\nStreaming with tools:\n\n```go\nimport \"github.com/zendev-sh/goai/provider\"\n\nstream, err := goai.StreamText(ctx, model,\n\tgoai.WithPrompt(\"What's the weather in Tokyo?\"),\n\tgoai.WithTools(weatherTool),\n\tgoai.WithMaxSteps(5),\n)\nfor chunk := range stream.Stream() {\n\tswitch chunk.Type {\n\tcase provider.ChunkText:\n\t\tfmt.Print(chunk.Text)\n\tcase provider.ChunkStepFinish:\n\t\tfmt.Println(\"\\n[step complete]\")\n\t}\n}\n```\n\n## Structured Output\n\nAuto-generates JSON Schema from Go types. 
Works with OpenAI, Anthropic, and Google.\n\n```go\ntype Recipe struct {\n\tName        string   `json:\"name\" jsonschema:\"description=Recipe name\"`\n\tIngredients []string `json:\"ingredients\"`\n\tSteps       []string `json:\"steps\"`\n\tDifficulty  string   `json:\"difficulty\" jsonschema:\"enum=easy|medium|hard\"`\n}\n\nresult, err := goai.GenerateObject[Recipe](ctx, model,\n\tgoai.WithPrompt(\"Give me a recipe for chocolate chip cookies\"),\n)\nif err != nil {\n\tlog.Fatal(err)\n}\nfmt.Printf(\"Recipe: %s (%s)\\n\", result.Object.Name, result.Object.Difficulty)\n```\n\nStreaming partial objects:\n\n```go\nstream, err := goai.StreamObject[Recipe](ctx, model,\n\tgoai.WithPrompt(\"Give me a recipe for pancakes\"),\n)\nif err != nil {\n\tlog.Fatal(err)\n}\nfor partial := range stream.PartialObjectStream() {\n\tfmt.Printf(\"\\r%s (%d ingredients so far)\", partial.Name, len(partial.Ingredients))\n}\nfinal, err := stream.Result()\n```\n\n## Tools\n\nDefine tools with JSON Schema and an `Execute` handler. 
Set `MaxSteps` to enable the auto tool loop.\n\n```go\nimport \"encoding/json\"\n\nweatherTool := goai.Tool{\n\tName:        \"get_weather\",\n\tDescription: \"Get the current weather for a city.\",\n\tInputSchema: json.RawMessage(`{\n\t\t\"type\": \"object\",\n\t\t\"properties\": {\"city\": {\"type\": \"string\", \"description\": \"City name\"}},\n\t\t\"required\": [\"city\"]\n\t}`),\n\tExecute: func(ctx context.Context, input json.RawMessage) (string, error) {\n\t\tvar args struct{ City string `json:\"city\"` }\n\t\tif err := json.Unmarshal(input, \u0026args); err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\treturn fmt.Sprintf(\"22°C and sunny in %s\", args.City), nil\n\t},\n}\n\nresult, err := goai.GenerateText(ctx, model,\n\tgoai.WithPrompt(\"What's the weather in Tokyo?\"),\n\tgoai.WithTools(weatherTool),\n\tgoai.WithMaxSteps(3),\n)\nif err != nil {\n\tlog.Fatal(err)\n}\nfmt.Println(result.Text) // \"It's 22°C and sunny in Tokyo.\"\n```\n\n## MCP (Model Context Protocol)\n\nConnect to any MCP server and use its tools with GoAI. 
Supports stdio, Streamable HTTP, and legacy SSE transports.\n\n```go\nimport \"github.com/zendev-sh/goai/mcp\"\n\n// Connect to any MCP server\ntransport := mcp.NewStdioTransport(\"npx\", []string{\"-y\", \"@modelcontextprotocol/server-filesystem\", \".\"})\nclient := mcp.NewClient(\"my-app\", \"1.0\", mcp.WithTransport(transport))\n_ = client.Connect(ctx)\ndefer client.Close()\n\n// Use MCP tools with GoAI\ntools, _ := client.ListTools(ctx, nil)\ngoaiTools := mcp.ConvertTools(client, tools.Tools)\n\nresult, _ := goai.GenerateText(ctx, model,\n    goai.WithTools(goaiTools...),\n    goai.WithPrompt(\"List files in the current directory\"),\n    goai.WithMaxSteps(5),\n)\n```\n\nSee [examples/mcp-tools](examples/mcp-tools/) and the [MCP documentation](https://goai.sh/concepts/mcp) for more.\n\n## Citations / Sources\n\nProviders that support grounding (Google, xAI, Perplexity) or inline citations (OpenAI) return sources:\n\n```go\nresult, err := goai.GenerateText(ctx, model,\n\tgoai.WithPrompt(\"What were the major news events today?\"),\n)\nif err != nil {\n\tlog.Fatal(err)\n}\n\nif len(result.Sources) \u003e 0 {\n\tfor _, s := range result.Sources {\n\t\tfmt.Printf(\"[%s] %s - %s\\n\", s.Type, s.Title, s.URL)\n\t}\n}\n\n// Sources are also available per-step in multi-step tool loops.\nfor _, step := range result.Steps {\n\tfor _, s := range step.Sources {\n\t\tfmt.Printf(\"  Step source: %s\\n\", s.URL)\n\t}\n}\n```\n\n## Computer Use\n\nSee [Provider-Defined Tools \u003e Computer Use](#computer-use-1) and [examples/computer-use](examples/computer-use/) for Anthropic computer, bash, and text editor tools. 
Works with both Anthropic direct API and Bedrock.\n\n## Embeddings\n\n```go\nctx := context.Background()\nmodel := openai.Embedding(\"text-embedding-3-small\")\n\n// Single\nresult, err := goai.Embed(ctx, model, \"Hello world\")\nif err != nil {\n\tlog.Fatal(err)\n}\nfmt.Printf(\"Dimensions: %d\\n\", len(result.Embedding))\n\n// Batch (auto-chunked, parallel)\nmany, err := goai.EmbedMany(ctx, model, []string{\"foo\", \"bar\", \"baz\"},\n\tgoai.WithMaxParallelCalls(4),\n)\nif err != nil {\n\tlog.Fatal(err)\n}\n```\n\n## Image Generation\n\n```go\nctx := context.Background()\nmodel := openai.Image(\"gpt-image-1\")\n\nresult, err := goai.GenerateImage(ctx, model,\n\tgoai.WithImagePrompt(\"A sunset over mountains, oil painting style\"),\n\tgoai.WithImageSize(\"1024x1024\"),\n)\nif err != nil {\n\tlog.Fatal(err)\n}\nos.WriteFile(\"sunset.png\", result.Images[0].Data, 0644)\n```\n\nAlso supported: Google Imagen (`google.Image(\"imagen-4.0-generate-001\")`) and Vertex AI (`vertex.Image(...)`).\n\n## Observability\n\nBuilt-in [Langfuse](https://langfuse.com) and [OpenTelemetry](https://opentelemetry.io) integrations. 
Nine lifecycle hooks cover the full generation pipeline -- observability providers use them to trace LLM calls, tool executions, and multi-step agent loops:\n\n```go\nimport \"github.com/zendev-sh/goai/observability/langfuse\"\n\n// Credentials from env: LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST\nresult, err := goai.GenerateText(ctx, model,\n    goai.WithPrompt(\"Hello\"),\n    goai.WithTools(weatherTool),\n    goai.WithMaxSteps(5),\n    langfuse.WithTracing(langfuse.TraceName(\"my-agent\")),\n)\n```\n\nInterceptor hooks let you control tool execution without modifying core code:\n\n```go\n// Permission gate: block dangerous tools\ngoai.WithOnBeforeToolExecute(func(info goai.BeforeToolExecuteInfo) goai.BeforeToolExecuteResult {\n    if info.ToolName == \"delete_file\" {\n        return goai.BeforeToolExecuteResult{Skip: true, Result: \"Permission denied.\"}\n    }\n    return goai.BeforeToolExecuteResult{}\n}),\n\n// Detect max-steps exhaustion\ngoai.WithOnFinish(func(info goai.FinishInfo) {\n    if info.StepsExhausted {\n        log.Printf(\"Loop exhausted after %d steps\", info.TotalSteps)\n    }\n}),\n```\n\nSee [examples/hooks](examples/hooks/), [examples/langfuse](examples/langfuse/), [examples/otel](examples/otel/), and the [observability docs](https://goai.sh/concepts/observability) for details.\n\n## Providers\n\nMany providers auto-resolve credentials from environment variables. 
Others (for example `ollama`, `vllm`, `compat`) use explicit options:\n\n```go\n// Auto-resolved: reads OPENAI_API_KEY from env\nmodel := openai.Chat(\"gpt-4o\")\n\n// Explicit key (overrides env)\nmodel := openai.Chat(\"gpt-4o\", openai.WithAPIKey(\"sk-...\"))\n\n// Cloud IAM auth (Vertex, Bedrock)\nmodel := vertex.Chat(\"gemini-2.5-pro\",\n\tvertex.WithProject(\"my-project\"),\n\tvertex.WithLocation(\"us-central1\"),\n)\n\n// AWS Bedrock (reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION from env)\nmodel := bedrock.Chat(\"anthropic.claude-sonnet-4-6-v1:0\")\n\n// Local (Ollama, vLLM)\nmodel := ollama.Chat(\"llama3\", ollama.WithBaseURL(\"http://localhost:11434/v1\"))\n\nresult, err := goai.GenerateText(ctx, model, goai.WithPrompt(\"Hello\"))\n```\n\n### Provider Table\n\n| Provider   | Chat                                                         | Embed                                                      | Image         | Auth                                                                                               | E2E  | Import                |\n| ---------- | ------------------------------------------------------------ | ---------------------------------------------------------- | ------------- | -------------------------------------------------------------------------------------------------- | ---- | --------------------- |\n| OpenAI     | `gpt-4o`, `o3`, `codex-*`                                    | `text-embedding-3-*`                                       | `gpt-image-1` | `OPENAI_API_KEY`, `OPENAI_BASE_URL`, TokenSource                                                   | Full | `provider/openai`     |\n| Anthropic  | `claude-*`                                                   | -                                                          | -             | `ANTHROPIC_API_KEY`, `ANTHROPIC_BASE_URL`, TokenSource                                             | Full | `provider/anthropic`  |\n| Google     | `gemini-*`                              
                     | `text-embedding-004`                                       | `imagen-*`    | `GOOGLE_GENERATIVE_AI_API_KEY` / `GEMINI_API_KEY`, TokenSource                                     | Full | `provider/google`     |\n| Bedrock    | `anthropic.*`, `meta.*`                                      | `titan-embed-*`, `cohere.embed-*`, `nova-2-*`, `marengo-*` | -             | AWS keys, `AWS_BEARER_TOKEN_BEDROCK`, `AWS_BEDROCK_BASE_URL`                                       | Full | `provider/bedrock`    |\n| Vertex     | `gemini-*`                                                   | `text-embedding-004`                                       | `imagen-*`    | TokenSource, ADC, or `GOOGLE_API_KEY` / `GEMINI_API_KEY` / `GOOGLE_GENERATIVE_AI_API_KEY` fallback | Unit | `provider/vertex`     |\n| Azure      | `gpt-4o`, `claude-*`                                         | -                                                          | via Azure     | `AZURE_OPENAI_API_KEY`, TokenSource                                                                | Full | `provider/azure`      |\n| OpenRouter | various                                                      | -                                                          | -             | `OPENROUTER_API_KEY`, TokenSource                                                                  | Unit | `provider/openrouter` |\n| Mistral    | `mistral-large`, `magistral-*`                               | -                                                          | -             | `MISTRAL_API_KEY`, TokenSource                                                                     | Full | `provider/mistral`    |\n| Groq       | `mixtral-*`, `llama-*`                                       | -                                                          | -             | `GROQ_API_KEY`, TokenSource                                                                        | Full | `provider/groq`       |\n| xAI        | `grok-*`                
                                     | -                                                          | -             | `XAI_API_KEY`, TokenSource                                                                         | Unit | `provider/xai`        |\n| Cohere     | `command-r-*`                                                | `embed-*`                                                  | -             | `COHERE_API_KEY`, TokenSource                                                                      | Unit | `provider/cohere`     |\n| DeepSeek   | `deepseek-*`                                                 | -                                                          | -             | `DEEPSEEK_API_KEY`, TokenSource                                                                    | Unit | `provider/deepseek`   |\n| MiniMax    | `MiniMax-M2.7`, `MiniMax-M2.5`, `MiniMax-M2.1`, `MiniMax-M2` | -                                                          | -             | `MINIMAX_API_KEY`, `MINIMAX_BASE_URL`, TokenSource                                                 | Full | `provider/minimax`    |\n| Fireworks  | various                                                      | -                                                          | -             | `FIREWORKS_API_KEY`, TokenSource                                                                   | Unit | `provider/fireworks`  |\n| Together   | various                                                      | -                                                          | -             | `TOGETHER_AI_API_KEY` (or `TOGETHER_API_KEY`), TokenSource                                         | Unit | `provider/together`   |\n| DeepInfra  | various                                                      | -                                                          | -             | `DEEPINFRA_API_KEY`, TokenSource                                                                   | Unit | `provider/deepinfra`  |\n| Perplexity | 
`sonar-*`                                                    | -                                                          | -             | `PERPLEXITY_API_KEY`, TokenSource                                                                  | Unit | `provider/perplexity` |\n| Cerebras   | `llama-*`                                                    | -                                                          | -             | `CEREBRAS_API_KEY`, TokenSource                                                                    | Unit | `provider/cerebras`   |\n| Ollama     | local models                                                 | local models                                               | -             | none                                                                                               | Unit | `provider/ollama`     |\n| vLLM       | local models                                                 | local models                                               | -             | Optional auth via `WithAPIKey` / `WithTokenSource`                                                 | Unit | `provider/vllm`       |\n| RunPod     | any vLLM model                                               | -                                                          | -             | `RUNPOD_API_KEY`, TokenSource                                                                      | Unit | `provider/runpod`     |\n| Compat     | any OpenAI-compatible                                        | any                                                        | -             | configurable                                                                                       | Unit | `provider/compat`     |\n\n**E2E column**: \"Full\" = tested with real API calls. 
\"Unit\" = tested with mock HTTP servers (100% coverage).\n\n### Tested Models\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eE2E tested - 103 models across 7 providers\u003c/strong\u003e (real API calls, click to expand)\u003c/summary\u003e\n\nLast run: 2026-03-27. 103 models tested (generate + stream).\n\n| Provider     | Models E2E tested (generate + stream)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |\n| ------------ | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| Google (9)   | `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-2.5-pro` (stream), `gemini-3-flash-preview`, `gemini-3-pro-preview`, `gemini-3.1-pro-preview`, `gemini-2.0-flash`, `gemini-flash-latest`, `gemini-flash-lite-latest`                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |\n| Azure (21)   | `claude-opus-4-6`, `claude-sonnet-4-6`, `DeepSeek-V3.2`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-5`, `gpt-5-codex`, `gpt-5-mini`, `gpt-5-pro`, `gpt-5.1`, 
`gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.3-codex`, `gpt-5.4`, `gpt-5.4-pro`, `Kimi-K2.5`, `model-router`, `o3`                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |\n| Bedrock (61) | **Anthropic**: `claude-sonnet-4-6`, `claude-sonnet-4-5`, `claude-sonnet-4`, `claude-opus-4-6-v1`, `claude-opus-4-5`, `claude-opus-4-1`, `claude-haiku-4-5`, `claude-3-5-sonnet`, `claude-3-5-haiku`, `claude-3-haiku` · **Amazon**: `nova-micro`, `nova-lite`, `nova-pro`, `nova-premier`, `nova-2-lite` · **Meta**: `llama4-scout`, `llama4-maverick`, `llama3-3-70b`, `llama3-2-{90,11,3,1}b`, `llama3-1-{70,8}b`, `llama3-{70,8}b` · **Mistral**: `mistral-large`, `mixtral-8x7b`, `mistral-7b`, `ministral-3-{14,8}b`, `voxtral-{mini,small}` · **Others**: `deepseek.v3`, `deepseek.r1`, `ai21.jamba-1-5-{mini,large}`, `cohere.command-r{-plus,}`, `google.gemma-3-{4,12,27}b`, `minimax.{m2,m2.1}`, `moonshotai.kimi-k2{-thinking,.5}`, `nvidia.nemotron-nano-{12,9}b`, `openai.gpt-oss-{120,20}b{,-safeguard}`, `qwen.qwen3-{32,235,coder-30,coder-480}b`, `qwen.qwen3-next-80b`, `writer.palmyra-{x4,x5}`, `zai.glm-4.7{,-flash}` |\n| Groq (2)     | `llama-3.1-8b-instant`, `llama-3.3-70b-versatile`                                                                                                                                                                                                                                                   
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |\n| Mistral (5)  | `mistral-small-latest`, `mistral-large-latest`, `devstral-small-2507`, `codestral-latest`, `magistral-medium-latest`                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |\n| Cerebras (1) | `llama3.1-8b`                                                                                                                                                                                                                                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |\n| MiniMax (4)  | `MiniMax-M2.7`, `MiniMax-M2.5`, `MiniMax-M2.1`, `MiniMax-M2` (generate + stream + tools + thinking)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eUnit tested\u003c/strong\u003e (mock HTTP server, 100% coverage, click to expand)\u003c/summary\u003e\n\n| Provider   | Models in unit tests                                                                               |\n| ---------- | -------------------------------------------------------------------------------------------------- |\n| OpenAI     | `gpt-4o`, `o3`, `text-embedding-3-small`, `dall-e-3`, `gpt-image-1`                                |\n| Anthropic  | `claude-sonnet-4-20250514`, 
`claude-sonnet-4-5-20241022`, `claude-sonnet-4-6-20260310`             |\n| Google     | `gemini-2.5-flash`, `gemini-2.5-flash-image`, `imagen-4.0-fast-generate-001`, `text-embedding-004` |\n| Bedrock    | `us.anthropic.claude-sonnet-4-6`, `anthropic.claude-sonnet-4-20250514-v1:0`, `meta.llama3-70b`     |\n| Azure      | `gpt-4o`, `gpt-5.2-chat`, `dall-e-3`, `claude-sonnet-4-6`                                          |\n| Vertex     | `gemini-2.5-pro`, `imagen-3.0-generate-002`, `text-embedding-004`                                  |\n| Cohere     | `command-r-plus`, `command-a-reasoning`, `embed-v4.0`                                              |\n| Mistral    | `mistral-large-latest`                                                                             |\n| Groq       | `llama-3.3-70b-versatile`                                                                          |\n| xAI        | `grok-3`                                                                                           |\n| DeepSeek   | `deepseek-chat`                                                                                    |\n| DeepInfra  | `meta-llama/Llama-3.3-70B-Instruct`                                                                |\n| Fireworks  | `accounts/fireworks/models/llama-v3p3-70b-instruct`                                                |\n| OpenRouter | `anthropic/claude-sonnet-4`                                                                        |\n| Perplexity | `sonar-pro`                                                                                        |\n| Together   | `meta-llama/Llama-3.3-70B-Instruct-Turbo`                                                          |\n| Cerebras   | `llama-3.3-70b`                                                                                    |\n| Ollama     | `llama3`, `llama3.2:1b`, `nomic-embed-text`                                                        |\n| vLLM       | `meta-llama/Llama-3-8b`                
                                                            |\n| RunPod     | `meta-llama/Llama-3.3-70B-Instruct`                                                                |\n\n\u003c/details\u003e\n\n### Custom / Self-Hosted\n\nUse the `compat` provider for any OpenAI-compatible endpoint:\n\n```go\nmodel := compat.Chat(\"my-model\",\n\tcompat.WithBaseURL(\"https://my-api.example.com/v1\"),\n\tcompat.WithAPIKey(\"...\"),\n)\n```\n\n### Dynamic Auth with TokenSource\n\nFor OAuth, rotating keys, or cloud IAM:\n\n```go\nts := provider.CachedTokenSource(func(ctx context.Context) (*provider.Token, error) {\n\ttok, err := fetchOAuthToken(ctx)\n\treturn \u0026provider.Token{\n\t\tValue:     tok.AccessToken,\n\t\tExpiresAt: tok.Expiry,\n\t}, err\n})\n\nmodel := openai.Chat(\"gpt-4o\", openai.WithTokenSource(ts))\n```\n\n`CachedTokenSource` handles TTL-based caching (zero ExpiresAt = cache forever), thread-safe refresh without holding locks during network calls, and manual token invalidation via the `InvalidatingTokenSource` interface.\n\n### AWS Bedrock\n\nNative Converse API with SigV4 signing (no AWS SDK dependency). Supports cross-region inference fallback, extended thinking, and image/document input:\n\n```go\nmodel := bedrock.Chat(\"anthropic.claude-sonnet-4-6-v1:0\",\n\tbedrock.WithRegion(\"us-west-2\"),\n\tbedrock.WithReasoningConfig(bedrock.ReasoningConfig{\n\t\tType:         bedrock.ReasoningEnabled,\n\t\tBudgetTokens: 4096,\n\t}),\n)\n```\n\nAuto-resolves `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION` from environment. 
Cross-region fallback retries with `us.` prefix on model ID mismatch errors.\n\n### Azure OpenAI\n\nSupports both OpenAI models (GPT, o-series) and Claude models (routed to Azure Anthropic endpoint automatically):\n\n```go\n// OpenAI models\nmodel := azure.Chat(\"gpt-4o\",\n\tazure.WithEndpoint(\"https://my-resource.openai.azure.com\"),\n)\n\n// Claude models (auto-routed to Anthropic endpoint)\nclaudeModel := azure.Chat(\"claude-sonnet-4-6\",\n\tazure.WithEndpoint(\"https://my-resource.openai.azure.com\"),\n)\n```\n\nAuto-resolves `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT` (or `AZURE_RESOURCE_NAME`) from environment.\n\n### Response Metadata\n\nEvery result includes provider response metadata:\n\n```go\nresult, _ := goai.GenerateText(ctx, model, goai.WithPrompt(\"Hello\"))\nfmt.Printf(\"Request ID: %s\\n\", result.Response.ID)\nfmt.Printf(\"Model used: %s\\n\", result.Response.Model)\n```\n\n## Options Reference\n\n### Generation Options\n\n| Option                    | Description                              | Default          |\n| ------------------------- | ---------------------------------------- | ---------------- |\n| `WithSystem(s)`           | System prompt                            | -                |\n| `WithPrompt(s)`           | Single user message                      | -                |\n| `WithMessages(...)`       | Conversation history                     | -                |\n| `WithTools(...)`          | Available tools                          | -                |\n| `WithMaxOutputTokens(n)`  | Response length limit                    | provider default |\n| `WithTemperature(t)`      | Randomness (0.0-2.0)                     | provider default |\n| `WithTopP(p)`             | Nucleus sampling                         | provider default |\n| `WithTopK(k)`             | Top-K sampling                           | provider default |\n| `WithFrequencyPenalty(p)` | Frequency penalty                        | provider default |\n| 
`WithPresencePenalty(p)`  | Presence penalty                         | provider default |\n| `WithSeed(s)`             | Deterministic generation                 | -                |\n| `WithStopSequences(...)`  | Stop triggers                            | -                |\n| `WithMaxSteps(n)`         | Tool loop iterations                     | 1 (no loop)      |\n| `WithMaxRetries(n)`       | Retries on 429/5xx                       | 2                |\n| `WithTimeout(d)`          | Overall timeout                          | none             |\n| `WithHeaders(h)`          | Per-request HTTP headers                 | -                |\n| `WithProviderOptions(m)`  | Provider-specific params                 | -                |\n| `WithPromptCaching(b)`    | Enable prompt caching                    | false            |\n| `WithToolChoice(tc)`      | \"auto\", \"none\", \"required\", or tool name | -                |\n\n### Lifecycle Hooks\n\n| Option                          | Description                                                      |\n| ------------------------------- | ---------------------------------------------------------------- |\n| `WithOnRequest(fn)`             | Called before each API call                                      |\n| `WithOnResponse(fn)`            | Called after each API call                                       |\n| `WithOnToolCallStart(fn)`       | Called before each tool execution begins                         |\n| `WithOnToolCall(fn)`            | Called after each tool execution                                 |\n| `WithOnStepFinish(fn)`          | Called after each tool loop step                                 |\n| `WithOnFinish(fn)`              | Called once after all steps complete (carries `StepsExhausted`)  |\n| `WithOnBeforeToolExecute(fn)`   | Intercept before tool Execute -- can skip, override ctx/input    |\n| `WithOnAfterToolExecute(fn)`    | Intercept after tool Execute -- can modify output/error          
|\n| `WithOnBeforeStep(fn)`          | Intercept before step 2+ -- can inject messages or stop loop    |\n\n### Structured Output Options\n\n| Option                  | Description                                   |\n| ----------------------- | --------------------------------------------- |\n| `WithExplicitSchema(s)` | Override auto-generated JSON Schema           |\n| `WithSchemaName(n)`     | Schema name for provider (default \"response\") |\n\n### Embedding Options\n\n| Option                            | Description               | Default |\n| --------------------------------- | ------------------------- | ------- |\n| `WithMaxParallelCalls(n)`         | Batch parallelism         | 4       |\n| `WithEmbeddingProviderOptions(m)` | Embedding provider params | -       |\n\n### Image Options\n\n| Option                        | Description                    |\n| ----------------------------- | ------------------------------ |\n| `WithImagePrompt(s)`          | Text description               |\n| `WithImageCount(n)`           | Number of images               |\n| `WithImageSize(s)`            | Dimensions (e.g., \"1024x1024\") |\n| `WithAspectRatio(s)`          | Aspect ratio (e.g., \"16:9\")    |\n| `WithImageMaxRetries(n)`      | Retries on 429/5xx             |\n| `WithImageTimeout(d)`         | Overall timeout                |\n| `WithImageProviderOptions(m)` | Image provider params          |\n\n## Error Handling\n\nGoAI generation and image APIs return typed errors for actionable failure modes (MCP client APIs return `*mcp.MCPError`):\n\n```go\nresult, err := goai.GenerateText(ctx, model, goai.WithPrompt(\"...\"))\nif err != nil {\n\tvar overflow *goai.ContextOverflowError\n\tvar apiErr *goai.APIError\n\tswitch {\n\tcase errors.As(err, \u0026overflow):\n\t\t// Prompt too long - truncate and retry\n\tcase errors.As(err, \u0026apiErr):\n\t\tif apiErr.IsRetryable {\n\t\t\t// 429 rate limit, 503 - already retried MaxRetries times\n\t\t}\n\t\tfmt.Printf(\"API 
error %d: %s\\n\", apiErr.StatusCode, apiErr.Message)\n\t\t// HTTP API errors include ResponseBody and ResponseHeaders for debugging\n\tdefault:\n\t\t// Network error, context cancelled, etc.\n\t}\n}\n```\n\nError types:\n\n| Type                   | Fields                                                                    | When                                |\n| ---------------------- | ------------------------------------------------------------------------- | ----------------------------------- |\n| `APIError`             | `StatusCode`, `Message`, `IsRetryable`, `ResponseBody`, `ResponseHeaders` | Non-2xx API responses               |\n| `ContextOverflowError` | `Message`, `ResponseBody`                                                 | Prompt exceeds model context window |\n\nRetry behavior: automatic exponential backoff on retryable HTTP errors (429/5xx, plus OpenAI 404 propagation). `retry-after-ms` and numeric `Retry-After` (seconds) are respected. Retries apply to request-level failures (including initial stream connection), not mid-stream error events.\n\n## Provider-Defined Tools\n\nProviders expose built-in tools that the model can invoke server-side. 
GoAI supports 20 provider-defined tools across 5 providers:\n\n| Provider  | Tools                                                                                                                                                                  | Import               |\n| --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------- |\n| Anthropic | `Computer`, `Computer_20251124`, `Bash`, `TextEditor`, `TextEditor_20250728`, `WebSearch`, `WebSearch_20260209`, `WebFetch`, `CodeExecution`, `CodeExecution_20250825` | `provider/anthropic` |\n| OpenAI    | `WebSearch`, `CodeInterpreter`, `FileSearch`, `ImageGeneration`                                                                                                        | `provider/openai`    |\n| Google    | `GoogleSearch`, `URLContext`, `CodeExecution`                                                                                                                          | `provider/google`    |\n| xAI       | `WebSearch`, `XSearch`                                                                                                                                                 | `provider/xai`       |\n| Groq      | `BrowserSearch`                                                                                                                                                        | `provider/groq`      |\n\nAll tools follow the same pattern: create a definition with `\u003cprovider\u003e.Tools.ToolName()` (e.g., `openai.Tools`, `anthropic.Tools`), then pass it as a `goai.Tool`:\n\n```go\n// Example: def := openai.Tools.WebSearch(openai.WithSearchContextSize(\"medium\"))\ndef := \u003cprovider\u003e.Tools.ToolName(options...)\nresult, _ := goai.GenerateText(ctx, model,\n    goai.WithTools(goai.Tool{\n        Name:                   def.Name,\n        ProviderDefinedType:    
def.ProviderDefinedType,\n        ProviderDefinedOptions: def.ProviderDefinedOptions,\n    }),\n)\n```\n\n### Web Search\n\nThe model searches the web and returns grounded responses. Available from OpenAI, Anthropic, Google, and Groq.\n\n```go\n// OpenAI (via Responses API) - also works via Azure\ndef := openai.Tools.WebSearch(openai.WithSearchContextSize(\"medium\"))\n\n// Anthropic (via Messages API) - also works via Bedrock\ndef := anthropic.Tools.WebSearch(anthropic.WithMaxUses(5))\n\n// Google (grounding with Google Search) - returns Sources\ndef := google.Tools.GoogleSearch()\n// result.Sources contains grounding URLs from Google Search\n\n// Groq (interactive browser search)\ndef := groq.Tools.BrowserSearch()\n```\n\n### Code Execution\n\nThe model writes and runs code in a sandboxed environment. Server-side, no local setup needed.\n\n```go\n// OpenAI Code Interpreter - Python sandbox via Responses API\ndef := openai.Tools.CodeInterpreter()\n\n// Anthropic Code Execution - Python sandbox via Messages API\ndef := anthropic.Tools.CodeExecution() // v20260120, GA, no beta needed\n\n// Google Code Execution - Python sandbox via Gemini API\ndef := google.Tools.CodeExecution()\n```\n\n### Web Fetch\n\nClaude fetches and processes content from specific URLs directly.\n\n```go\ndef := anthropic.Tools.WebFetch(\n    anthropic.WithWebFetchMaxUses(3),\n    anthropic.WithCitations(true),\n)\n```\n\n### File Search\n\nSemantic search over uploaded files in vector stores (OpenAI Responses API).\n\n```go\ndef := openai.Tools.FileSearch(\n    openai.WithVectorStoreIDs(\"vs_abc123\"),\n    openai.WithMaxNumResults(5),\n)\n```\n\n### Image Generation\n\nLLM generates images inline during conversation (different from `goai.GenerateImage()` which calls the Images API directly).\n\n```go\ndef := openai.Tools.ImageGeneration(\n    openai.WithImageQuality(\"low\"),\n    openai.WithImageSize(\"1024x1024\"),\n)\n// On Azure, also set: azure.WithHeaders(map[string]string{\n//     
\"x-ms-oai-image-generation-deployment\": \"gpt-image-1.5\",\n// })\n```\n\n### Computer Use\n\nAnthropic computer, bash, and text editor tools for autonomous desktop interaction. Client-side execution required.\n\n```go\ncomputerDef := anthropic.Tools.Computer(anthropic.ComputerToolOptions{\n    DisplayWidthPx: 1920, DisplayHeightPx: 1080,\n})\nbashDef := anthropic.Tools.Bash()\ntextEditorDef := anthropic.Tools.TextEditor()\n// Wrap each with an Execute handler for client-side execution\n```\n\n### URL Context\n\nGemini fetches and processes web content from URLs in the prompt.\n\n```go\ndef := google.Tools.URLContext()\n```\n\nSee [examples/](examples/) for complete runnable examples of each tool.\n\n## Examples\n\nSee the [examples/](examples/) directory:\n\n- [chat](examples/chat/) - Non-streaming generation\n- [streaming](examples/streaming/) - Real-time text streaming\n- [streaming-tools](examples/streaming-tools/) - Streaming with multi-step tool loops\n- [structured](examples/structured/) - Structured output with Go generics\n- [tools](examples/tools/) - Single tool call\n- [agent-loop](examples/agent-loop/) - Multi-step agent with callbacks\n- [multi-turn](examples/multi-turn/) - Multi-turn conversation with ResponseMessages\n- [citations](examples/citations/) - Accessing sources and citations\n- [hooks](examples/hooks/) - Lifecycle hooks: permission gates, secret scanning, loop control, OnFinish\n- [langfuse](examples/langfuse/) - Langfuse tracing integration\n- [otel](examples/otel/) - OpenTelemetry tracing and metrics\n- [computer-use](examples/computer-use/) - Anthropic computer, bash, and text editor tools\n- [embedding](examples/embedding/) - Embeddings with similarity search\n- [web-search](examples/web-search/) - Web search across providers (OpenAI, Anthropic, Google)\n- [web-fetch](examples/web-fetch/) - Anthropic web fetch tool\n- [code-execution](examples/code-execution/) - Anthropic code execution tool\n- 
[code-interpreter](examples/code-interpreter/) - OpenAI code interpreter tool\n- [google-search](examples/google-search/) - Google Search grounding with Gemini\n- [google-code-execution](examples/google-code-execution/) - Google Gemini code execution tool\n- [file-search](examples/file-search/) - OpenAI file search tool\n- [image-generation](examples/image-generation/) - OpenAI image generation via Responses API\n- [mcp-tools](examples/mcp-tools/) - MCP tools with GoAI LLM integration\n- [mcp-filesystem](examples/mcp-filesystem/) - Filesystem MCP server via stdio\n- [mcp-github](examples/mcp-github/) - GitHub MCP server via stdio\n- [mcp-playwright](examples/mcp-playwright/) - Playwright MCP server for browser automation\n- [mcp-remote](examples/mcp-remote/) - MCP over Streamable HTTP transport\n- [mcp-sse](examples/mcp-sse/) - MCP over legacy SSE transport\n- [mcp-local](examples/mcp-local/) - MCP client basics (no LLM needed)\n\n## Project Structure\n\n```\ngoai/                       # Core SDK\n├── provider/               # Provider interface + shared types\n│   ├── provider.go         # LanguageModel, EmbeddingModel, ImageModel interfaces\n│   ├── types.go            # Message, Part, Usage, StreamChunk, etc.\n│   ├── token.go            # TokenSource, CachedTokenSource\n│   ├── openai/             # OpenAI (Chat Completions + Responses API)\n│   ├── anthropic/          # Anthropic (Messages API)\n│   ├── google/             # Google Gemini (REST API)\n│   ├── bedrock/            # AWS Bedrock (Converse API + SigV4 + EventStream)\n│   ├── vertex/             # Google Vertex AI (OpenAI-compat)\n│   ├── azure/              # Azure OpenAI\n│   ├── cohere/             # Cohere (Chat v2 + Embed)\n│   ├── minimax/            # MiniMax (Anthropic-compatible API)\n│   ├── compat/             # Generic OpenAI-compatible\n│   \t└── ...                 
# 13 more OpenAI-compatible providers\n├── internal/\n│   ├── openaicompat/       # Shared codec for 13 OpenAI-compat providers\n│   ├── gemini/             # Schema sanitization (Vertex, Google)\n│   ├── sse/                # SSE line parser\n│   └── httpc/              # HTTP utilities\n├── examples/               # Usage examples\n└── bench/                  # Performance benchmarks (GoAI vs Vercel AI SDK)\n    ├── fixtures/           # Shared SSE test fixtures\n    ├── go/                 # Go benchmarks (go test -bench)\n    ├── ts/                 # TypeScript benchmarks (Bun + Tinybench)\n    ├── collect.sh          # Result aggregation → report\n    └── Makefile            # make bench-all\n```\n\n## Contributing\n\nSee [CONTRIBUTING.md](CONTRIBUTING.md).\n\n## License\n\n[MIT](LICENSE)\n