{"id":48983502,"url":"https://github.com/bubustack/openai-chat-engram","last_synced_at":"2026-04-18T12:01:06.125Z","repository":{"id":352188392,"uuid":"1063261024","full_name":"bubustack/openai-chat-engram","owner":"bubustack","description":"OpenAI chat Engram for bobrapet — GPT chat completions with structured output, tool dispatch, and streaming.","archived":false,"fork":false,"pushed_at":"2026-04-18T10:04:00.000Z","size":109,"stargazers_count":2,"open_issues_count":2,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-18T11:24:00.800Z","etag":null,"topics":["ai","batch","bubustack","chat","engram","go","gpt","kubernetes","llm","openai","streaming"],"latest_commit_sha":null,"homepage":"https://bubustack.io/","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bubustack.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":"SUPPORT.md","governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":["bubustack"]}},"created_at":"2025-09-24T11:39:36.000Z","updated_at":"2026-04-18T10:03:48.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/bubustack/openai-chat-engram","commit_stats":null,"previous_names":["bubustack/openai-chat-engram"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/bubustack/openai-chat-engram","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bubustack%2Fopenai-chat-engram","tags_url":"h
ttps://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bubustack%2Fopenai-chat-engram/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bubustack%2Fopenai-chat-engram/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bubustack%2Fopenai-chat-engram/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bubustack","download_url":"https://codeload.github.com/bubustack/openai-chat-engram/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bubustack%2Fopenai-chat-engram/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31967993,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-18T00:39:45.007Z","status":"online","status_checked_at":"2026-04-18T02:00:07.018Z","response_time":103,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","batch","bubustack","chat","engram","go","gpt","kubernetes","llm","openai","streaming"],"created_at":"2026-04-18T12:00:32.952Z","updated_at":"2026-04-18T12:01:06.113Z","avatar_url":"https://github.com/bubustack.png","language":"Go","readme":"# 💬 OpenAI Chat Engram\n\nA production-ready Engram for calling OpenAI (and Azure OpenAI) Chat Completions inside bobrapet workflows. 
It supports rich prompting, function tools, structured outputs, and story dispatch, and runs as either a batch Job or a streaming Deployment.\n\n## 🌟 Highlights\n\n- **OpenAI \u0026 Azure OpenAI** – Auto-detects Azure when `AZURE_ENDPOINT` is present, otherwise uses OpenAI’s public API.\n- **Function tools + structured outputs** – Register tools via `spec.with.tools`; pipe structured arguments back through `structuredToolName`.\n- **Story dispatch** – Map tool invocations to downstream `StoryRun`s using `dispatchTools`.\n- **Dual mode** – Same implementation powers batch step runs and streaming deployments with backpressure.\n- **Per-request overrides** – Customize `model`, `temperature`, prompts, or history per invocation without editing configuration.\n\n## 🚀 Quick Start\n\n```bash\nmake lint\ngo test ./...\nmake docker-build\n```\n\nBundle the image in your bobrapet release or run locally with `make run` (requires bobrapet execution environment variables).\n\n## ⚙️ Configuration (`Engram.spec.with`)\n\n| Field | Type | Description | Default |\n| --- | --- | --- | --- |\n| `defaultModel` | `string` | Chat model to use when inputs omit `model`. | `gpt-3.5-turbo` |\n| `defaultTemperature` | `float32` | Base temperature for sampling. | `0.7` |\n| `defaultSystemPrompt` | `string` | Global system prompt for the Engram. | `\"You are a helpful assistant.\"` |\n| `defaultTopP` | `float32` | Default nucleus sampling probability mass. | `0.9` |\n| `defaultMaxTokens` | `int` | Legacy total-token cap. | `0` |\n| `defaultMaxCompletionTokens` | `int` | Preferred completion-token cap. | `0` |\n| `defaultPresencePenalty` | `float32` | Default presence penalty when no override is provided. | `0` |\n| `defaultFrequencyPenalty` | `float32` | Default frequency penalty when no override is provided. | `0` |\n| `defaultReasoningEffort` | `string` | Reasoning hint for supported models (`minimal`, `low`, `medium`, `high`). 
| unset |\n| `defaultServiceTier` | `string` | Preferred OpenAI service tier (`auto`, `default`, `flex`, `scale`, `priority`). | unset |\n| `defaultVerbosity` | `string` | Verbosity hint for supported models (`low`, `medium`, `high`). | unset |\n| `defaultModalities` | `[]string` | Default output modalities such as `text` or `audio`. | unset |\n| `defaultStopSequences` | `[]string` | Sequences that stop generation when encountered. | unset |\n| `defaultStore` | `bool` | Allow OpenAI to store responses by default. | unset |\n| `defaultParallelToolCalls` | `bool` | Enable parallel tool calls by default. | unset |\n| `defaultLogprobs` | `bool` | Request token log probabilities by default. | unset |\n| `defaultTopLogprobs` | `int` | Number of logprob candidates to include when logprobs are enabled. | unset |\n| `defaultChoices` | `int` | Default value for the OpenAI `n` parameter. | unset |\n| `defaultSeed` | `int` | Deterministic seed for supported models. | unset |\n| `defaultPromptCacheKey` | `string` | Prompt cache key forwarded to OpenAI. | unset |\n| `defaultSafetyIdentifier` | `string` | Default safety identifier for abuse monitoring. | unset |\n| `defaultUser` | `string` | Default end-user identifier sent to OpenAI. | unset |\n| `defaultMetadata` | `map[string]string` | Default metadata map attached to each request. | unset |\n| `defaultLogitBias` | `map[int]int` | Default logit bias map (token ID to bias). | unset |\n| `responseFormat` | `object` | Structured output configuration with `type` and optional `jsonSchema.{name,description,strict,schema}`. | unset |\n| `audio` | `object` | Default multimodal audio response config with `format` and `voice`. | unset |\n| `tools` | `[]ToolSpec` | Function tools exposed to the model. | `[]` |\n| `functions` | `[]ToolSpec` | Legacy alias for `tools`; merged and deduplicated by tool name. | `[]` |\n| `toolChoice` | `string` | `auto`, `required`, `none`, or `function:\u003cname\u003e`. Controls tool selection hints. 
| `auto` |\n| `functionCall` | `string` | Legacy alias for `toolChoice`. | `\"\"` |\n| `allowedTools` | `object` | Restrict tool use to a curated set via `mode` and explicit tool names. | unset |\n| `webSearch` | `object` | Default web search options including `searchContextSize` and `userLocation`. | unset |\n| `structuredToolName` | `string` | Tool whose arguments are returned in `Result.Structured`. | `\"\"` |\n| `dispatchTools` | `map[string]string` | Map `toolName → storyName` for auto-dispatch when the tool doesn’t return its own `storyName`. | `{}` |\n| `useResponsesAPI` | `bool` | Route requests through the Responses API instead of Chat Completions. | `false` |\n\nExample:\n\n```yaml\ndefaultModel: gpt-4o-mini\ndefaultTemperature: 0.2\ntools:\n  - name: run_story\n    description: Trigger a follow-up story with inputs\n    parameters:\n      type: object\n      properties:\n        storyName: { type: string }\n        inputs: { type: object }\nstructuredToolName: extract_fields\ndispatchTools:\n  run_story: create-ticket-story\n```\n\n### Structured Tool and Dispatch Behavior\n\nWhen `structuredToolName` is set, the Engram extracts the JSON arguments of the model's call to that tool into `Result.Structured` instead of executing the tool. This is useful for structured data extraction (e.g., extracting fields from a conversation).\n\nWhen `dispatchTools` is configured, a tool call matching a key in the map automatically triggers the mapped Story. The tool's arguments become the StoryRun inputs. 
If both `structuredToolName` and `dispatchTools` reference the same tool, structured extraction takes precedence.\n\n```json\n// Example output when structuredToolName = \"extract_fields\"\n{\n  \"structured\": {\n    \"customer_name\": \"Alice\",\n    \"issue_type\": \"billing\",\n    \"priority\": \"high\"\n  },\n  \"text\": \"I've extracted the relevant fields from the conversation.\"\n}\n```\n\n## 🔐 Secrets\n\n| Mode | Environment Prefix | Required Keys | Optional Keys |\n| --- | --- | --- | --- |\n| OpenAI | `OPENAI_` | `API_KEY` | `BASE_URL`, `ORG_ID`, `PROJECT_ID` |\n| Azure OpenAI | `AZURE_` | `API_KEY`, `ENDPOINT` | `API_VERSION` (default `2024-06-01`), `DEPLOYMENT` |\n\nSecrets are mounted via `mountType: env` in the EngramTemplate.\n\n## 📥 Inputs\n\nInputs are provided through the StepRun `spec.inputs` or streaming `StreamMessage`. Supported fields:\n\n| Field | Description |\n| --- | --- |\n| `history` | Array of prior messages (`role` ∈ `system`, `developer`, `user`, `assistant`). `developer` is mapped to system. |\n| `systemPrompt`, `developerPrompt`, `userPrompt`, `assistantPrompt` | Prompt fragments merged in this order. |\n| `temperature` | Overrides temperature for this request. |\n| `model` | Overrides model for this request. |\n\nAdvanced request-time overrides mirror the template schema. 
You can override\n`topP`, `maxTokens`, `maxCompletionTokens`, `presencePenalty`, `frequencyPenalty`,\n`stop`, `modalities`, `store`, `parallelToolCalls`, `logprobs`, `topLogprobs`,\n`seed`, `serviceTier`, `reasoningEffort`, `verbosity`, `responseFormat`,\n`metadata`, `logitBias`, `audio`, `promptCacheKey`, `safetyIdentifier`, `user`,\n`tools`, `functions`, `toolChoice`, `functionCall`, `choices`, `allowedTools`,\n`useResponsesAPI`, `prediction`, and `webSearch` per invocation.\n\nExample payload:\n\n```json\n{\n  \"history\": [\n    { \"role\": \"user\", \"content\": \"Create a ticket for server outage.\" }\n  ],\n  \"systemPrompt\": \"Be concise.\",\n  \"userPrompt\": \"Priority should be high and tags: ops, urgent\",\n  \"temperature\": 0.1,\n  \"model\": \"gpt-4o-mini\"\n}\n```\n\n## 📤 Outputs\n\nThe Engram returns:\n\n| Field | Description |\n| --- | --- |\n| `text` | Final model message concatenated as plain text. |\n| `structured` | Parsed JSON arguments from the tool named in `structuredToolName`. |\n| `actionRequest` | When a tool listed in `dispatchTools` fires; contains `storyName` and optional `inputs`. |\n| `toolCalls` | Raw tool call invocations (`name`, `arguments` as JSON). 
|\n\n## 🔄 Streaming Mode\n\nStreaming consumes `engram.InboundMessage` on input and emits\n`engram.StreamMessage` on output:\n\n- If `Inputs` is set, it is decoded into the request payload described above.\n- If `Inputs` is empty, the Engram attempts to decode `Payload` as the same structure.\n- `Metadata` is copied and enriched with `type`, `provider`, and `model` fields.\n- Call `msg.Done()` after successful handling so ordered/replay-capable transports can advance acknowledgement state.\n\nResponses are emitted as `StreamMessage` with `Payload` containing the\nJSON-encoded output map shown above, and the same bytes mirrored into `Binary`\nwith `MimeType: application/json`.\n\n## 🧪 Local Development\n\n- `make lint` – GolangCI-Lint using the shared config.\n- `go test ./...` – Compile and run unit tests.\n- `make run` – Execute locally against the bobrapet runtime environment.\n- `make docker-build` – Multi-stage image build (honours `IMG`).\n\n## 🤝 Community \u0026 Support\n\n- [Contributing](./CONTRIBUTING.md)\n- [Support](./SUPPORT.md)\n- [Security Policy](./SECURITY.md)\n- [Code of Conduct](./CODE_OF_CONDUCT.md)\n- [Discord](https://discord.gg/dysrB7D8H6)\n\n\n## 📄 License\n\nCopyright 2025 BubuStack.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the 
License.\n","funding_links":["https://github.com/sponsors/bubustack"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbubustack%2Fopenai-chat-engram","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbubustack%2Fopenai-chat-engram","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbubustack%2Fopenai-chat-engram/lists"}