{"id":27496330,"url":"https://github.com/actions/ai-inference","last_synced_at":"2026-02-06T18:13:06.169Z","repository":{"id":287942465,"uuid":"960077787","full_name":"actions/ai-inference","owner":"actions","description":"An action for calling AI models with GitHub Models","archived":false,"fork":false,"pushed_at":"2025-04-14T18:22:14.000Z","size":1278,"stargazers_count":43,"open_issues_count":3,"forks_count":2,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-04-14T19:33:13.721Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/actions.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":"SECURITY.md","support":"SUPPORT.md","governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-04-03T20:27:55.000Z","updated_at":"2025-04-14T10:27:51.000Z","dependencies_parsed_at":"2025-04-14T19:33:38.067Z","dependency_job_id":"490d49f1-023c-46cc-af54-dd05433cfcf9","html_url":"https://github.com/actions/ai-inference","commit_stats":null,"previous_names":["actions/ai-inference"],"tags_count":5,"template":false,"template_full_name":"actions/typescript-action","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/actions%2Fai-inference","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/actions%2Fai-inference/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/actions%2Fai-inference/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/actions%2Fai-inference/manifests",
"owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/actions","download_url":"https://codeload.github.com/actions/ai-inference/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":249317560,"owners_count":21250161,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-04-17T04:47:08.226Z","updated_at":"2026-02-06T18:13:06.163Z","avatar_url":"https://github.com/actions.png","language":"TypeScript","readme":"# AI Inference in GitHub Actions\n\n[![GitHub Super-Linter](https://github.com/actions/typescript-action/actions/workflows/linter.yml/badge.svg)](https://github.com/super-linter/super-linter)\n![CI](https://github.com/actions/typescript-action/actions/workflows/ci.yml/badge.svg)\n[![Check dist/](https://github.com/actions/typescript-action/actions/workflows/check-dist.yml/badge.svg)](https://github.com/actions/typescript-action/actions/workflows/check-dist.yml)\n[![CodeQL](https://github.com/actions/typescript-action/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/actions/typescript-action/actions/workflows/codeql-analysis.yml)\n\nUse AI models from [GitHub Models](https://github.com/marketplace/models) in\nyour workflows.\n\n## Usage\n\nCreate a workflow to use the AI inference action:\n\n```yaml\nname: 'AI inference'\non: workflow_dispatch\n\njobs:\n  inference:\n    permissions:\n      models: read\n    runs-on: ubuntu-latest\n    steps:\n      - name: Test Local Action\n        id: inference\n        uses: actions/ai-inference@v1\n        with:\n          prompt: 
'Hello!'\n\n      - name: Print Output\n        id: output\n        run: echo \"${{ steps.inference.outputs.response }}\"\n```\n\n### Using a prompt file\n\nYou can also provide a prompt file instead of an inline prompt. The action\nsupports both plain text files and structured `.prompt.yml` files:\n\n```yaml\nsteps:\n  - name: Run AI Inference with Text File\n    id: inference\n    uses: actions/ai-inference@v1\n    with:\n      prompt-file: './path/to/prompt.txt'\n```\n\n### Using GitHub prompt.yml files\n\nFor more advanced use cases, you can use structured `.prompt.yml` files that\nsupport templating, custom models, and JSON schema responses:\n\n```yaml\nsteps:\n  - name: Run AI Inference with Prompt YAML\n    id: inference\n    uses: actions/ai-inference@v1\n    with:\n      prompt-file: './.github/prompts/sample.prompt.yml'\n      input: |\n        var1: hello\n        var2: ${{ steps.some-step.outputs.output }}\n        var3: |\n          Lorem Ipsum\n          Hello World\n      file_input: |\n        var4: ./path/to/long-text.txt\n        var5: ./path/to/config.json\n```\n\n#### Simple prompt.yml example\n\n```yaml\nmessages:\n  - role: system\n    content: Be as concise as possible\n  - role: user\n    content: 'Compare {{a}} and {{b}}, please'\nmodel: openai/gpt-4o\n```\n\n#### Prompt.yml with JSON schema support\n\n```yaml\nmessages:\n  - role: system\n    content: You are a helpful assistant that describes animals using JSON format\n  - role: user\n    content: |-\n      Describe a {{animal}}\n      Use JSON format as specified in the response schema\nmodel: openai/gpt-4o\nresponseFormat: json_schema\njsonSchema: |-\n  {\n    \"name\": \"describe_animal\",\n    \"strict\": true,\n    \"schema\": {\n      \"type\": \"object\",\n      \"properties\": {\n        \"name\": {\n          \"type\": \"string\",\n          \"description\": \"The name of the animal\"\n        },\n        \"habitat\": {\n          \"type\": \"string\",\n          \"description\": 
\"The habitat the animal lives in\"\n        }\n      },\n      \"additionalProperties\": false,\n      \"required\": [\n        \"name\",\n        \"habitat\"\n      ]\n    }\n  }\n```\n\nVariables in prompt.yml files are templated using `{{variable}}` format and are\nsupplied via the `input` parameter in YAML format. Additionally, you can\nprovide file-based variables via `file_input`, where each key maps to a file\npath.\n\n### Using a system prompt file\n\nIn addition to the regular prompt, you can provide a system prompt file instead\nof an inline system prompt:\n\n```yaml\nsteps:\n  - name: Run AI Inference with System Prompt File\n    id: inference\n    uses: actions/ai-inference@v1\n    with:\n      prompt: 'Hello!'\n      system-prompt-file: './path/to/system-prompt.txt'\n```\n\n### Reading the response from a file\n\nThis can be useful when the model response exceeds the Actions output limit:\n\n```yaml\nsteps:\n  - name: Test Local Action\n    id: inference\n    uses: actions/ai-inference@v1\n    with:\n      prompt: 'Hello!'\n\n  - name: Use Response File\n    run: |\n      echo \"Response saved to: ${{ steps.inference.outputs.response-file }}\"\n      cat \"${{ steps.inference.outputs.response-file }}\"\n```\n\n### Using custom headers\n\nYou can include custom HTTP headers in your API requests, which is useful for integrating with API Management platforms, adding tracking information, or routing requests through custom gateways.\n\n#### YAML format (recommended for multiple headers)\n\n```yaml\nsteps:\n  - name: AI Inference with Azure APIM\n    id: inference\n    uses: actions/ai-inference@v1\n    with:\n      prompt: 'Analyze this code for security issues...'\n      endpoint: ${{ secrets.APIM_ENDPOINT }}\n      token: ${{ secrets.APIM_KEY }}\n      custom-headers: |\n        Ocp-Apim-Subscription-Key: ${{ secrets.APIM_SUBSCRIPTION_KEY }}\n        serviceName: code-review-workflow\n        env: production\n        team: security\n        computer: 
github-actions\n```\n\n#### JSON format (alternative for compact syntax)\n\n```yaml\nsteps:\n  - name: AI Inference with Custom Headers\n    id: inference\n    uses: actions/ai-inference@v1\n    with:\n      prompt: 'Hello!'\n      custom-headers: '{\"X-Custom-Header\": \"value\", \"X-Team\": \"engineering\", \"X-Request-ID\": \"${{ github.run_id }}\"}'\n```\n\n#### Use cases for custom headers\n\n- **API Management**: Integrate with Azure APIM, AWS API Gateway, Kong, or other API management platforms\n- **Request tracking**: Add correlation IDs, request IDs, or workflow identifiers\n- **Rate limiting**: Include quota or tier information for custom rate limiting\n- **Multi-tenancy**: Identify teams, services, or environments\n- **Observability**: Add metadata for logging, monitoring, and debugging\n- **Routing**: Control request routing through custom gateways or load balancers\n\n**Header name requirements**: Header names must follow the HTTP token syntax defined in RFC 7230 (which permits underscores). For maximum compatibility with intermediaries and tooling, we recommend using only alphanumeric characters and hyphens.\n\n**Security note**: Always use GitHub secrets for sensitive header values like API keys, tokens, or passwords. The action automatically masks common sensitive headers (containing `key`, `token`, `secret`, `password`, or `authorization`) in logs.\n\n### GitHub MCP Integration (Model Context Protocol)\n\nThis action now supports **read-only** integration with the GitHub-hosted Model\nContext Protocol (MCP) server, which provides access to GitHub tools like\nrepository management, issue tracking, and pull request operations.\n\n#### Authentication\n\nYou can authenticate the MCP server with **either**:\n\n1. **Personal Access Token (PAT)** – user-scoped token\n2. 
**GitHub App Installation Token** (`ghs_…`) – short-lived, app-scoped token\n   \u003e The built-in `GITHUB_TOKEN` is **not** accepted by the MCP server.\n   \u003e Using a **GitHub App installation token** is recommended in most CI environments because it is short-lived and least-privilege by design.\n\n#### Enabling MCP in the action\n\nSet `enable-github-mcp: true` and provide a token via `github-mcp-token`.\n\n```yaml\nsteps:\n  - name: AI Inference with GitHub Tools\n    id: inference\n    uses: actions/ai-inference@v1.2\n    with:\n      prompt: 'List my open pull requests and create a summary'\n      enable-github-mcp: true\n      token: ${{ secrets.USER_PAT }} # or a ghs_ installation token\n```\n\nIf you want, you can use separate tokens for the AI inference endpoint\nand the GitHub MCP server:\n\n```yaml\nsteps:\n  - name: AI Inference with Separate MCP Token\n    id: inference\n    uses: actions/ai-inference@v1.2\n    with:\n      prompt: 'List my open pull requests and create a summary'\n      enable-github-mcp: true\n      token: ${{ secrets.GITHUB_TOKEN }}\n      github-mcp-token: ${{ secrets.USER_PAT }} # or a ghs_ installation token\n```\n\n#### Configuring GitHub MCP Toolsets\n\nBy default, the GitHub MCP server provides a standard set of tools (`context`, `repos`, `issues`, `pull_requests`, `users`). 
You can customize which toolsets are available by specifying the `github-mcp-toolsets` parameter:\n\n```yaml\nsteps:\n  - name: AI Inference with Custom Toolsets\n    id: inference\n    uses: actions/ai-inference@v2\n    with:\n      prompt: 'Analyze recent workflow runs and check security alerts'\n      enable-github-mcp: true\n      token: ${{ secrets.USER_PAT }}\n      github-mcp-toolsets: 'repos,issues,pull_requests,actions,code_security'\n```\n\n**Available toolsets:**\nSee: [Tool configuration](https://github.com/github/github-mcp-server/blob/main/README.md#tool-configuration)\n\nWhen MCP is enabled, the AI model will have access to GitHub tools and can\nperform actions like searching issues and PRs.\n\n## Inputs\n\nVarious inputs are defined in [`action.yml`](action.yml) to let you configure\nthe action:\n\n| Name                 | Description                                                                                                                                                                                                        | Default                              |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------ |\n| `token`              | Token to use for inference. 
Typically the GITHUB_TOKEN secret                                                                                                                                                      | `github.token`                       |\n| `prompt`             | The prompt to send to the model                                                                                                                                                                                    | N/A                                  |\n| `prompt-file`        | Path to a file containing the prompt (supports .txt and .prompt.yml formats). If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence                                                      | `\"\"`                                 |\n| `input`              | Template variables in YAML format for .prompt.yml files (e.g., `var1: value1` on separate lines)                                                                                                                   | `\"\"`                                 |\n| `file_input`         | Template variables in YAML where values are file paths. The file contents are read and used for templating                                                                                                         | `\"\"`                                 |\n| `system-prompt`      | The system prompt to send to the model                                                                                                                                                                             | `\"You are a helpful assistant\"`      |\n| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence                                                                  | `\"\"`                                 |\n| `model`              | The model to use for inference. 
Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog                                                                                       | `openai/gpt-4o`                      |\n| `endpoint`           | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint                                                                              | `https://models.github.ai/inference` |\n| `max-tokens`         | The max number of tokens to generate                                                                                                                                                                               | 200                                  |\n| `temperature`        | The sampling temperature to use (0-1)                                                                                                                                                                              | `\"\"`                                 |\n| `top-p`              | The nucleus sampling parameter to use (0-1)                                                                                                                                                                        | `\"\"`                                 |\n| `enable-github-mcp`  | Enable Model Context Protocol integration with GitHub tools                                                                                                                                                        | `false`                              |\n| `github-mcp-token`   | Token to use for GitHub MCP server (defaults to the main token if not specified).                                                                                                                                  | `\"\"`                                 |\n| `custom-headers`     | Custom HTTP headers to include in API requests. 
Supports both YAML format (`header1: value1`) and JSON format (`{\"header1\": \"value1\"}`). Useful for API Management platforms, rate limiting, and request tracking. | `\"\"`                                 |\n\n## Outputs\n\nThe AI inference action provides the following outputs:\n\n| Name            | Description                                                             |\n| --------------- | ----------------------------------------------------------------------- |\n| `response`      | The response from the model                                             |\n| `response-file` | The file path where the response is saved (useful for larger responses) |\n\n## Required Permissions\n\nTo run inference with GitHub Models, the AI inference action requires the\n`models: read` permission.\n\n```yml\npermissions:\n  contents: read\n  models: read\n```\n\n## Publishing a New Release\n\nThis project includes a helper script, [`script/release`](./script/release),\ndesigned to streamline the process of tagging and pushing new releases for\nGitHub Actions. For more information, see\n[Versioning](https://github.com/actions/toolkit/blob/master/docs/action-versioning.md)\nin the GitHub Actions toolkit.\n\nGitHub Actions allows users to select a specific version of the action to use,\nbased on release tags. This script simplifies this process by performing the\nfollowing steps:\n\n1. **Retrieving the latest release tag:** The script starts by fetching the most\n   recent SemVer release tag of the current branch, by looking at the local data\n   available in your repository.\n1. **Prompting for a new release tag:** The user is then prompted to enter a new\n   release tag. To assist with this, the script displays the tag retrieved in\n   the previous step, and validates the format of the entered tag (vX.X.X). The\n   user is also reminded to update the version field in package.json.\n1. 
**Tagging the new release:** The script then tags a new release and syncs the\n   separate major tag (e.g. v1, v2) with the new release tag (e.g. v1.0.0,\n   v2.1.2). When the user is creating a new major release, the script\n   auto-detects this and creates a `releases/v#` branch for the previous major\n   version.\n1. **Pushing changes to remote:** Finally, the script pushes the necessary\n   commits, tags and branches to the remote repository. From here, you will need\n   to create a new release in GitHub so users can easily reference the new tags\n   in their workflows.\n\n## License\n\nThis project is licensed under the terms of the MIT open source license. Please\nrefer to [MIT](./LICENSE.txt) for the full terms.\n\n## Contributions\n\nContributions are welcome! See the [Contributor's Guide](CONTRIBUTING.md).\n","funding_links":[],"categories":["Programming Frameworks"],"sub_categories":["YAML (GitHub Actions Yaml)"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Factions%2Fai-inference","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Factions%2Fai-inference","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Factions%2Fai-inference/lists"}