{"id":28513079,"url":"https://github.com/OP5dev/Prompt-AI","last_synced_at":"2025-07-04T06:31:38.744Z","repository":{"id":297800255,"uuid":"997906825","full_name":"OP5dev/Prompt-AI","owner":"OP5dev","description":"AI inference request GitHub Models via this GitHub Action.","archived":false,"fork":false,"pushed_at":"2025-06-19T12:03:30.000Z","size":39,"stargazers_count":7,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-06-22T03:06:12.735Z","etag":null,"topics":["automation","devops","genai","github-actions","github-models-ai","llm-inference"],"latest_commit_sha":null,"homepage":"https://OP5.dev/s/prompt-ai","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OP5dev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":["rdhar","op5dev"]}},"created_at":"2025-06-07T12:58:17.000Z","updated_at":"2025-06-19T18:35:48.000Z","dependencies_parsed_at":"2025-06-14T22:48:18.652Z","dependency_job_id":"2e28d2a0-215e-4d62-bfca-2325e74a4d98","html_url":"https://github.com/OP5dev/Prompt-AI","commit_stats":null,"previous_names":["op5dev/inference-request","op5dev/ai-inference-request","op5dev/prompt-ai"],"tags_count":5,"template":false,"template_full_name":null,"purl":"pkg:github/OP5dev/Prompt-AI","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OP5dev%2FPrompt-AI","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OP5dev%2FPrompt-AI/tags","releases_url":"https://repos.ecosyste.ms/api/v1/h
osts/GitHub/repositories/OP5dev%2FPrompt-AI/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OP5dev%2FPrompt-AI/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OP5dev","download_url":"https://codeload.github.com/OP5dev/Prompt-AI/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OP5dev%2FPrompt-AI/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":261229095,"owners_count":23127555,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automation","devops","genai","github-actions","github-models-ai","llm-inference"],"created_at":"2025-06-09T01:06:10.172Z","updated_at":"2025-07-04T06:31:38.735Z","avatar_url":"https://github.com/OP5dev.png","language":null,"readme":"[![GitHub license](https://img.shields.io/github/license/op5dev/prompt-ai?logo=apache\u0026label=License)](LICENSE \"Apache License 2.0.\")\n[![GitHub release tag](https://img.shields.io/github/v/release/op5dev/prompt-ai?logo=semanticrelease\u0026label=Release)](https://github.com/op5dev/prompt-ai/releases \"View all releases.\")\n*\n[![GitHub repository stargazers](https://img.shields.io/github/stars/op5dev/prompt-ai)](https://github.com/op5dev/prompt-ai \"Become a stargazer.\")\n\n# Prompt GitHub AI Models via GitHub Action\n\n\u003e [!TIP]\n\u003e Prompt GitHub AI Models using [inference request](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request \"GitHub API documentation.\") via GitHub Action API.\n\n\u003c/br\u003e\n\n## 
Usage Examples\n\n[Compare available AI models](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task \"Comparison of AI models for GitHub.\") to choose the best one for your use-case.\n\n### Summarize GitHub Issues\n\n```yml\non:\n  issues:\n    types: opened\n\njobs:\n  summary:\n    runs-on: ubuntu-latest\n\n    permissions:\n      issues: write\n      models: read\n\n    steps:\n      - name: Summarize issue\n        id: prompt\n        uses: op5dev/prompt-ai@v2\n        with:\n          user-prompt: |\n            Concisely summarize the GitHub issue\n            with title '${{ github.event.issue.title }}'\n            and body: ${{ github.event.issue.body }}\n          max-tokens: 250\n\n      - name: Comment summary\n        run: gh issue comment $NUMBER --body \"$SUMMARY\"\n        env:\n          GH_TOKEN: ${{ github.token }}\n          NUMBER: ${{ github.event.issue.number }}\n          SUMMARY: ${{ steps.prompt.outputs.response }}\n```\n\n### Troubleshoot Terraform Infrastructure-as-Code\n\n```yml\non:\n  pull_request:\n  push:\n    branches: main\n\njobs:\n  provision:\n    runs-on: ubuntu-latest\n\n    permissions:\n      actions: read\n      checks: write\n      contents: read\n      pull-requests: write\n      models: read\n\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n\n      - name: Setup Terraform\n        uses: hashicorp/setup-terraform@v3\n\n      - name: Provision Terraform\n        id: provision\n        uses: op5dev/tf-via-pr@v13\n        with:\n          working-directory: env/dev\n          command: ${{ github.event_name == 'push' \u0026\u0026 'apply' || 'plan' }}\n\n      - name: Troubleshoot Terraform\n        if: failure()\n        uses: op5dev/prompt-ai@v2\n        with:\n          model: openai/gpt-4.1-mini\n          system-prompt: You are a helpful DevOps assistant and expert at troubleshooting Terraform errors.\n          user-prompt: Troubleshoot the following Terraform output; ${{ steps.provision.outputs.result }}\n          max-tokens: 500\n```\n\n### Debug Nomad-Pack Deployment\n\n```yml\non:\n  pull_request:\n  push:\n    branches: main\n\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n\n    permissions:\n      contents: read\n      models: read\n\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n\n      - name: Setup Nomad-Pack\n        uses: hashicorp/setup-nomad-pack@main\n\n      - name: Run Nomad-Pack\n        id: nomad\n        run: |\n          nomad-pack run . --verbose 2\u003e\u00261 | tee log.txt\n          status=${PIPESTATUS[0]}\n          if [[ $status -ne 0 ]]; then\n            echo \"log\u003c\u003cEOH\" \u003e\u003e \"$GITHUB_OUTPUT\"\n            cat log.txt \u003e\u003e \"$GITHUB_OUTPUT\"\n            echo \"EOH\" \u003e\u003e \"$GITHUB_OUTPUT\"\n          fi\n          exit $status\n\n      - name: Debug Nomad-Pack\n        if: failure()\n        uses: op5dev/prompt-ai@v2\n        with:\n          model: openai/gpt-4.1-mini\n          system-prompt: You are a helpful DevOps assistant and expert at debugging Nomad-Pack deployments.\n          user-prompt: Debug the following Nomad-Pack deployment log; ${{ steps.nomad.outputs.log }}\n          temperature: 0.7\n          top-p: 0.9\n```\n\n\u003c/br\u003e\n\n## Inputs\n\nThe only required input is `user-prompt`, while every parameter can be tuned per [documentation](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request \"GitHub API documentation.\").\n\n| Type       | Name                   | Description                                                                                                                                                                                                                                   |\n| ---------- | ---------------------- | 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Common     | `model`                | Model ID to use for the inference request.\u003c/br\u003e(e.g., `openai/gpt-4.1-mini`)                                                                                                                                                                  |\n| Common     | `system-prompt`        | Prompt associated with the `system` role.\u003c/br\u003e(e.g., `You are a helpful software engineering assistant`)                                                                                                                                      |\n| Common     | `user-prompt`          | Prompt associated with the `user` role.\u003c/br\u003e(e.g., `List best practices for workflows with GitHub Actions`)                                                                                                                                   |\n| Common     | `max-tokens`           | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max-tokens` cannot exceed the model's context length.\u003c/br\u003e(e.g., `100`)                                                                      |\n| Common     | `temperature`          | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic.\u003c/br\u003e(e.g., range is `[0, 1]`) |\n| Common     | `top-p`                | An alternative to sampling with temperature called nucleus sampling. 
This value causes the model to consider the results of tokens with the provided probability mass.\u003c/br\u003e(e.g., range is `[0, 1]`)                                          |\n| Additional | `frequency-penalty`    | A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text.\u003c/br\u003e(e.g., range is `[-2, 2]`)                                                                                   |\n| Additional | `modalities`           | The modalities that the model is allowed to use for the chat completions response.\u003c/br\u003e(e.g., from `text` and `audio`)                                                                                                                        |\n| Additional | `org`                  | Organization to which the request is to be attributed.\u003c/br\u003e(e.g., `github.repository_owner`)                                                                                                                                                  |\n| Additional | `presence-penalty`     | A value that influences the probability of generated tokens appearing based on their existing presence in generated text.\u003c/br\u003e(e.g., range is `[-2, 2]`)                                                                                      |\n| Additional | `seed`                 | If specified, the system will make a best effort to sample deterministically such that repeated requests with the same seed and parameters should return the same result.\u003c/br\u003e(e.g., `123456789`)                                             |\n| Additional | `stop`                 | A collection of textual sequences that will end completion generation.\u003c/br\u003e(e.g., `[\"\\n\\n\", \"END\"]`)                                                                                                                                          |\n| Additional | `stream`               | A value indicating 
whether chat completions should be streamed for this request.\u003c/br\u003e(e.g., `false`)                                                                                                                                          |\n| Additional | `stream-include-usage` | Whether to include usage information in the response.\u003c/br\u003e(e.g., `false`)                                                                                                                                                                     |\n| Additional | `tool-choice`          | If specified, the model will configure which of the provided tools it can use for the chat completions response.\u003c/br\u003e(e.g., `auto`, `required`, or `none`)                                                                                    |\n| Payload    | `payload`              | Body parameters of the inference request in JSON format.\u003c/br\u003e(e.g., `{\"model\"…`)                                                                                                                                                              |\n| Payload    | `payload-file`         | Path to a JSON file containing the body parameters of the inference request.\u003c/br\u003e(e.g., `./payload.json`)                                                                                                                                     |\n| Payload    | `show-payload`         | Whether to show the body parameters in the workflow log.\u003c/br\u003e(e.g., `false`)                                                                                                                                                                  |\n| Payload    | `show-response`        | Whether to show the response content in the workflow log.\u003c/br\u003e(e.g., `true`)                                                                                                                                                                  |\n| GitHub     | 
`github-api-version`   | GitHub API version.\u003c/br\u003e(e.g., `2022-11-28`)                                                                                                                                                                                                  |\n| GitHub     | `github-token`         | GitHub token for authorization.\u003c/br\u003e(e.g., `github.token`)                                                                                                                                                                                    |\n\n\u003c/br\u003e\n\n## Outputs\n\nDue to GitHub's API limitations, the `response` content is truncated to 262,144 (2^18) characters, so the complete, raw response is saved to `response-file`.\n\n| Name            | Description                                                     |\n| --------------- | --------------------------------------------------------------- |\n| `response`      | Response content from the inference request.                    |\n| `response-file` | File path containing the complete, raw response in JSON format. |\n| `payload`       | Body parameters of the inference request in JSON format.        
|\n\n\u003c/br\u003e\n\n## Security\n\nView [security policy and reporting instructions](SECURITY.md).\n\n\u003e [!TIP]\n\u003e\n\u003e Pin your GitHub Action to a [commit SHA](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-third-party-actions \"Security hardening for GitHub Actions.\") to harden your CI/CD **pipeline security** against supply chain attacks.\n\n\u003c/br\u003e\n\n## Changelog\n\nView [all notable changes](https://github.com/op5dev/prompt-ai/releases \"Releases.\") to this project in [Keep a Changelog](https://keepachangelog.com \"Keep a Changelog.\") format, which adheres to [Semantic Versioning](https://semver.org \"Semantic Versioning.\").\n\n\u003e [!TIP]\n\u003e\n\u003e All forms of **contribution are very welcome** and deeply appreciated for fostering open-source projects.\n\u003e\n\u003e - [Create a PR](https://github.com/op5dev/prompt-ai/pulls \"Create a pull request.\") to contribute changes you'd like to see.\n\u003e - [Raise an issue](https://github.com/op5dev/prompt-ai/issues \"Raise an issue.\") to propose changes or report unexpected behavior.\n\u003e - [Open a discussion](https://github.com/op5dev/prompt-ai/discussions \"Open a discussion.\") to discuss broader topics or questions.\n\u003e - [Become a stargazer](https://github.com/op5dev/prompt-ai/stargazers \"Become a stargazer.\") if you find this project useful.\n\n\u003c/br\u003e\n\n## License\n\n- This project is licensed under the **permissive** [Apache License 2.0](LICENSE \"Apache License 2.0.\").\n- All works herein are my own and those of [contributors](https://github.com/op5dev/prompt-ai/graphs/contributors \"Contributors.\"), shared of my own volition.\n- Copyright 2016-present [Rishav Dhar](https://rdhar.dev \"Rishav Dhar's profile.\") — All wrongs reserved.\n","funding_links":["https://github.com/sponsors/rdhar","https://github.com/sponsors/op5dev"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FOP5dev%2FPrompt-AI","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FOP5dev%2FPrompt-AI","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FOP5dev%2FPrompt-AI/lists"}