{"id":25828161,"url":"https://github.com/discretetom/defect","last_synced_at":"2025-06-26T00:04:51.859Z","repository":{"id":279030558,"uuid":"936009829","full_name":"DiscreteTom/defect","owner":"DiscreteTom","description":"Call LLMs in your pipeline, e.g. local git hook, GitHub Actions and more.","archived":false,"fork":false,"pushed_at":"2025-04-01T08:10:40.000Z","size":166,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-04-01T09:24:41.174Z","etag":null,"topics":["ai","cicd","codereview","git-hooks","github-actions","llm","pipeline","workflow"],"latest_commit_sha":null,"homepage":"","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DiscreteTom.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2025-02-20T11:37:29.000Z","updated_at":"2025-03-25T06:07:30.000Z","dependencies_parsed_at":"2025-03-11T12:24:41.604Z","dependency_job_id":"7866404b-ee5a-4d63-bfa5-7627241ab407","html_url":"https://github.com/DiscreteTom/defect","commit_stats":null,"previous_names":["discretetom/defect"],"tags_count":6,"template":false,"template_full_name":null,"purl":"pkg:github/DiscreteTom/defect","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DiscreteTom%2Fdefect","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DiscreteTom%2Fdefect/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DiscreteTom%2Fdefect/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub
/repositories/DiscreteTom%2Fdefect/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DiscreteTom","download_url":"https://codeload.github.com/DiscreteTom/defect/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DiscreteTom%2Fdefect/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":261973724,"owners_count":23238586,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","cicd","codereview","git-hooks","github-actions","llm","pipeline","workflow"],"created_at":"2025-02-28T17:23:59.293Z","updated_at":"2025-06-26T00:04:51.848Z","avatar_url":"https://github.com/DiscreteTom.png","language":"Rust","readme":"# Defect\n\n![license](https://img.shields.io/github/license/DiscreteTom/defect?style=flat-square)\n[![release](https://img.shields.io/github/v/release/DiscreteTom/defect?style=flat-square)](https://github.com/DiscreteTom/defect/releases/latest)\n\nCall LLMs in your pipeline, e.g. local [git hook](#git-hook), [GitHub Actions](#github-actions) and more.\n\n## Features\n\n- Single statically linked binary executable. You don't need to be familiar with Python or any framework.\n- Customizable prompt. Suitable for all kinds of tasks. See [examples](#prompt-engineering) below.\n- Supports OpenAI (or compatible) and AWS Bedrock models.\n\n## Installation\n\nJust download the pre-built binary. E.g.
for a Linux x86_64 environment:\n\n```bash\nwget https://github.com/DiscreteTom/defect/releases/download/v0.3.3/defect-v0.3.3-x86_64-unknown-linux-musl.zip\nunzip defect-v0.3.3-x86_64-unknown-linux-musl.zip\nrm defect-v0.3.3-x86_64-unknown-linux-musl.zip\nchmod +x defect\n```\n\nSee the [latest GitHub releases](https://github.com/DiscreteTom/defect/releases/latest) page for more pre-built binaries.\n\n\u003e [!IMPORTANT]\n\u003e Remember to include the binary in your `PATH`.\n\n## Usage\n\n```bash\n$ defect --help\nCall LLMs in your pipeline, print the text response to stdout\n\nUsage: defect [OPTIONS] [PROMPT]\n\nArguments:\n  [PROMPT]  The prompt to use. If not provided or equal to \"-\", the program will read from stdin\n\nOptions:\n  -m, --model \u003cMODEL\u003e    The model to use [default: gpt-4o]\n  -s, --schema \u003cSCHEMA\u003e  The API schema to use [default: openai] [possible values: openai, bedrock]\n  -S, --system \u003cSYSTEM\u003e  Optional system instructions\n  -h, --help             Print help\n  -V, --version          Print version\n```\n\n### Choose a Model\n\n```bash\n# You can use `--model` to specify a custom OpenAI model.\n# Make sure you have set the \"OPENAI_API_KEY\" environment variable.\nexport OPENAI_API_KEY=\"\"\ndefect \"who are you\"\ndefect --model=gpt-4o \"who are you\"\n\n# For OpenAI-compatible models, e.g.
OpenRouter,\n# specify a custom endpoint via the \"OPENAI_API_BASE\" environment variable.\n# Make sure you have also set the \"OPENAI_API_KEY\" environment variable.\nexport OPENAI_API_BASE=\"https://openrouter.ai/api/v1\"\ndefect --model=deepseek/deepseek-r1 \"who are you\"\n\n# For AWS Bedrock models, set the `schema` option.\n# Make sure you have AWS credentials set up.\ndefect --schema bedrock --model=anthropic.claude-3-5-sonnet-20240620-v1:0 \"who are you\"\n```\n\n\u003e To set AWS credentials, you can [use environment variables](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-envvars.html), or [use the AWS CLI](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html#cli-configure-files-methods). See the official AWS documentation for more information.\n\n## Prompt Engineering\n\nThe functionality of this tool is highly dependent on the prompt you provide.\n\nYou can construct complex prompts in bash scripts and pass them to the tool.\n\n```bash\n# insert the content of a file by using string interpolation in bash\nprompt=\"Summarize the file. \u003cfile\u003e`cat README.md`\u003c/file\u003e\"\ndefect \"$prompt\"\n```\n\nHere are some prompt examples.\n\n\u003cdetails open\u003e\n\u003csummary\u003eGeneral Code Review\u003c/summary\u003e\n\n```bash\nprompt=\"\nYou are a coding expert.\nReview the following code and give me suggestions.\n\n\u003ccode\u003e\n`cat main.rs`\n\u003c/code\u003e\n\"\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eCode Review with a Guideline\u003c/summary\u003e\n\nYour team may have a coding guideline.
You can insert the guideline into the prompt.\n\n```bash\nprompt=\"\nYou are a coding expert.\nReview the code below following my provided guideline\nand give me suggestions.\n\n\u003cguideline\u003e\n`cat guideline.md 2\u003e/dev/null || curl https://your-server.com/guideline.md`\n\u003c/guideline\u003e\n\n\u003ccode\u003e\n`cat main.rs`\n\u003c/code\u003e\n\"\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eDocument Validation\u003c/summary\u003e\n\n```bash\n# review comments\nprompt=\"\nYou are a coding expert.\nReview the following code and ensure the comments adhere to the functionality of the code.\nIf not, provide suggestions to update the comments.\n\n\u003ccode\u003e\n`cat main.rs`\n\u003c/code\u003e\n\"\n\n# review documentation\nprompt=\"\nYou are a coding expert.\nReview the following code and ensure the provided documentation adheres to the functionality of the code.\nIf not, provide suggestions to update the documentation.\n\n\u003cdocumentation\u003e\n`cat documentation.md`\n\u003c/documentation\u003e\n\n\u003ccode\u003e\n`cat main.rs`\n\u003c/code\u003e\n\"\n```\n\n\u003c/details\u003e\n\n## Workflow\n\n### Abort a Workflow Execution\n\nWhen using this tool in a workflow or pipeline, you may want to abort the execution if the result is not as expected.\nThis requires that your LLM output be structured in a way that you can parse.\n\nA simple example:\n\n```bash\nprompt=\"\n...\n\nIf you think the code is correct, output 'OK' with nothing else.\nOtherwise, output suggestions in markdown format.\n\"\n\noutput=$(defect \"$prompt\")\n\nif [ \"$output\" != \"OK\" ]; then\n  echo \"$output\"\n  exit 1\nfi\n```\n\n### Webhook Callback\n\nIf your workflow execution is aborted by the LLM, you may want to send a webhook callback to, for example, Slack, Lark, or your own issue tracker.\n\n```bash\n...\n\nif [ \"$output\" != \"OK\" ]; then\n  echo \"$output\"\n\n  commit=$(git rev-parse HEAD)\n  escaped_output=$(jq -n --arg val \"$output\" '$val')\n 
 body=\"{\\\"commit\\\":\\\"$commit\\\",\\\"feedback\\\":$escaped_output}\"\n  curl -X POST -d \"$body\" https://your-server.com/webhook\n\n  exit 1\nfi\n```\n\n### Git Hook\n\nYou can review your code locally via git hooks to reduce the feedback loop time, instead of waiting for CI/CD.\n\nAn example of a `pre-commit` hook:\n\n```bash\n#!/bin/sh\n\nif git rev-parse --verify HEAD \u003e/dev/null 2\u003e\u00261\nthen\n  against=HEAD\nelse\n  # Initial commit: diff against an empty tree object\n  against=$(git hash-object -t tree /dev/null)\nfi\n\n# Get the diff of the staged files with 100 lines of context\ndiff=$(git diff --cached -U100 $against)\n\nprompt=\"\nYou are a coding expert.\nReview the following code diff and give me suggestions.\n\nIf you think the code is correct, output 'OK' with nothing else.\nOtherwise, output suggestions in markdown format.\n\n\u003cdiff\u003e\n$diff\n\u003c/diff\u003e\n\"\n\noutput=$(defect \"$prompt\")\n\nif [ \"$output\" != \"OK\" ]; then\n  echo \"$output\"\n  exit 1\nfi\n```\n\n### GitHub Actions\n\nThere is a [setup-defect](https://github.com/DiscreteTom/setup-defect) action available to simplify the setup process.\n\nReview each commit:\n\n```yaml\non:\n  push:\n    # only trigger on branches, not on tags\n    branches: \"**\"\n\njobs:\n  validate:\n    runs-on: ubuntu-latest\n    steps:\n      # fetch at least 2 commits so we can get the diff of the latest commit\n      - uses: actions/checkout@v4\n        with:\n          fetch-depth: 2\n\n      # get the diff of the latest commit with 100 lines of context\n      - name: Get the diff\n        run: |\n          git diff -U100 HEAD^ HEAD \u003e /tmp/diff\n          cat /tmp/diff\n\n      - name: Setup defect\n        uses: DiscreteTom/setup-defect@v0.1.1\n        with:\n          version: \"0.3.3\"\n\n      - name: Review the diff using defect\n        env:\n          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}\n        run: |\n          diff=$(cat /tmp/diff)\n\n          
prompt=\"\n          You are a coding expert.\n          Review the following code diff and give me suggestions.\n\n          If you think the code is correct, output 'OK' with nothing else.\n          Otherwise, output suggestions in markdown format.\n\n          \u003cdiff\u003e\n          $diff\n          \u003c/diff\u003e\n          \"\n\n          defect \"$prompt\" \u003e /tmp/suggestions\n          cat /tmp/suggestions\n\n      - name: Abort if suggestions are not empty\n        run: |\n          suggestions=$(cat /tmp/suggestions)\n\n          if [ \"$suggestions\" != \"OK\" ]; then\n            exit 1\n          fi\n```\n\nReview each PR (get the diff between the PR branch and the base branch):\n\n```yaml\non:\n  pull_request:\n    types: [opened, synchronize]\n\njobs:\n  validate:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n\n      - name: Fetch the origin/main branch\n        run: |\n          git fetch origin main:refs/remotes/origin/main\n\n      - name: Get the diff\n        run: |\n          git diff origin/main HEAD \u003e /tmp/diff\n          cat /tmp/diff\n\n      # ...\n```\n\n## Telemetry\n\nCurrently, this project doesn't emit any telemetry data.\n\nTo collect the LLM usage data, you can use an AI gateway like [OpenRouter](https://openrouter.ai/), [LiteLLM](https://www.litellm.ai/) or [Kong](https://konghq.com/).\n\nTo collect the LLM response data, just send the response to your own server or write it to your own database.\n\n```bash\n...\n\nif [ \"$output\" != \"OK\" ]; then\n  echo \"$output\"\n\n  # e.g. send telemetry with a webhook callback\n  curl -X POST -d \"some-data\" https://your-server.com/webhook\n\n  # e.g.
convert the output to metrics using an LLM\n  # and save to AWS S3 so you can query using AWS Athena\n  metrics_prompt=\"\n  Below is code review feedback;\n  tell me how many suggestions there are.\n  You should output a JSON object with the following format:\n\n  \u003cformat\u003e\n  {\\\"suggestions\\\": 123}\n  \u003c/format\u003e\n\n  You should only output the JSON object with nothing else.\n\n  \u003cfeedback\u003e\n  $output\n  \u003c/feedback\u003e\n  \"\n  metrics=$(defect \"$metrics_prompt\")\n  timestamp=$(date +%s)\n  echo \"$metrics\" \u003e $timestamp.json\n  date=$(date +'%Y/%m/%d')\n  author=$(git log -1 --pretty=format:'%an')\n  aws s3 cp $timestamp.json \"s3://your-bucket/suggestions/$date/$author/$timestamp.json\"\n\n  exit 1\nfi\n```\n\n## Cost Optimization\n\nTips to save money:\n\n- Prevent unnecessary calls. E.g. in code review scenarios, use traditional formatters and linters before calling LLMs.\n- Use cheaper models for frequent tasks. E.g. review commits with DeepSeek models and PRs with Claude models.\n\n## Demo\n\nSee [`defect-demo`](https://github.com/DiscreteTom/defect-demo) for a demo project.\n\n## Debug\n\nThis project uses [EnvFilter](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html) to filter logs.\nAll the logs will be printed to `stderr`.\n\nHere is an example of enabling debug logs:\n\n```bash\nexport RUST_LOG=\"defect=debug\"\n```\n\n## [CHANGELOG](./CHANGELOG.md)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdiscretetom%2Fdefect","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdiscretetom%2Fdefect","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdiscretetom%2Fdefect/lists"}