https://github.com/yiouli/pixie-qa
Automated quality assurance for AI applications.
agent agent-skills ai ai-evals dev eval llm llm-testing qa skill testing
- Host: GitHub
- URL: https://github.com/yiouli/pixie-qa
- Owner: yiouli
- License: mit
- Created: 2026-03-07T20:42:30.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2026-03-27T19:28:46.000Z (about 1 month ago)
- Last Synced: 2026-03-27T20:36:27.952Z (about 1 month ago)
- Topics: agent, agent-skills, ai, ai-evals, dev, eval, llm, llm-testing, qa, skill, testing
- Language: Python
- Size: 1.14 MB
- Stars: 5
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: changelogs/async-handler-processing.md
- License: LICENSE
# Pixie-QA
[Skill on skills.sh](https://skills.sh/github/awesome-copilot/eval-driven-dev)
[PyPI](https://badge.fury.io/py/pixie-qa)
[Discord](https://discord.gg/7fmXQzFt)
## Agent skill for Evaluation Driven Development
Pixie-QA is an agent skill that lets your coding agent systematically improve the quality of your AI application using the Evaluation Driven Development (EDD) approach. With this skill, your coding agent carries out the evaluate -> analyze -> implement cycle for you.
## Why Pixie-QA?
You've probably spent a lot of time tweaking the implementation of your AI feature, re-testing the same inputs, and still not being sure whether things actually got better.
You may have looked at eval products but decided they aren't worth the hassle: they're good at giving you fancy metrics and dashboards, but provide little help with actually improving your application.
Pixie-QA takes a different approach, focusing on producing actionable insights — specific action items that you or your coding agent can investigate further or directly implement in your code.
And because Pixie-QA runs locally inside your codebase, your data stays private and you're not locked into another platform.
## Demo
[Demo Video](https://github.com/user-attachments/assets/74565bd2-a7fc-4f31-909d-9697642e033d)
## How it Works
The skill guides your coding agent (Claude Code, Cursor, GitHub Copilot, etc.) through a 6-step pipeline:
1. **Analyze the app** — The agent reads your codebase, identifies entry points, maps capabilities, and defines eval criteria based on real failure modes (not generic quality checklists).
2. **Instrument data boundaries** — Lightweight `wrap()` calls are added where your app reads external data (databases, APIs, caches) and where it produces output. This lets the eval harness inject controlled inputs and capture results — without changing your app's logic.
3. **Build a Runnable** — A thin adapter that lets the eval harness invoke your app the same way a real user would. Your app runs its real code path, makes real LLM calls — nothing is mocked.
4. **Define evaluators** — Each eval criterion maps to a scoring function: LLM-as-judge for semantic quality, deterministic checks for structural requirements, or custom evaluators for domain-specific rules.
5. **Build a dataset** — Test cases with realistic inputs, pre-captured external data, and expected behavior. Each entry specifies which evaluators to run and what passing looks like.
6. **Run `pixie test` and analyze** — The harness runs all entries concurrently, scores them, and the agent analyzes results: which entries failed, why, and what to fix — in the app or in the eval setup itself.
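To make steps 4 and 5 more concrete, here is a minimal sketch of what a deterministic evaluator and a dataset entry could look like. Pixie-QA's actual evaluator and dataset APIs are not shown in this README, so the function name, the entry fields, and the entry format below are illustrative assumptions, not the library's real interface.

```python
# Illustrative sketch only -- pixie-qa's real evaluator and dataset shapes
# are not documented here; these names and structures are assumptions.
import json


def check_valid_json_with_answer(output: str) -> bool:
    """Deterministic structural check (step 4): the app's output must be
    valid JSON containing an 'answer' key."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and "answer" in data


# A dataset entry (step 5): a realistic input, pre-captured external data
# to inject at the instrumented boundaries, and the evaluators that define
# what "passing" means for this case.
dataset_entry = {
    "input": "What is our refund window?",
    "captured_data": {"policy_db": {"refund_window_days": 30}},
    "evaluators": ["check_valid_json_with_answer"],
    "expected": {"answer_mentions": "30 days"},
}

if __name__ == "__main__":
    good = '{"answer": "Refunds are accepted within 30 days."}'
    bad = "Refunds are accepted within 30 days."
    print(check_valid_json_with_answer(good))  # True
    print(check_valid_json_with_answer(bad))   # False
```

An LLM-as-judge evaluator would have the same outward shape (output in, pass/fail or score out) but delegate the judgment to a model call, which is why semantic criteria and structural criteria can live side by side in the same entry.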
The output is a working eval pipeline plus a detailed analysis and action plan that you or your coding agent can implement.
## Get Started
Add the skill to your coding agent:
```bash
npx skills add yiouli/pixie-qa
```
Then simply talk to your coding agent in your project, e.g.:
- "Setup eval"
- "Improve my agent's output quality"
- "The AI response is wrong when ..., please fix"