{"id":32622855,"url":"https://github.com/artefactop/promptdev","last_synced_at":"2025-10-30T19:52:13.312Z","repository":{"id":313410184,"uuid":"1051236820","full_name":"artefactop/promptdev","owner":"artefactop","description":"A prompt evaluation framework that provides comprehensive testing for AI agents across multiple providers.","archived":false,"fork":false,"pushed_at":"2025-09-22T19:55:14.000Z","size":1081,"stargazers_count":2,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-09-30T21:12:14.711Z","etag":null,"topics":["ci-cd","evaluation-framework","llm","llm-eval","llm-evaluation","llm-evaluation-framework","prompt","prompt-engineering","prompt-toolkit","red-team","testing"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/artefactop.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-09-05T16:52:09.000Z","updated_at":"2025-09-22T19:48:33.000Z","dependencies_parsed_at":"2025-09-05T22:18:38.210Z","dependency_job_id":"cb9d1162-5e2c-4d5d-b010-7f522bd7fc86","html_url":"https://github.com/artefactop/promptdev","commit_stats":null,"previous_names":["artefactop/promptdev"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/artefactop/promptdev","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/artefactop%2Fpromptdev","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/
GitHub/repositories/artefactop%2Fpromptdev/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/artefactop%2Fpromptdev/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/artefactop%2Fpromptdev/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/artefactop","download_url":"https://codeload.github.com/artefactop/promptdev/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/artefactop%2Fpromptdev/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":281873520,"owners_count":26576262,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-30T02:00:06.501Z","response_time":61,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ci-cd","evaluation-framework","llm","llm-eval","llm-evaluation","llm-evaluation-framework","prompt","prompt-engineering","prompt-toolkit","red-team","testing"],"created_at":"2025-10-30T19:51:56.832Z","updated_at":"2025-10-30T19:52:13.306Z","avatar_url":"https://github.com/artefactop.png","language":"Python","readme":"# Promptdev\n\n[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg?style=for-the-badge)](https://www.python.org/downloads/)\n[![License: 
MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=for-the-badge)](https://opensource.org/licenses/MIT)\n[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json\u0026style=for-the-badge)](https://github.com/astral-sh/ruff)\n[![CI](https://img.shields.io/github/actions/workflow/status/artefactop/promptdev/ci.yml?style=for-the-badge)](https://github.com/artefactop/promptdev/actions/workflows/ci.yml)\n[![codecov](https://img.shields.io/codecov/c/github/artefactop/promptdev?style=for-the-badge)](https://codecov.io/gh/artefactop/promptdev)\n[![Security](https://img.shields.io/github/actions/workflow/status/artefactop/promptdev/security.yml?style=for-the-badge)](https://github.com/artefactop/promptdev/actions/workflows/security.yml)\n\n\n`promptdev` is a prompt evaluation framework that provides comprehensive testing for AI agents across multiple providers.\n\n![Promptdev Demo](https://github.com/artefactop/promptdev/raw/main/docs/demo.gif)\n\n\u003e [!WARNING]\n\u003e\n\u003e promptdev is in preview and is not ready for production use.\n\u003e\n\u003e We're working hard to make it stable and feature-complete, but until then, expect to encounter bugs,\n\u003e missing features, and fatal errors.\n\n## Features\n\n- 🔒 **Type Safe** - Full Pydantic validation for inputs, outputs, and configurations  \n- 🤖 **PydanticAI Integration** - Native support for PydanticAI agents (in progress) and [evaluation framework](https://ai.pydantic.dev/evals/)\n- 📊 **Multi-Provider Testing** - Test across OpenAI, Together.ai, Ollama, Bedrock, and [more](https://ai.pydantic.dev/models/overview/)\n- ⚡ **Performance Optimized** - File-based caching with TTL for faster repeated evaluations\n- 📈 **Rich Reporting** - Beautiful console output with detailed failure analysis and provider comparisons\n- 🧪 **Promptfoo Compatible** - Works with (some) existing promptfoo YAML configs and datasets\n- 🎯 **Comprehensive Assertions** 
- Built-in evaluators plus custom Python assertion support\n\n## Quick Start\n\n### Installation\n\n#### From PyPI (alpha version)\n```bash\npip install promptdev --pre\n```\n\n#### From Source\n```bash\ngit clone https://github.com/artefactop/promptdev.git\ncd promptdev\npip install -e .\n```\n\n#### For Development\n```bash\ngit clone https://github.com/artefactop/promptdev.git\ncd promptdev\nuv sync\nuv run promptdev --help\n```\n\n### Basic Usage\n\n#### If installed via pip:\n```bash\n# Run evaluation (simple demo)\npromptdev eval examples/demo/config.yaml\n\n# Run evaluation (advanced example)\npromptdev eval examples/calendar_event_summary/config.yaml\n\n# Disable caching for a run\npromptdev eval examples/demo/config.yaml --no-cache\n\n# Export results\npromptdev eval examples/demo/config.yaml --output json\npromptdev eval examples/demo/config.yaml --output html\n\n# Validate configuration\npromptdev validate examples/demo/config.yaml\n\n# Cache management\npromptdev cache stats\npromptdev cache clear\n```\n\n#### If running from source:\n```bash\nuv run promptdev --help\n```\n\n## Assertion Types\n\nPromptdev supports a comprehensive set of evaluators for different testing scenarios:\n\n| Type            | Description                                                                                                                                                                         |\n|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `equals`        | Checks if the output exactly equals the provided value                                                                                                                              |\n| `contains`      | Checks if the output contains the expected value                                                                                            
                        |\n| `is_instance`   | Checks if the output is an instance of a type with the given name                                                                                                                   |\n| `max_duration`  | Checks if the execution time is under the specified maximum                                                                                                                         |\n| `is_json`       | Checks if the output is a valid JSON string (optional JSON Schema validation)                                                                                                       |\n| `contains_json` | Checks if the output contains valid JSON (optional JSON Schema validation)                                                                                                          |\n| `python`        | [Promptfoo compatible](https://www.promptfoo.dev/docs/configuration/expected-outputs/python/#external-py). Allows you to provide a custom Python function to validate the LLM output |\n\n## Configuration\n\nPromptdev uses YAML configuration files compatible with the [Promptfoo](https://www.promptfoo.dev/docs/configuration/reference/) format, but only a subset is available for now:\n\n### Promptfoo Compatibility\n\nPromptdev maintains compatibility with promptfoo configurations to ease migration:\n\n\u003e To migrate, if you are using ids in the format `provider:chat|completion:model`, remove the middle part so they become `provider:model`; promptdev only supports chat.\n\u003e\n\u003e Some provider names have changed; for example, `togetherai` is now `together`. 
Refer to [pydantic_ai models](https://ai.pydantic.dev/models/overview/) for the full list.\n\n- **YAML configs** - Most promptfoo YAML configs work with minimal changes\n- **JSONL datasets** - Existing test datasets are fully supported\n- **Python assertions** - Custom `get_assert` functions work without modification\n- **JSON schemas** - Schema validation uses the same format\n\n\u003e [!WARNING]\n\u003e Promptdev can run custom Python assertions. While powerful,\n\u003e running arbitrary Python code always comes with [security issues](https://github.com/pydantic/pydantic-ai/pull/2808).\n\u003e Use this feature only with code you trust.\n\nExample of a Python assertion:\n\n```python\n# tests/data/python_assert.py\nfrom typing import Any\n\n\ndef get_assert(output: str, context: dict) -\u003e bool | float | dict[str, Any]:\n    \"\"\"Test assertion that checks if the output contains 'success'.\"\"\"\n    return \"success\" in str(output).lower()\n```\n\n## Development\n\n```bash\n# Setup development environment\nuv sync\n\n# Run tests\nuv run pytest\n\n# Format and lint code\nuv run ruff check . 
--fix\nuv run ruff format .\n\n# Type checking\nuv run ty check\n```\n\n## Roadmap\n\n- [x] Core evaluation engine with PydanticAI integration\n- [x] Multi-provider support for major AI platforms\n- [x] YAML configuration loading with promptfoo compatibility\n- [x] Comprehensive assertion types (JSON schema, Python, LLM-based)\n- [x] File-based caching system with TTL support\n- [x] Rich console reporting with failure analysis\n- [x] Simple file disk cache\n- [x] Better integration with PydanticAI, do not reinvent the wheel\n- [x] Concurrent execution using PydanticAI natively, for faster large-scale evaluations\n- [ ] Code cleanup\n- [ ] Testing\n- [ ] Testing promptfoo files\n- [ ] Native support for PydanticAI agents\n- [ ] Add support to run multiple config files with one command\n- [ ] CI/CD integration helpers with change detection\n- [ ] SQLite persistence for evaluation history and analytics\n- [ ] Performance benchmarking and regression detection\n\n## Contributing\n\nWe welcome contributions! Here's how to get started:\n\n1. Fork the repository\n2. Create a feature branch: `git checkout -b feature/amazing-feature`\n3. Install development dependencies: `uv sync`\n4. Make your changes and add tests\n5. Run tests: `uv run pytest`\n6. Commit your changes: `git commit -m 'Add amazing feature'`\n7. Push to the branch: `git push origin feature/amazing-feature`\n8. Open a Pull Request\n\n\n### Code Style\n\nWe use `ruff` for code formatting and linting, `ty` for type checking, and `pytest` for testing. Please ensure your code follows these standards:\n\n```bash\nuv run ruff check .       # Lint code\nuv run ruff format .      
# Format code\nuv run ty check           # Type checking\nuv run pytest             # Run tests\n```\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## Acknowledgments\n\n- Built on [PydanticAI](https://ai.pydantic.dev/) for type-safe AI agent development\n- Inspired by [promptfoo](https://github.com/promptfoo/promptfoo) for evaluation concepts\n- Uses [Rich](https://github.com/Textualize/rich) for beautiful console output","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fartefactop%2Fpromptdev","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fartefactop%2Fpromptdev","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fartefactop%2Fpromptdev/lists"}