Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/invariantlabs-ai/invariant
Helps you build better AI agents through debuggable unit testing
JSON representation
- Host: GitHub
- URL: https://github.com/invariantlabs-ai/invariant
- Owner: invariantlabs-ai
- Created: 2024-05-08T08:57:47.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2024-12-17T12:01:24.000Z (about 2 months ago)
- Last Synced: 2024-12-17T12:33:25.797Z (about 2 months ago)
- Topics: agents, ai, security
- Language: Python
- Homepage: https://invariantlabs.ai
- Size: 6.96 MB
- Stars: 131
- Watchers: 8
- Forks: 10
- Open Issues: 7
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
- Awesome-LLMSecOps - invariant (Agentic security)
- awesome_ai_agents - Invariant - Helps you build better AI agents through debuggable unit testing (Building / Testing)
- awesome_ai_agents - Invariant - Helps you build better AI agents through debuggable unit testing (Building / Security)
README
# Invariant `testing`
Helps you build better AI agents through debuggable unit testing
[Documentation](https://explorer.invariantlabs.ai/docs/testing/)
Invariant `testing` is a lightweight library to write and run AI agent tests. It provides helpers and assertions that enable you to write robust tests for your agentic applications.
Using localized assertions, Invariant `testing` always points you to the exact part of the agent's behavior that caused a test to fail, making it easy to debug and resolve issues (think: stack traces for agents).
## Installation
```bash
pip install invariant-ai
```

## A quick example
The example below uses `extract(...)` to detect `locations` from messages, using OpenAI's `gpt-4o` model.
Set up your OpenAI key:
```bash
export OPENAI_API_KEY=<your-openai-api-key>
```

**Code:**
```python
# content of tests/test_weather.py
import invariant.testing.functional as F
from invariant.testing import Trace, assert_equals


def test_weather():
    # create a Trace object from your agent trajectory
    trace = Trace(
        trace=[
            {"role": "user", "content": "What is the weather like in Paris?"},
            {"role": "agent", "content": "The weather in London is 75°F and sunny."},
        ]
    )

    # make assertions about the agent's behavior
    with trace.as_context():
        # extract the locations mentioned in the agent's response using OpenAI
        locations = trace.messages()[-1]["content"].extract("locations")

        # assert that the agent responded about Paris and only Paris
        assert_equals(1, F.len(locations),
                      "The agent should respond about one location only")
        assert_equals("Paris", locations[0], "The agent should respond about Paris")
```
**Execute it on the command line**:
```
$ invariant test
________________________________ test_weather _________________________________
ERROR: 1 hard assertions failed:

    # assert that the agent responded about Paris and only Paris
    assert_equals(1, F.len(locations),
                  "The agent should respond about one location only")
>   assert_equals("Paris", locations[0], "The agent should respond about Paris")
________________________________________________________________________________

ASSERTION FAILED: The agent should respond about Paris (expected: 'Paris', actual: 'London')
________________________________________________________________________________

# [
#   {
#       role: "user"
#       content: "What is the weather like in Paris?"
#   },
#   {
#       role: "agent"
        content: "The weather in London is 75°F and sunny."
#   },
# ]
```
The test result precisely [localizes the failure in the provided agent trace](https://explorer.invariantlabs.ai/docs/testing/Writing_Tests/tests/).

**Visual Test Viewer (Explorer):**
As an alternative to the command line, you can also [visualize test results](https://explorer.invariantlabs.ai/docs/testing/Running_Tests/Visual_Debugger/) on the [Invariant Explorer](https://explorer.invariantlabs.ai/):
```
$ invariant test --push
```

![image](https://github.com/user-attachments/assets/8305e202-0d63-435c-9e71-0988a6f9d24a)
Like the terminal output, the Explorer highlights the relevant ranges, but does so even more precisely, marking the exact words that caused the assertion to fail.
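Note that pushing results to the hosted Explorer presumably requires authenticating first. The environment variable below follows the Invariant SDK convention but is an assumption here; consult the linked documentation for the authoritative setup:

```bash
# Assumption: pushes to the hosted Explorer authenticate via an API key
# obtained from https://explorer.invariantlabs.ai (variable name may differ).
export INVARIANT_API_KEY=<your-invariant-api-key>
invariant test --push
```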
## Features
* Comprehensive `Trace` API for easily navigating and checking agent traces.
* Assertions library to check agent behavior, including fuzzy checkers such as _Levenshtein distance_, _semantic similarity_ and _LLM-as-a-judge_ pipelines.
* Full `pytest` compatibility for easy integration with existing test and CI/CD pipelines.
* Parameterized tests for testing multiple scenarios with a single test function (see the sketch after this list).
* Visual test viewer for exploring large traces and debugging test failures in [Explorer](https://explorer.invariantlabs.ai).

To learn more, [read the documentation](https://explorer.invariantlabs.ai/docs/testing/).
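As a concrete illustration of the parameterized-test support, here is a minimal sketch that combines only APIs already shown in the quick example (`Trace`, `as_context`, `extract`, `F.len`, `assert_equals`) with pytest's standard `@pytest.mark.parametrize`. The file name is hypothetical, and the exact interplay with the `invariant test` runner is assumed from the pytest-compatibility feature above.

```python
# content of tests/test_weather_parametrized.py (hypothetical file name)
import pytest

import invariant.testing.functional as F
from invariant.testing import Trace, assert_equals


# Standard pytest parameterization; each (city, reply) pair becomes its own
# test case, so failures are reported and localized per scenario.
@pytest.mark.parametrize(
    ("city", "reply"),
    [
        ("Paris", "The weather in Paris is 75°F and sunny."),
        ("London", "The weather in London is 62°F and rainy."),
    ],
)
def test_weather_mentions_city(city, reply):
    # build one Trace per scenario, mirroring the quick example above
    trace = Trace(
        trace=[
            {"role": "user", "content": f"What is the weather like in {city}?"},
            {"role": "agent", "content": reply},
        ]
    )

    with trace.as_context():
        # extract(...) calls an LLM (e.g. gpt-4o), as in the quick example
        locations = trace.messages()[-1]["content"].extract("locations")
        assert_equals(1, F.len(locations),
                      "The agent should respond about one location only")
        assert_equals(city, locations[0], f"The agent should respond about {city}")
```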