https://github.com/viniciusfinger/bedrock-guardrail-validator
Validate the effectiveness of Amazon Bedrock guardrails automatically. This project runs tests from a ground truth set of messages, compares the expected results with those filtered by the guardrail, and generates a detailed PDF report.
- Host: GitHub
- URL: https://github.com/viniciusfinger/bedrock-guardrail-validator
- Owner: viniciusfinger
- Created: 2025-06-25T12:47:16.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2025-06-25T14:09:18.000Z (4 months ago)
- Last Synced: 2025-07-21T15:26:47.175Z (3 months ago)
- Topics: artificial-intelligence, aws, aws-bedrock, generative-ai, guardrails, llm
- Language: Python
- Homepage:
- Size: 5.86 KB
- Stars: 2
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Bedrock Guardrail Validator ✅🛡️
Validate the effectiveness of Amazon Bedrock guardrails automatically. This project runs tests from a ground truth set of messages, compares the expected results with those filtered by the guardrail, and generates a detailed PDF report.
## Features
- **Automated guardrail validation**: Test if the guardrail is correctly filtering sensitive messages.
- **PDF report**: Automatically generates a report with accuracy metrics and error examples.
- **Easy setup**: Just fill in a JSON file with your test cases.

## Prerequisites
- **Python 3.8+**
- **Configured AWS credentials** (for Bedrock access)

## Installation
1. Clone the repository:
```bash
git clone https://github.com/viniciusfinger/bedrock-guardrail-validator
cd bedrock-guardrail-validator
```
2. Install the dependencies:
```bash
pip install -r requirements.txt
```

## Usage
1. **Fill in the `ground_truth.json` file**
Create a file named `ground_truth.json` in the project root with the following format:
```json
[
  {
    "message": "Text to be tested",
    "should_filter": true
  },
  {
    "message": "Another safe text",
    "should_filter": false
  }
  // ... more test cases
]
```
- `message`: The text that will be sent to the guardrail.
- `should_filter`: `true` if the guardrail should block or intervene on the text, `false` otherwise.

2. **Configure the Guardrail**
In the `main.py` file, set the variables:
- `GUARDRAIL_ID`: The ID of the guardrail to be tested.
- `GUARDRAIL_VERSION`: The guardrail version (e.g., "DRAFT" or a version number). A minimal sketch of how these values might be used follows step 3 below.

3. **Run the validator**
```bash
python main.py
```
The script will:
- Process all cases from `ground_truth.json`.
- Generate an `output.json` file with the results.
- Generate a PDF report named `guardrail_report.pdf`.
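
Under the hood, the flow above boils down to sending each ground-truth message through the guardrail and comparing the verdict with `should_filter`. The snippet below is a minimal sketch of that idea, not the project's actual `main.py`; it assumes the check is done with boto3's `ApplyGuardrail` API and that an `action` of `GUARDRAIL_INTERVENED` counts as a filtered message:

```python
# Minimal sketch (assumptions noted above) -- not the repository's actual main.py.
import json

import boto3

GUARDRAIL_ID = "your-guardrail-id"   # placeholder: the guardrail under test
GUARDRAIL_VERSION = "DRAFT"          # or a published version number

bedrock_runtime = boto3.client("bedrock-runtime")

with open("ground_truth.json", encoding="utf-8") as f:
    cases = json.load(f)

results = []
for case in cases:
    response = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source="INPUT",  # evaluate the message as user input
        content=[{"text": {"text": case["message"]}}],
    )
    # The guardrail either passes the text through ("NONE") or intervenes.
    was_filtered = response["action"] == "GUARDRAIL_INTERVENED"
    results.append({
        "message": case["message"],
        "expected": case["should_filter"],
        "actual": was_filtered,
        "correct": was_filtered == case["should_filter"],
    })

with open("output.json", "w", encoding="utf-8") as f:
    json.dump(results, f, indent=2)
```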
## Generated Report

The PDF report includes:
- Total processed entries
- Number of correct guardrail hits
- Accuracy (%)
- Table with examples of messages that were not filtered correctly
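
For reference, the accuracy figure is simply the share of cases where the guardrail's decision matched `should_filter`. Here is a hedged sketch of that calculation, assuming `output.json` has the per-case structure used in the sketch above (the real file's schema may differ):

```python
# Hypothetical sketch: deriving the summary metrics from the results file.
import json

with open("output.json", encoding="utf-8") as f:
    results = json.load(f)

total = len(results)
correct = sum(1 for r in results if r["correct"])
accuracy = 100.0 * correct / total if total else 0.0

# Incorrectly handled messages would feed the report's example table.
errors = [r for r in results if not r["correct"]]
print(f"{correct}/{total} correct ({accuracy:.1f}% accuracy), {len(errors)} misclassified")
```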
## Customization

- You can modify the script to adapt the evaluation logic, or the input and output formats, as needed.
- The report can be customized by editing the `report.py` file.
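
As one purely hypothetical illustration of such a customization, the snippet below writes an extra summary line into a PDF using the `fpdf2` package. It makes no claim about the real `report.py`, which may use a different PDF library and layout entirely:

```python
# Hypothetical example only -- the repository's report.py may look nothing like this.
from fpdf import FPDF  # assumes the fpdf2 package is installed (pip install fpdf2)

def build_custom_report(total: int, correct: int, path: str = "guardrail_report.pdf") -> None:
    accuracy = 100.0 * correct / total if total else 0.0
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Helvetica", size=14)
    pdf.multi_cell(0, 10, "Guardrail Validation Report")
    pdf.set_font("Helvetica", size=11)
    # An extra summary line a customized report might add.
    pdf.multi_cell(0, 8, f"Correct decisions: {correct}/{total} ({accuracy:.1f}%)")
    pdf.output(path)

build_custom_report(total=20, correct=18)
```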