Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mattsavarino/rai-content-safety
Responsible AI simulations and evaluations using Azure Content Safety
ai automation azure responsible-ai
- Host: GitHub
- URL: https://github.com/mattsavarino/rai-content-safety
- Owner: mattsavarino
- License: MIT
- Created: 2024-08-22T14:34:26.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-08-26T22:17:38.000Z (5 months ago)
- Last Synced: 2024-10-15T20:41:00.173Z (3 months ago)
- Topics: ai, automation, azure, responsible-ai
- Language: Jupyter Notebook
- Homepage:
- Size: 11.7 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Responsible AI: Content Safety
This repo helps automate responsible AI evaluations for content safety.
You can read more details in the [Azure documentation](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/develop/simulator-interaction-data).
## Prerequisites
* Python v3.12+
* Azure access

## ☁️ Azure Setup
1. [Create Azure AI Studio resource](https://portal.azure.com/#create/Microsoft.AzureAIStudio) (note capacity below)
1. Create a new project within AI Studio

**Regional Capacity**

As of August 2024, the following regions support the safety evaluations. [View status](https://learn.microsoft.com/en-us/azure/ai-studio/concepts/evaluation-metrics-built-in?tabs=warning#risk-and-safety-metrics)

| Region | TPM (tokens per minute) |
|---|---|
| Sweden Central | 450k |
| France Central | 380k |
| UK South | 280k |
| East US 2 | 80k |

## 🐍 Python Setup
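The notebooks read Azure identifiers from the `.env` file created in the steps below (assuming those values have been loaded into the environment, e.g. with python-dotenv). A minimal sketch of collecting them into a project config; the variable names here are illustrative, so use the actual keys from `.env.sample`:

```python
import os

# Read Azure identifiers from the environment (populated from .env).
# The variable names below are illustrative placeholders; match them
# to the keys defined in .env.sample.
azure_ai_project = {
    "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID", ""),
    "resource_group_name": os.environ.get("AZURE_RESOURCE_GROUP", ""),
    "project_name": os.environ.get("AZURE_PROJECT_NAME", ""),
}
```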
1. Clone this repo, open a terminal, and change directory
1. Rename [.env.sample](.env.sample) to `.env`
1. Edit `.env` to add your keys for various Azure services
1. Create virtual environment: `python3 -m venv .venv`
1. Activate virtual environment:
* Windows: `.\.venv\Scripts\activate`
* Linux or macOS: `source ./.venv/bin/activate`
1. Install [required packages](./requirements.txt): `pip install -r requirements.txt`

## 🌡️ Run Evaluations
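Step 2 below asks you to adapt `call_endpoint()` to your own API. A hedged sketch of what that might look like, assuming a hypothetical REST endpoint that accepts a JSON `prompt` and returns the generated text under a `text` field; the URL, payload shape, and field name are all placeholders:

```python
import json
import urllib.request

def extract_text(body: dict) -> str:
    """Pull the AI-generated text out of the API response.

    The 'text' field name is a placeholder; match your API's schema.
    """
    return body.get("text", "")

def call_endpoint(prompt: str, url: str = "https://example.com/api/chat") -> str:
    """Send the prompt to a custom API endpoint and return the AI text.

    The URL and payload shape are placeholders for your own service.
    """
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return extract_text(json.loads(response.read().decode("utf-8")))
```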
1. Open and run [the Python notebooks](./notebooks/).
1. Update the `call_endpoint()` function [here](./notebooks/01_content_safety_eval.ipynb) to:
   * Call your custom API endpoint.
   * Process the API response to extract the AI text.
1. Monitor the evaluation log and review the generated eval files.
1. Generate a summary of the eval files [here](./notebooks/02_analyze_eval.ipynb).

## ⚙️ Conclusion
* Be responsible with your AI.
* Continuously evaluate for content safety.
* [Learn more about responsible AI](https://www.microsoft.com/en-us/ai/responsible-ai)
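The summary step in the Run Evaluations section could be sketched as follows; a minimal, hypothetical example assuming the generated eval files are JSON documents with a numeric `score` field (the folder name and field name are placeholders, not the repo's actual schema):

```python
import json
from pathlib import Path
from statistics import mean

def summarize_evals(folder: str = "evals") -> dict:
    """Aggregate generated eval files.

    Assumes a hypothetical layout: one JSON file per run, each with
    a numeric 'score' field. Adapt to the actual eval file schema.
    """
    scores = []
    for path in Path(folder).glob("*.json"):
        record = json.loads(path.read_text())
        if "score" in record:
            scores.append(record["score"])
    return {"runs": len(scores), "mean_score": mean(scores) if scores else None}
```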