Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ashad001/truthinator
Because someone has to fact check the AI before it becomes "your homework"
- Host: GitHub
- URL: https://github.com/ashad001/truthinator
- Owner: Ashad001
- Created: 2024-09-09T18:03:53.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2024-09-15T13:21:21.000Z (2 months ago)
- Last Synced: 2024-09-15T14:48:00.178Z (2 months ago)
- Language: Python
- Homepage:
- Size: 340 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# LLM Response Assessor & Generator (IN DEVELOPMENT)
So you gave GPT your assignment PDF, and now you’re stuck wondering if its response is actually any good? Well, congrats! This project is here to do the dirty work: assess the AI’s half-baked response, tell you how wrong it is, then *try* to fix it for you. Because, clearly, you have better things to do.

## Features
1. **Automated Accuracy Check**: Because manually verifying LLM responses is so 2022.
2. **Response Generation**: The AI will give a new response if the first one sucked. (Spoiler: it probably did.)
3. **Repeat Until "Perfect"**: We’ll keep going until the AI gets it right, or you give up.

## How It Works
- Load your PDF into a vector database.
- Paste the LLM’s response.
- We’ll score it (1-10), then rewrite it if needed. Rinse and repeat.
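The repository’s actual code isn’t reproduced on this page, so here is a minimal sketch of what the "score, rewrite, repeat" loop could look like. Everything in it is an illustrative assumption: the helper names (`ask`, `score`, `improve`, `assess_loop`), the prompts, the threshold, and the model name. It skips the PDF-to-vector-database step and takes the retrieved context as plain text; the Groq client is assumed only because the setup asks for a `GROQ_API_KEY`.

```python
# Minimal sketch only -- helper names, prompts, threshold, and model are assumptions,
# not the repo's actual implementation.
import os
import re

from groq import Groq  # assumed: the Groq Python SDK, since setup requires GROQ_API_KEY

client = Groq(api_key=os.environ["GROQ_API_KEY"])


def ask(prompt: str) -> str:
    """Send one prompt to the LLM and return the text of its reply."""
    reply = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


def score(context: str, response: str) -> int:
    """Ask the LLM to grade the response against the assignment context (1-10)."""
    verdict = ask(
        f"Assignment context:\n{context}\n\nCandidate answer:\n{response}\n\n"
        "Rate the answer's accuracy from 1 to 10. Reply with the number only."
    )
    match = re.search(r"\d+", verdict)
    return int(match.group()) if match else 1


def improve(context: str, response: str) -> str:
    """Ask the LLM to rewrite the response using the assignment context."""
    return ask(
        f"Assignment context:\n{context}\n\n"
        f"Rewrite this answer so it is accurate and complete:\n{response}"
    )


def assess_loop(context: str, response: str, threshold: int = 8, max_rounds: int = 3) -> str:
    """Score, rewrite, and repeat until the score clears the threshold or we give up."""
    for _ in range(max_rounds):
        if score(context, response) >= threshold:
            break
        response = improve(context, response)
    return response
```

In the project itself, `context` would presumably come back from the vector-database lookup over your PDF rather than being handed in directly.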
## Setup

1. Clone:
```bash
git clone https://github.com/Ashad001/LLM-response-assessor.git
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Add your `.env` file (how it typically gets loaded is sketched after these steps):
```
GROQ_API_KEY=your_groq_api_key
```
4. Run the script:
```bash
python main.py
```
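The README doesn’t spell out how `main.py` reads the key, but a common pattern (assumed here, along with the `python-dotenv` dependency) is to load `.env` into the environment at startup:

```python
# Assumption: one common way a script like main.py might pick up GROQ_API_KEY from .env.
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed
from groq import Groq

load_dotenv()  # copies entries from .env in the working directory into os.environ
client = Groq(api_key=os.environ["GROQ_API_KEY"])
```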