Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
A Python library to check your LLM outputs for hallucinations.
- Host: GitHub
- URL: https://github.com/lexiestleszek/hallucination_checker
- Owner: LexiestLeszek
- License: mit
- Created: 2024-11-05T10:43:43.000Z (15 days ago)
- Default Branch: main
- Last Pushed: 2024-11-05T10:57:56.000Z (15 days ago)
- Last Synced: 2024-11-05T11:44:01.257Z (15 days ago)
- Language: Python
- Size: 18.6 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# LLM Hallucination Checker
A robust and flexible library for detecting hallucinations in Large Language Model (LLM) responses.
## Features
- Multiple validation strategies: Strict, Moderate, and Relaxed
- Detailed categorization of hallucinations
- Customizable LLM function integration
- Comprehensive validation history and reporting

## Quick Start
```python
from halucheck import HallucinationChecker, ValidationStrategy

# Define your LLM function
def my_llm_function(prompt):
    # Your LLM API call here
    pass

# Initialize the checker
checker = HallucinationChecker(my_llm_function)

# Check for hallucinations
content = "The sky is green."  # this is the LLM response
prompt = "Describe the sky."  # this is the prompt you passed to the LLM
is_hallucination = checker.check(content, prompt, strategy=ValidationStrategy.STRICT)  # returns True / False
print(f"Contains hallucination: {is_hallucination}")
```
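The `my_llm_function` stub above is where your own model call goes. As a rough sketch only (the OpenAI client, model name, and environment variable below are assumptions, not part of this library), it could look like this:

```python
# Illustrative sketch: any provider works, as long as the function
# takes a prompt string and returns the model's text response.
from openai import OpenAI  # assumption: the OpenAI Python SDK is installed

client = OpenAI()  # assumption: OPENAI_API_KEY is set in the environment

def my_llm_function(prompt):
    # Model name is an assumption; use whichever model you have access to.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```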
## Detailed Usage

### Initialization
```python
from llm_hallucination_detector import HallucinationChecker, ValidationStrategy, LLMResponse

checker = HallucinationChecker(my_llm_function)
```

### Checking for Hallucinations
```python
# Basic check
is_hallucination = checker.check(content, prompt)

# Check with context
is_hallucination = checker.check(content, prompt, context="Some additional context")

# Using different validation strategies
is_hallucination = checker.check(content, prompt, strategy=ValidationStrategy.MODERATE)
```
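A common way to use the boolean result is to regenerate until the checker stops flagging the response. This is a minimal sketch, assuming the `checker` and `my_llm_function` defined earlier; the prompt and retry cap are made up for illustration:

```python
# Illustrative sketch: regenerate until the response passes the check.
MAX_ATTEMPTS = 3  # assumption: cap retries rather than loop forever

prompt = "Describe the sky."  # hypothetical prompt
content = None
for _ in range(MAX_ATTEMPTS):
    candidate = my_llm_function(prompt)
    if not checker.check(candidate, prompt, strategy=ValidationStrategy.MODERATE):
        content = candidate  # not flagged; keep this response
        break
# content is still None here if every attempt was flagged as a hallucination
```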
### Advanced Usage with LLMResponse

```python
response = LLMResponse(content="The sky is green.", original_prompt="Describe the sky.")
is_hallucination = response.check_hallucination(my_llm_function, ValidationStrategy.STRICT)

# Get detailed validation results
details = response.get_validation_details()
print(details)
```
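The same `LLMResponse` interface also lends itself to batch checking. A minimal sketch, assuming the classes and functions shown above; the sample responses are invented for illustration:

```python
# Illustrative sketch: flag hallucinated responses in a batch.
responses = [
    LLMResponse(content="The sky is green.", original_prompt="Describe the sky."),
    LLMResponse(content="The sky often appears blue.", original_prompt="Describe the sky."),
]

flagged = [
    r for r in responses
    if r.check_hallucination(my_llm_function, ValidationStrategy.STRICT)
]

for r in flagged:
    print(r.get_validation_details())
```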
## Validation Strategies

- `STRICT`: Most rigorous checking. Flags even minor inconsistencies.
- `MODERATE`: Balanced approach. Allows for minor imprecisions.
- `RELAXED`: More lenient checking. Only flags major inconsistencies.
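Which strategy fits depends on how costly an undetected error is for your application. A hedged example of wiring that choice into the `check` call shown earlier (the mapping below is a suggestion, not something the library enforces):

```python
# Illustrative sketch: choose a strategy per use case.
strategy_by_use_case = {
    "medical_summary": ValidationStrategy.STRICT,     # errors are costly
    "customer_support": ValidationStrategy.MODERATE,  # balanced
    "creative_writing": ValidationStrategy.RELAXED,   # only major issues matter
}

strategy = strategy_by_use_case["customer_support"]
is_hallucination = checker.check(content, prompt, strategy=strategy)
```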
## Hallucination Categories

- `FACTUAL_ERROR`
- `LOGICAL_INCONSISTENCY`
- `CONTEXT_DEVIATION`
- `UNSUPPORTED_CLAIM`
- `CONTRADICTORY_STATEMENT`

## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License.