https://github.com/chigwell/llmtestr
A new package that helps developers integration-test AI and LLM applications by validating structured outputs. It takes a user's test scenario or prompt as input, sends it to an LLM, and uses pattern matching to verify that the response matches predefined patterns such as code snippets, JSON structures, or tagged responses.
- Host: GitHub
- URL: https://github.com/chigwell/llmtestr
- Owner: chigwell
- Created: 2025-12-21T13:56:40.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2025-12-21T13:56:47.000Z (4 months ago)
- Last Synced: 2025-12-23T04:56:06.136Z (4 months ago)
- Topics: ai-powered-system-testing, ai-testing, automated-test-execution, code-snippet-validation, developer-tooling, formatting-error-detection, integration-testing, json-schema-validation, llm-validation, output-consistency-enforcement, output-content-verification, pattern-matching, prompt-driven-testing, regression-detection, response-format-checking, schema-enforcement, structured-output-verification, tagged-response-validation, test-automation, test-scenario-input
- Language: Python
- Homepage: https://pypi.org/project/llmtestr/
- Size: 3.91 KB
- Stars: 1
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# LLM Test Helper (`llmtestr`)
[PyPI](https://badge.fury.io/py/llmtestr)
[License: MIT](https://opensource.org/licenses/MIT)
[Downloads](https://pepy.tech/project/llmtestr)
[LinkedIn](https://www.linkedin.com/in/eugene-evstafev-716669181/)
`llmtestr` is a Python package that helps developers integration-test AI and language-model applications by validating structured outputs. It provides a simple interface that sends a prompt or test scenario to an LLM and then verifies that the response matches predefined patterns. This ensures that LLM outputs adhere to expected formats such as code snippets, JSON structures, or tagged responses, making it easier to catch formatting errors, regressions, and inconsistencies during development and testing.
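The core idea of pattern-validating an LLM's raw text can be sketched independently of the package. The response string and the regex below are illustrative only, not `llmtestr`'s internal patterns:

```python
import json
import re

FENCE = "`" * 3  # a literal ``` built programmatically to keep this snippet fence-safe

# Hypothetical raw LLM response containing a fenced JSON block (illustrative only).
fake_response = f"Here is the result:\n{FENCE}json\n" + '{"status": "ok", "items": [1, 2]}' + f"\n{FENCE}"

# Pattern for a fenced JSON payload; llmtestr's own patterns may differ.
expected = re.compile(FENCE + r"json\s*(\{.*?\})\s*" + FENCE, re.DOTALL)

match = expected.search(fake_response)
assert match is not None, "response is missing a fenced JSON block"

# Beyond shape, the extracted payload must also parse as valid JSON.
payload = json.loads(match.group(1))
print(payload["status"])
```

Checking both the surface pattern and JSON parseability catches the two most common failure modes: a model that drops the fence, and a model that emits malformed JSON inside it.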
## Installation
Install the package via pip:
```bash
pip install llmtestr
```
## Usage
Here's a basic example of how to use `llmtestr` in your Python code:
```python
from llmtestr import llmtestr

# Sends the prompt to the default LLM (ChatLLM7) and returns its response.
response = llmtestr(user_input="Your test prompt here")
print(response)
```
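In a test suite, the returned value can be asserted on like any other string. The sketch below swaps in a local stand-in for `llmtestr` so it runs offline; the stub and the expected-shape regex are illustrative, not part of the package's API:

```python
import re

def fake_llmtestr(user_input: str) -> str:
    """Offline stand-in for llmtestr; the real call would query an LLM."""
    return '{"answer": 42}'

def test_response_looks_like_json_object():
    response = fake_llmtestr(user_input="Return a JSON object")
    # Assert on the shape of the output, not its exact wording.
    assert re.fullmatch(r"\s*\{.*\}\s*", response, re.DOTALL), response

test_response_looks_like_json_object()
print("shape check passed")
```

Asserting on shape rather than exact wording keeps tests stable across runs, since LLM outputs are rarely byte-for-byte identical.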
## Function Parameters
- `user_input` (*str*): The prompt or test scenario you want to evaluate.
- `llm` (*Optional[BaseChatModel]*): An optional `langchain` LLM instance to use. If not provided, `llmtestr` defaults to using `ChatLLM7`.
- `api_key` (*Optional[str]*): Your API key for `LLM7`. If not provided, it will attempt to fetch from the environment variable `LLM7_API_KEY`.
## Underlying LLM
By default, `llmtestr` uses the `ChatLLM7` class from `langchain_llm7`, which you can find on PyPI: [langchain_llm7](https://pypi.org/project/langchain-llm7/). You can also pass your custom LLM instance based on the `langchain` interface for different providers like OpenAI, Anthropic, Google, etc.
### Examples:
**Using OpenAI:**
```python
from langchain_openai import ChatOpenAI
from llmtestr import llmtestr

llm = ChatOpenAI(model="gpt-4o-mini")  # any chat model you have access to
response = llmtestr(user_input="Test prompt", llm=llm)
```
**Using Anthropic:**
```python
from langchain_anthropic import ChatAnthropic
from llmtestr import llmtestr

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # ChatAnthropic requires a model name
response = llmtestr(user_input="Test prompt", llm=llm)
```
**Using Google Generative AI:**
```python
from langchain_google_genai import ChatGoogleGenerativeAI
from llmtestr import llmtestr

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # ChatGoogleGenerativeAI requires a model name
response = llmtestr(user_input="Test prompt", llm=llm)
```
## Rate Limits & API Keys
The default free tier for LLM7 provides sufficient rate limits for most development needs. To get higher limits, you can:
- Set the environment variable `LLM7_API_KEY`.
- Pass your API key directly:
```python
response = llmtestr(user_input="Test prompt", api_key="your_api_key")
```
Register for a free API key at [https://token.llm7.io/](https://token.llm7.io/).
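For CI or a local shell, exporting the variable once avoids passing the key in code. The key value below is a placeholder:

```bash
# Make the key available to any process started from this shell session.
export LLM7_API_KEY="your_api_key"

# Quick sanity check that the variable is visible to Python.
python3 -c 'import os; print("key set:", "LLM7_API_KEY" in os.environ)'
```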
## Support & Issues
For issues, bugs, or feature requests, please visit the GitHub repo issues page: [https://github.com/chigwell/llmtestr/issues](https://github.com/chigwell/llmtestr/issues).
## Author
Eugene Evstafev
Email: hi@euegne.plus
GitHub: [chigwell](https://github.com/chigwell)