Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/confident-ai/deepeval
The LLM Evaluation Framework
- Host: GitHub
- URL: https://github.com/confident-ai/deepeval
- Owner: confident-ai
- License: apache-2.0
- Created: 2023-08-10T05:35:04.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-05-21T03:18:33.000Z (4 months ago)
- Last Synced: 2024-05-21T06:53:42.429Z (4 months ago)
- Topics: evaluation-framework, evaluation-metrics, llm-evaluation, llm-evaluation-framework, llm-evaluation-metrics
- Language: Python
- Homepage: https://docs.confident-ai.com/
- Size: 27.1 MB
- Stars: 1,929
- Watchers: 15
- Forks: 138
- Open Issues: 46
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE.md
Awesome Lists containing this project
- awesome - confident-ai/deepeval - The LLM Evaluation Framework (Python)
- awesome-ChatGPT-repositories - deepeval - The Evaluation Framework for LLMs (NLP)
- awesome-generative-ai - confident-ai/deepeval
- awesome-chatgpt - confident-ai/deepeval - DeepEval is a Python library that provides an evaluation framework for LLM applications, allowing for unit testing and performance evaluation based on various metrics. (SDK, Libraries, Frameworks / Python)
- awesome-production-machine-learning - DeepEval - DeepEval is a simple-to-use, open-source evaluation framework for LLM applications. (Evaluation and Observability)
- Awesome-LLM-RAG-Application - deepeval