https://github.com/open-evals/evals
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
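As a rough illustration of how the upstream Evals registry works, benchmarks are typically declared as YAML entries that point at an eval class and a samples file. The sketch below follows the conventions used in openai/evals; the names (`my-match-eval`, the sample path) are illustrative, not taken from this fork:

```yaml
# Hypothetical registry entry, following the openai/evals registry format.
my-match-eval:
  id: my-match-eval.s1.simple-v0
  description: Checks that model completions exactly match reference answers.
  metrics: [accuracy]

my-match-eval.s1.simple-v0:
  # Eval implementation class from the upstream evals package.
  class: evals.elsuite.basic.match:Match
  args:
    # Path (relative to the registry data dir) to JSONL samples; illustrative.
    samples_jsonl: my_match_eval/samples.jsonl
```

An eval defined this way would then be run with the upstream CLI, e.g. `oaieval <model> my-match-eval`.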
- Host: GitHub
- URL: https://github.com/open-evals/evals
- Owner: open-evals
- License: MIT
- Fork: true (openai/evals)
- Created: 2023-03-16T01:34:58.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-03-23T02:43:51.000Z (about 2 years ago)
- Last Synced: 2024-11-05T02:35:54.356Z (5 months ago)
- Language: Python
- Homepage:
- Size: 92.8 KB
- Stars: 19
- Watchers: 1
- Forks: 3
- Open Issues: 3
Awesome Lists containing this project
- Awesome_Multimodel_LLM - Open-evals - A framework extending OpenAI's [Evals](https://github.com/openai/evals) to different language models. (Other useful resources)
- awesome-llm - Open-evals - An extended framework for evaluating different language models. (Other resources / LLM evaluation tools)
- Awesome-LLM - Open-evals - A framework extending OpenAI's [Evals](https://github.com/openai/evals) to different language models. (Miscellaneous)