{"id":13546178,"url":"https://github.com/explodinggradients/ragas","last_synced_at":"2025-08-22T12:24:43.152Z","repository":{"id":164859350,"uuid":"637924634","full_name":"explodinggradients/ragas","owner":"explodinggradients","description":"Supercharge Your LLM Application Evaluations 🚀","archived":false,"fork":false,"pushed_at":"2025-08-15T09:02:46.000Z","size":45090,"stargazers_count":10337,"open_issues_count":431,"forks_count":1025,"subscribers_count":40,"default_branch":"main","last_synced_at":"2025-08-15T10:11:31.333Z","etag":null,"topics":["evaluation","llm","llmops"],"latest_commit_sha":null,"homepage":"https://docs.ragas.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/explodinggradients.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2023-05-08T17:48:04.000Z","updated_at":"2025-08-15T09:02:49.000Z","dependencies_parsed_at":null,"dependency_job_id":"3c01a525-1452-4399-b2b5-be689e71e945","html_url":"https://github.com/explodinggradients/ragas","commit_stats":null,"previous_names":[],"tags_count":76,"template":false,"template_full_name":null,"purl":"pkg:github/explodinggradients/ragas","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/explodinggradients%2Fragas","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/explodinggradients%2Fragas/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/explodinggradients%2Fragas/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositori
es/explodinggradients%2Fragas/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/explodinggradients","download_url":"https://codeload.github.com/explodinggradients/ragas/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/explodinggradients%2Fragas/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":271636052,"owners_count":24794147,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-22T02:00:08.480Z","response_time":65,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["evaluation","llm","llmops"],"created_at":"2024-08-01T12:00:33.054Z","updated_at":"2025-08-22T12:24:38.125Z","avatar_url":"https://github.com/explodinggradients.png","language":"Python","readme":"\u003ch1 align=\"center\"\u003e\n  \u003cimg style=\"vertical-align:middle\" height=\"200\"\n  src=\"./docs/_static/imgs/logo.png\"\u003e\n\u003c/h1\u003e\n\u003cp align=\"center\"\u003e\n  \u003ci\u003eSupercharge Your LLM Application Evaluations 🚀\u003c/i\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://github.com/explodinggradients/ragas/releases\"\u003e\n        \u003cimg alt=\"GitHub release\" src=\"https://img.shields.io/github/release/explodinggradients/ragas.svg\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://www.python.org/\"\u003e\n            \u003cimg alt=\"Build\" 
src=\"https://img.shields.io/badge/Made%20with-Python-1f425f.svg?color=purple\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://github.com/explodinggradients/ragas/blob/master/LICENSE\"\u003e\n        \u003cimg alt=\"License\" src=\"https://img.shields.io/github/license/explodinggradients/ragas.svg?color=green\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://pypi.org/project/ragas/\"\u003e\n        \u003cimg alt=\"PyPI downloads\" src=\"https://img.shields.io/pypi/dm/ragas\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://discord.gg/5djav8GGNZ\"\u003e\n        \u003cimg alt=\"discord-invite\" src=\"https://dcbadge.vercel.app/api/server/5djav8GGNZ?style=flat\"\u003e\n    \u003c/a\u003e\n\u003c/p\u003e\n\n\u003ch4 align=\"center\"\u003e\n    \u003cp\u003e\n        \u003ca href=\"https://docs.ragas.io/\"\u003eDocumentation\u003c/a\u003e |\n        \u003ca href=\"#fire-quickstart\"\u003eQuick start\u003c/a\u003e |\n        \u003ca href=\"https://discord.gg/5djav8GGNZ\"\u003eJoin Discord\u003c/a\u003e\n    \u003c/p\u003e\n\u003c/h4\u003e\n\nObjective metrics, intelligent test generation, and data-driven insights for LLM apps\n\nRagas is your ultimate toolkit for evaluating and optimizing Large Language Model (LLM) applications. Say goodbye to time-consuming, subjective assessments and hello to data-driven, efficient evaluation workflows.\nDon't have a test dataset ready? 
We also do production-aligned test set generation.\n\n## Key Features\n\n- 🎯 Objective Metrics: Evaluate your LLM applications with precision using both LLM-based and traditional metrics.\n- 🧪 Test Data Generation: Automatically create comprehensive test datasets covering a wide range of scenarios.\n- 🔗 Seamless Integrations: Works flawlessly with popular LLM frameworks like LangChain and major observability tools.\n- 📊 Build Feedback Loops: Leverage production data to continually improve your LLM applications.\n\n## :shield: Installation\n\nFrom PyPI:\n\n```bash\npip install ragas\n```\n\nAlternatively, from source:\n\n```bash\npip install git+https://github.com/explodinggradients/ragas\n```\n\n## :fire: Quickstart\n\n### Evaluate your RAG with Ragas metrics\n\nThe core evaluation takes just a few lines:\n\n```python\nfrom ragas import evaluate\nfrom ragas.metrics import LLMContextRecall, Faithfulness, FactualCorrectness\nfrom langchain_openai.chat_models import ChatOpenAI\nfrom ragas.llms import LangchainLLMWrapper\n\nevaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=\"gpt-4o\"))\nmetrics = [LLMContextRecall(), FactualCorrectness(), Faithfulness()]\n\n# eval_dataset is your prepared evaluation dataset (see the quickstart linked below)\nresults = evaluate(dataset=eval_dataset, metrics=metrics, llm=evaluator_llm)\n```\n\nFind the complete RAG Evaluation Quickstart here: [https://docs.ragas.io/en/latest/getstarted/rag_evaluation/](https://docs.ragas.io/en/latest/getstarted/rag_evaluation/)\n\n\u003cdetails\u003e\n\u003csummary\u003e🖱️ Click to see a preview of the results\u003c/summary\u003e\n\n| user_input | retrieved_contexts | response | reference | context_recall | factual_correctness | faithfulness |\n|------------|---------------------|----------|-----------|-----------------|---------------------|---------------|\n| What are the global implications of the USA Supreme Court ruling on abortion? | \"- In 2022, the USA Supreme Court ... - The ruling has created a chilling effect ...\" | The global implications ... Here are some potential implications: | The global implications ... 
Additionally, the ruling has had an impact beyond national borders ... | 1 | 0.47 | 0.516129 |\n| Which companies are the main contributors to GHG emissions ... ? | \"- Fossil fuel companies ... - Between 2010 and 2020, human mortality ...\" | According to the Carbon Majors database ... Here are the top contributors: | According to the Carbon Majors database ... Additionally, between 2010 and 2020, human mortality ... | 1 | 0.11 | 0.172414 |\n| Which private companies in the Americas are the largest GHG emitters ... ? | \"The private companies responsible ... The largest emitter amongst state-owned companies ...\" | According to the Carbon Majors database, the largest private companies ... | The largest private companies in the Americas ... | 1 | 0.26 | 0 |\n\u003c/details\u003e\n\n### Generate a test dataset for comprehensive RAG evaluation\n\nWhat if you don't have a dataset of the questions users ask when they interact with your RAG system?\n\nRagas can help with [synthetic test set generation](https://docs.ragas.io/en/latest/getstarted/rag_testset_generation/): seed it with your own data and control the difficulty, variety, and complexity of the generated test set.\n\n## 🫂 Community\n\nIf you want to get more involved with Ragas, check out our [Discord server](https://discord.gg/5qGUJ6mh7C). It's a fun community where we geek out about LLMs, retrieval, production issues, and more.\n\n## Contributors\n\n```yml\n+----------------------------------------------------------------------------+\n|     +----------------------------------------------------------------+     |\n|     | Developers: Those who built with `ragas`.                      |     |\n|     | (You have `import ragas` somewhere in your project)            |     |\n|     |     +----------------------------------------------------+     |     |\n|     |     | Contributors: Those who make `ragas` better.       
|     |     |\n|     |     | (You make PR to this repo)                         |     |     |\n|     |     +----------------------------------------------------+     |     |\n|     +----------------------------------------------------------------+     |\n+----------------------------------------------------------------------------+\n```\n\nWe welcome contributions from the community! Whether it's bug fixes, feature additions, or documentation improvements, your input is valuable.\n\n1. Fork the repository\n2. Create your feature branch (`git checkout -b feature/AmazingFeature`)\n3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)\n4. Push to the branch (`git push origin feature/AmazingFeature`)\n5. Open a Pull Request\n\n## 🔍 Open Analytics\n\nAt Ragas, we believe in transparency. We collect minimal, anonymized usage data to improve our product and guide our development efforts.\n\n✅ No personal or company-identifying information\n\n✅ Open-source data collection [code](./src/ragas/_analytics.py)\n\n✅ Publicly available aggregated [data](https://github.com/explodinggradients/ragas/issues/49)\n\nTo opt out, set the `RAGAS_DO_NOT_TRACK` environment variable to `true`.\n","funding_links":[],"categories":["Tools","🤖 LLM \u0026 Chatbot Testing","🛠️ Popular Open-Source Libraries for LLM Development","📊 Evaluation \u0026 Benchmarking","Open-source LLM-backed app evaluation products","Azure Cognitive Search \u0026 OpenAI","大型语言模型（LLM）排行榜","Python","🧺 Curated catalog","Evaluation","Retrieval Augmented Generation (RAG) Datasets \u003ca id=\"retrieval-augmented-generation-rag-datasets\"\u003e\u003c/a\u003e","\u003cspan id=\"game\"\u003eGame (World Model \u0026 Agent)\u003c/span\u003e","others","A01_文本生成_文本对话","Evaluation \u0026 Benchmarking","LLM Evaluation:","Evaluation and Monitoring","llm","知识库 RAG","NLP","RAG","LLM Evaluation Framework","\u003ca id=\"tools\"\u003e\u003c/a\u003e🛠️ Tools","语言资源库","Tools \u0026 Platforms","RAG Benchmarks","What's New","Evaluation 
Frameworks","Orchestration","Tools and Code","Evaluation, Observability \u0026 Safety","Advanced Techniques","Evaluation Metrics and Benchmarks"],"sub_categories":["Chatbots","3. The Enterprise / High-Scale Stack (The 1%)","LLM 评估工具","Evaluation","Evaluation Datasets \u003ca id=\"evaluation02\"\u003e\u003c/a\u003e","\u003cspan id=\"tool\"\u003eLLM (LLM \u0026 Tool)\u003c/span\u003e","大语言对话模型及数据","3. Pretraining","Model Testing \u0026 Validation","LLM Evaluations and Benchmarks","python","Open Source Frameworks","🆕 Recently Added (January 2026)","Vector Store Tutorials","Application Framework","Evaluators and Test Harnesses","LLM Evaluation Tools","Evaluation \u0026 Observability","Comparison Guides"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fexplodinggradients%2Fragas","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fexplodinggradients%2Fragas","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fexplodinggradients%2Fragas/lists"}
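For readers consuming this record programmatically: the payload nests the GitHub-level fields (`full_name`, `stargazers_count`, `topics`, `license`, ...) under the `repository` key, while curation data such as `keywords` and `categories` sits at the top level. A minimal sketch of extracting the commonly used fields with only the standard library; `record_json` below is a hand-trimmed subset of this record, not the full payload:

```python
import json

# A trimmed record shaped like the ecosyste.ms repository payload above.
# Field names and values are copied from this record; the real payload
# carries many more fields (readme, host, categories, ...).
record_json = """
{
  "url": "https://github.com/explodinggradients/ragas",
  "repository": {
    "full_name": "explodinggradients/ragas",
    "language": "Python",
    "license": "apache-2.0",
    "stargazers_count": 10337,
    "forks_count": 1025,
    "topics": ["evaluation", "llm", "llmops"]
  },
  "keywords": ["evaluation", "llm", "llmops"]
}
"""

record = json.loads(record_json)
repo = record["repository"]

# One-line summary of the kind a catalog page might render from this payload.
summary = (
    f"{repo['full_name']} [{repo['language']}, {repo['license']}] "
    f"stars: {repo['stargazers_count']}; topics: {', '.join(repo['topics'])}"
)
print(summary)
```

Note that the GitHub-level counters (`stargazers_count`, `forks_count`) reflect the record's `last_synced_at` timestamp, not live values.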