# explainX: Explainable AI Framework for Data Scientists
<img src="explainx_logo.png" align="right" width="150"/>

### We are looking for co-authors to take this project forward. Reach out @ ms8909@nyu.edu
#### ExplainX is a model explainability/interpretability framework for data scientists and business users.

[![Supported Python versions](https://img.shields.io/badge/python-3.6%20|%203.7|%203.8-brightgreen.svg)](https://pypi.org/project/explainx/)
[![Downloads](https://pepy.tech/badge/explainx)](https://pepy.tech/project/explainx)
![Maintenance](https://img.shields.io/maintenance/yes/2020?style=flat-square)
[![Website](https://img.shields.io/website?)]()

Use explainX to understand overall model behavior, explain the "why" behind model predictions, remove biases, and create convincing explanations for your business stakeholders.
[![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Explain%20any%20black-box%20Machine%20Learning%20model%20in%20just%20one%20line%20of%20code%21&hashtags=xai,explainable_ai,explainable_machine_learning,trust_in_ai,transparent_ai)

<img width="1000" src="rf_starter_example.png" alt="explainX AI explainable AI library">

#### Why do we need model explainability & interpretability?

It is essential for:
1. Explaining model predictions
2. Debugging models
3. Detecting biases in data
4. Gaining the trust of business users
5. Successfully deploying AI solutions

#### What questions can we answer with explainX?

1. Why did my model make a mistake?
2. Is my model biased? If yes, where?
3. How can I understand and trust the model's decisions?
4. Does my model satisfy legal & regulatory requirements?

#### We have deployed the app on our server so you can play around with the dashboard. Check it out:

Dashboard Demo: http://3.128.188.55:8080/

# Getting Started

## Installation

Python 3.5+ | Linux, Mac, Windows

```sh
pip install explainx
```

To install on Windows, please install [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) first, then install the explainX package via `pip`.

## Installation on the cloud
If you are using a notebook instance on the cloud (AWS SageMaker, Colab, Azure), please follow our step-by-step guide to install & run explainX on the cloud.


## Usage (Example)
After successfully installing explainX, open your Python IDE or Jupyter Notebook and follow the code below to use it:

1. 
Import the required modules.

```python
from explainx import *
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
```

2. Load your dataset into X_data and Y_data.

```python
# Load Dataset: X_Data, Y_Data
# X_Data = Pandas DataFrame
# Y_Data = Numpy Array or List

X_data, Y_data = explainx.dataset_heloc()
```

3. Split the dataset into training & testing sets.

```python
X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data, test_size=0.3, random_state=0)
```

4. Train your model.

```python
# Train a RandomForest Model
model = RandomForestClassifier()
model.fit(X_train, Y_train)
```

After you're done training the model, you can either access the complete explainability dashboard or use individual techniques.


## Complete Explainability Dashboard

To access the entire dashboard with all the explainability techniques under one roof, follow the code below. It is great for sharing your work with your peers and managers in an interactive and easy-to-understand way.

5.1. Pass your model and dataset into the explainX function:

```python
explainx.ai(X_test, Y_test, model, model_name="randomforest")
```

5.2. Click on the dashboard link to start exploring model behavior:

```
App running on https://127.0.0.1:8080
```


## Explainability Modules

In this latest release, we have also added the option to use explainability techniques individually, so you can choose the technique that fits your AI use case.

6.1. Pass your model, X_Data and Y_Data into the explainx_modules function.
\r\n\r\n```python\r\n\r\nexplainx_modules.ai(model, X_test, Y_test)\r\n\r\n```\r\nAs an upgrade, we have eliminated the need to pass in the model name as explainX is smart enough to identify the model type and problem type i.e. classification or regression, by itself. \r\n\r\nYou can access multiple modules:\r\n\r\nModule 1: Dataframe with Predictions\r\n```python\r\n\r\nexplainx_modules.dataframe_graphing()\r\n\r\n```\r\n\r\nModule 2: Model Metrics\r\n```python\r\n\r\nexplainx_modules.metrics()\r\n\r\n```\r\n\r\nModule 3: Global Level SHAP Values\r\n```python\r\n\r\nexplainx_modules.shap_df()\r\n\r\n```\r\n\r\nModule 4: What-If Scenario Analysis (Local Level Explanations)\r\n```python\r\n\r\nexplainx_modules.what_if_analysis()\r\n\r\n```\r\n\r\nModule 5: Partial Dependence Plot \u0026 Summary Plot\r\n```python\r\n\r\nexplainx_modules.feature_interactions()\r\n\r\n```\r\n\r\nModule 6: Model Performance Comparison (Cohort Analysis)\r\n```python\r\n\r\nexplainx_modules.cohort_analysis()\r\n\r\n```\r\n\r\nTo access the modules within your jupyter notebook as IFrames, just pass the \u003cb\u003emode='inline'\u003c/b\u003e argument. 
\r\n\r\n\r\n## Cloud Installation\r\n\r\n**If you are running explainX on the cloud e.g., AWS Sagemaker?** **https://0.0.0.0:8080** will not work.\r\n\r\nAfter installation is complete, just open your **terminal** and run the following command.\r\n```jupyter\r\n\r\nlt -h \"https://serverless.social\" -p [port number]\r\n\r\n```\r\n```jupyter\r\n\r\nlt -h \"https://serverless.social\" -p 8080\r\n\r\n```\r\n\r\n\u003cimg width=\"1000\" src=\"demo-explainx-with-sound.gif\" alt=\"explainX ai explainable ai\"\u003e\r\n\r\n\r\n\r\n## Walkthough Video Tutorial\r\n\r\nPlease click on the image below to load the tutorial:\r\n\r\n[![here](https://github.com/explainX/explainx/blob/master/explain_video_img.png)](https://youtu.be/CDMpOismME8)  \r\n\r\n(Note: Please manually set it to 720p or greater to have the text appear clearly)\r\n\r\n\r\n\r\n## Supported Techniques\r\n\r\n|Interpretability Technique | Status |\r\n|--|--|\r\n|SHAP Kernel Explainer| Live |\r\n|SHAP Tree Explainer| Live |\r\n|What-if Analysis| Live |\r\n|Model Performance Comparison | Live |\r\n|Partial Dependence Plot| Live |\r\n|Surrogate Decision Tree | Coming Soon |\r\n|Anchors | Coming Soon |\r\n|Integrated Gradients (IG)| Coming Soon |\r\n\r\n\r\n\r\n## Main Models Supported\r\n\r\n| No. | Model Name | Status |\r\n|--|--|--|\r\n|1. | Catboost | Live|\r\n|2. | XGboost==1.0.2 | Live|\r\n|3. | Gradient Boosting Regressor| Live|\r\n|4. | RandomForest Model| Live|\r\n|5. | SVM|Live|\r\n|6. | KNeighboursClassifier| Live\r\n|7. | Logistic Regression| Live |\r\n|8. | DecisionTreeClassifier|Live |\r\n|9. | All Scikit-learn Models|Live |\r\n|10.| Neural Networks|Live |\r\n|11.| H2O.ai AutoML | Live |\r\n|12.| TensorFlow Models | Coming Soon |\r\n|13.| PyTorch Models | Coming Soon |\r\n\r\n\r\n\r\n## Contributing\r\nPull requests are welcome. 
To make changes to explainX, the ideal approach is to fork the repository and then clone the fork locally.

For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

## Report Issues

Please help us by [reporting any issues](https://github.com/explainX/explainx/issues/new) you may have while using explainX.

## License
[MIT](https://choosealicense.com/licenses/mit/)