# AI Fairness 360 (AIF360)

[![Continuous Integration](https://github.com/Trusted-AI/AIF360/actions/workflows/ci.yml/badge.svg)](https://github.com/Trusted-AI/AIF360/actions/workflows/ci.yml)
[![Documentation](https://readthedocs.org/projects/aif360/badge/?version=latest)](https://aif360.readthedocs.io/en/latest/?badge=latest)
[![PyPI version](https://badge.fury.io/py/aif360.svg)](https://badge.fury.io/py/aif360)
[![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/aif360)](https://cran.r-project.org/package=aif360)

The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. The AI Fairness 360 package is available in both Python and R.

The AI Fairness 360 package includes
1) a comprehensive set of metrics for datasets and models to test for biases,
2) explanations for these metrics, and
3) algorithms to mitigate bias in datasets and models.

It is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use it and improve it.

The [AI Fairness 360 interactive experience](https://aif360.res.ibm.com/data) provides a gentle introduction to the concepts and capabilities. The [tutorials and other notebooks](./examples) offer a deeper, data-scientist-oriented introduction. The complete API is also available.

Because the toolkit offers such a comprehensive set of capabilities, it can be hard to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created some [guidance material](https://aif360.res.ibm.com/resources#guidance) that can be consulted.

We have developed the package with extensibility in mind. This library is still in development.
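To build intuition for what the simplest of these group-fairness metrics measure, here is a from-scratch sketch of statistical parity difference and disparate impact, computed directly from selection rates. This is plain illustrative Python; the helper functions below are hypothetical and are not the AIF360 API, which exposes these quantities through its metric classes.

```python
# Illustrative only: AIF360 computes these via its metric classes; these
# hypothetical helpers just show the arithmetic behind two group metrics.

def selection_rate(labels, favorable=1):
    """Fraction of instances that receive the favorable outcome."""
    return sum(1 for y in labels if y == favorable) / len(labels)

def statistical_parity_difference(unpriv_labels, priv_labels):
    # P(Y=1 | unprivileged) - P(Y=1 | privileged); 0.0 means parity.
    return selection_rate(unpriv_labels) - selection_rate(priv_labels)

def disparate_impact(unpriv_labels, priv_labels):
    # Ratio of selection rates; 1.0 means parity, and values below 0.8
    # correspond to the common "four-fifths rule" threshold.
    return selection_rate(unpriv_labels) / selection_rate(priv_labels)

# Toy example: 2 of 4 unprivileged vs. 4 of 5 privileged get a loan.
unpriv = [1, 0, 1, 0]
priv = [1, 1, 0, 1, 1]
print(round(statistical_parity_difference(unpriv, priv), 3))  # -0.3
print(disparate_impact(unpriv, priv))  # 0.625
```

The toolkit's metrics generalize this idea to error rates, subgroups, and individual-level distortion measures.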
We encourage the contribution of your metrics, explainers, and debiasing algorithms.

Get in touch with us on [Slack](https://aif360.slack.com) (invitation [here](https://join.slack.com/t/aif360/shared_invite/zt-5hfvuafo-X0~g6tgJQ~7tIAT~S294TQ))!

## Supported bias mitigation algorithms

* Optimized Preprocessing ([Calmon et al., 2017](http://papers.nips.cc/paper/6988-optimized-pre-processing-for-discrimination-prevention))
* Disparate Impact Remover ([Feldman et al., 2015](https://doi.org/10.1145/2783258.2783311))
* Equalized Odds Postprocessing ([Hardt et al., 2016](https://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning))
* Reweighing ([Kamiran and Calders, 2012](http://doi.org/10.1007/s10115-011-0463-8))
* Reject Option Classification ([Kamiran et al., 2012](https://doi.org/10.1109/ICDM.2012.45))
* Prejudice Remover Regularizer ([Kamishima et al., 2012](https://rd.springer.com/chapter/10.1007/978-3-642-33486-3_3))
* Calibrated Equalized Odds Postprocessing ([Pleiss et al., 2017](https://papers.nips.cc/paper/7151-on-fairness-and-calibration))
* Learning Fair Representations ([Zemel et al., 2013](http://proceedings.mlr.press/v28/zemel13.html))
* Adversarial Debiasing ([Zhang et al., 2018](https://arxiv.org/abs/1801.07593))
* Meta-Algorithm for Fair Classification ([Celis et al., 2018](https://arxiv.org/abs/1806.06055))
* Rich Subgroup Fairness ([Kearns et al., 2018](https://arxiv.org/abs/1711.05144))
* Exponentiated Gradient Reduction ([Agarwal et al., 2018](https://arxiv.org/abs/1803.02453))
* Grid Search Reduction ([Agarwal et al., 2018](https://arxiv.org/abs/1803.02453), [Agarwal et al., 2019](https://arxiv.org/abs/1905.12843))
* Fair Data Adaptation ([Plečko and Meinshausen, 2020](https://www.jmlr.org/papers/v21/19-966.html), [Plečko et al., 2021](https://arxiv.org/abs/2110.10200))
* Sensitive Set Invariance/Sensitive Subspace Robustness ([Yurochkin and Sun, 2020](https://arxiv.org/abs/2006.14168), [Yurochkin et al., 2019](https://arxiv.org/abs/1907.00020))

## Supported fairness metrics

* Comprehensive set of group fairness metrics derived from selection rates and error rates, including rich subgroup fairness
* Comprehensive set of sample distortion metrics
* Generalized Entropy Index ([Speicher et al., 2018](https://doi.org/10.1145/3219819.3220046))
* Differential Fairness and Bias Amplification ([Foulds et al., 2018](https://arxiv.org/pdf/1807.08362))
* Bias Scan with Multi-Dimensional Subset Scan ([Zhang and Neill, 2017](https://arxiv.org/abs/1611.08292))

## Setup

### R

``` r
install.packages("aif360")
```

For more details regarding the R setup, please refer to the instructions [here](aif360/aif360-r/README.md).

### Python

Supported Python configurations:

| OS      | Python version |
| ------- | -------------- |
| macOS   | 3.8 – 3.11     |
| Ubuntu  | 3.8 – 3.11     |
| Windows | 3.8 – 3.11     |

### (Optional) Create a virtual environment

AIF360 requires specific versions of many Python packages which may conflict with other projects on your system. A virtual environment manager is strongly recommended so that dependencies can be installed safely. If you have trouble installing AIF360, try this first.

#### Conda

Conda is recommended for all configurations, though Virtualenv is generally interchangeable for our purposes. [Miniconda](https://conda.io/miniconda.html) is sufficient (see [the difference between Anaconda and Miniconda](https://conda.io/docs/user-guide/install/download.html#anaconda-or-miniconda) if you are curious) if you do not already have conda installed.

Then, to create a new Python 3.11 environment, run:

```bash
conda create --name aif360 python=3.11
conda activate aif360
```

The shell prompt should now look like `(aif360) $`.
To deactivate the environment, run:

```bash
(aif360)$ conda deactivate
```

The prompt will return to `$ `.

### Install with `pip`

To install the latest stable version from PyPI, run:

```bash
pip install aif360
```

Note: Some algorithms require additional dependencies (although the metrics will all work out of the box). To install with certain algorithm dependencies included, run, e.g.:

```bash
pip install 'aif360[LFR,OptimPreproc]'
```

or, for complete functionality, run:

```bash
pip install 'aif360[all]'
```

The available extras are: `OptimPreproc`, `LFR`, `AdversarialDebiasing`, `DisparateImpactRemover`, `LIME`, `ART`, `Reductions`, `FairAdapt`, `inFairness`, `LawSchoolGPA`, `notebooks`, `tests`, `docs`, `all`.

If you encounter any errors, try the [Troubleshooting](#troubleshooting) steps.

### Manual installation

Clone the latest version of this repository:

```bash
git clone https://github.com/Trusted-AI/AIF360
```

If you'd like to run the examples, download the datasets now and place them in their respective folders as described in [aif360/data/README.md](aif360/data/README.md).

Then, navigate to the root directory of the project and run:

```bash
pip install --editable '.[all]'
```

#### Run the examples

To run the example notebooks, complete the manual installation steps above. Then, if you did not use the `[all]` option, install the additional requirements as follows:

```bash
pip install -e '.[notebooks]'
```

Finally, if you have not already, download the datasets as described in [aif360/data/README.md](aif360/data/README.md).

### Troubleshooting

If you encounter any errors during the installation process, look for your issue here and try the solutions.

#### TensorFlow

See the [Install TensorFlow with pip](https://www.tensorflow.org/install/pip) page for detailed instructions.

Note: we require `tensorflow >= 1.13.1`.

Once TensorFlow is installed, try re-running:

```bash
pip install 'aif360[AdversarialDebiasing]'
```

TensorFlow is only required for use with the `aif360.algorithms.inprocessing.AdversarialDebiasing` class.

#### CVXPY

On macOS, you may first have to install the Xcode Command Line Tools if you have never done so previously:

```sh
xcode-select --install
```

On Windows, you may need to download the [Microsoft C++ Build Tools for Visual Studio 2019](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16). See the [CVXPY Install](https://www.cvxpy.org/install/index.html#mac-os-x-windows-and-linux) page for up-to-date instructions.

Then, try reinstalling via:

```bash
pip install 'aif360[OptimPreproc]'
```

CVXPY is only required for use with the `aif360.algorithms.preprocessing.OptimPreproc` class.

## Using AIF360

The `examples` directory contains a diverse collection of Jupyter notebooks that use AI Fairness 360 in various ways. Both tutorials and demos illustrate working code using AIF360. Tutorials provide additional discussion that walks the user through the various steps of the notebook. See the details about [tutorials and demos here](examples/README.md).

## Citing AIF360

A technical description of AI Fairness 360 is available in this [paper](https://arxiv.org/abs/1810.01943). Below is the BibTeX entry for this paper.

```
@misc{aif360-oct-2018,
    title = "{AI Fairness} 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias",
    author = {Rachel K. E. Bellamy and Kuntal Dey and Michael Hind and
        Samuel C. Hoffman and Stephanie Houde and Kalapriya Kannan and
        Pranay Lohia and Jacquelyn Martino and Sameep Mehta and
        Aleksandra Mojsilovic and Seema Nagar and Karthikeyan Natesan Ramamurthy and
        John Richards and Diptikalyan Saha and Prasanna Sattigeri and
        Moninder Singh and Kush R. Varshney and Yunfeng Zhang},
    month = oct,
    year = {2018},
    url = {https://arxiv.org/abs/1810.01943}
}
```

## AIF360 Videos

* Introductory [video](https://www.youtube.com/watch?v=X1NsrcaRQTE) to AI Fairness 360 by Kush Varshney, September 20, 2018 (32 mins)

## Contributing

The development fork for Rich Subgroup Fairness (`inprocessing/gerryfair_classifier.py`) is [here](https://github.com/sethneel/aif360). Contributions are welcome, and a list of potential contributions from the authors can be found [here](https://trello.com/b/0OwPcbVr/gerryfair-development).
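As a taste of what the supported mitigation algorithms do, here is a from-scratch sketch of the Reweighing idea (Kamiran and Calders, 2012): each (group, label) cell is weighted by its expected-over-observed frequency, so that group membership and outcome become statistically independent under the reweighted distribution. This is illustrative plain Python, not the AIF360 `Reweighing` class, which operates on the toolkit's dataset objects.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Weight w(g, y) = P(group=g) * P(label=y) / P(group=g, label=y).

    Illustrative sketch of Kamiran & Calders (2012); AIF360's
    aif360.algorithms.preprocessing.Reweighing implements this on
    its dataset classes.
    """
    n = len(labels)
    group_counts = Counter(groups)               # counts per group
    label_counts = Counter(labels)               # counts per label
    joint_counts = Counter(zip(groups, labels))  # counts per (group, label)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }

# Toy data: the unprivileged group ("u") receives the favorable label (1)
# less often than the privileged group ("p").
groups = ["u", "u", "u", "u", "p", "p", "p", "p"]
labels = [1, 0, 0, 0, 1, 1, 1, 0]
w = reweighing_weights(groups, labels)
# Under-represented cells such as ("u", 1) get weight 2.0 (> 1), while
# over-represented cells such as ("p", 1) get weight 2/3 (< 1).
```

Training a classifier with these instance weights is the preprocessing step the library automates, alongside the other algorithms listed above.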