# Responsible AI

A collection of news articles, books, and papers on Responsible AI cases. The purpose is to study these cases and learn from them to avoid repeating the failures of the past.

The following collection is meant to serve as a reference for engineers, data scientists, and others making decisions about building technological solutions for real-world problems. The hope is that it will help us avoid repeating the mistakes of the past, whether by informing the design of new systems or the decision not to build a technological solution at all.

This is a living document, so please send suggestions for additions through "Issues" or feel free to open pull requests. If you find problems with links or with the articles themselves, please open an "Issue" as well.

# Fairness

## Lending & Credit approval

- [Gender Bias Complaints against Apple Card Signal a Dark Side to Fintech](https://hbswk.hbs.edu/item/gender-bias-complaints-against-apple-card-signal-a-dark-side-to-fintech)
- [Exploring Racial Discrimination in Mortgage Lending: A Call for Greater Transparency](https://listwithclever.com/real-estate-blog/racial-discrimination-in-mortgage-lending/)
- [DFS Issues Guidance to Life Insurers on Use of “External Data” in Underwriting Decisions](https://www.jdsupra.com/legalnews/dfs-issues-guidance-to-life-insurers-on-45997/)

## Hiring

- [Amazon scraps secret AI recruiting tool that showed bias against women](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G)
- [Automated Employment Discrimination](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3437631)
- [Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias](https://apo.org.au/node/210071) (Upturn report; also available as a [direct PDF](https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf))
- [All the Ways Hiring Algorithms Can Introduce Bias](https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias)
- [Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3408010)
- [Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude.](https://www.washingtonpost.com/technology/2018/11/16/wanted-perfect-babysitter-must-pass-ai-scan-respect-attitude/)
- [Job Screening Service Halts Facial Analysis of Applicants](https://www.wired.com/story/job-screening-service-halts-facial-analysis-applicants/)

## Employee evaluation

- [Houston Schools Must Face Teacher Evaluation Lawsuit](https://www.courthousenews.com/houston-schools-must-face-teacher-evaluation-lawsuit/)
- [How Amazon automatically tracks and fires warehouse workers for ‘productivity’](https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations)
- [Court Rules Deliveroo Used 'Discriminatory' Algorithm](https://www.vice.com/en/article/7k9e4e/court-rules-deliveroo-used-discriminatory-algorithm)

## Pre-trial risk assessment and criminal sentencing

- [Machine Bias](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)
- [How We Analyzed the COMPAS Recidivism Algorithm](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm)
- [GitHub repository for COMPAS analysis](https://github.com/propublica/compas-analysis) (used in the sketch below)
- [Can you make AI fairer than a judge? Play our courtroom algorithm game](https://www.technologyreview.com/s/613508/ai-fairer-than-judge-criminal-risk-assessment-algorithm/)
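
The core of ProPublica's finding can be reproduced in a few lines. A minimal sketch, assuming the `compas-scores-two-years.csv` file from the repository above (columns `race`, `score_text`, and `two_year_recid` come from that dataset):

```python
# A minimal sketch of ProPublica's disparity check on the COMPAS data,
# assuming compas-scores-two-years.csv from the repository linked above.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")
# ProPublica treated "Medium" and "High" COMPAS scores as high risk.
df["high_risk"] = df["score_text"].isin(["Medium", "High"])

for race, group in df.groupby("race"):
    no_recid = group[group["two_year_recid"] == 0]  # did not reoffend
    if len(no_recid):
        # False positive rate: flagged high risk despite not reoffending.
        print(f"{race}: FPR = {no_recid['high_risk'].mean():.2f} (n={len(no_recid)})")
```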

## Predictive Policing & Other Law Enforcement Use Cases

- [Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice](https://www.nyulawreview.org/online-features/dirty-data-bad-predictions-how-civil-rights-violations-impact-police-data-predictive-policing-systems-and-justice/)
- [Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots](https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28)
- [The Perpetual Line-Up - Unregulated police face recognition in America](https://www.perpetuallineup.org/)
- [Stuck in a Pattern: Early evidence on "predictive policing" and civil rights](https://www.upturn.org/reports/2016/stuck-in-a-pattern/)
- [Crime-prediction tool PredPol amplifies racially biased policing, study shows](https://www.mic.com/articles/156286/crime-prediction-tool-pred-pol-only-amplifies-racially-biased-policing-study-shows)
- [Criminal machine learning](https://callingbullshit.org/case_studies/case_study_criminal_machine_learning.html)
- [The Liar’s Walk - Detecting Deception with Gait and Gesture](http://gamma.cs.unc.edu/GAIT/files/Deception_LSTM.pdf)
- [Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use](https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/)
- [Return of physiognomy? Facial recognition study says it can identify criminals from looks alone](https://www.rt.com/news/368307-facial-recognition-criminal-china/)
- [Live facial recognition is tracking kids suspected of being criminals](https://www.technologyreview.com/2020/10/09/1009992/live-facial-recognition-is-tracking-kids-suspected-of-crime/)

## Admissions

- [British Medical Journal: A blot on the profession](https://www.bmj.com/content/296/6623/657)

## School Choice

- [Custom Software Helps Cities Manage School Choice](https://www.edweek.org/ew/articles/2013/12/04/13algorithm_ep.h33.html)

## Speech Detection

- [Oh dear... AI models used to flag hate speech online are, er, racist against black people](https://www.theregister.co.uk/2019/10/11/ai_black_people/)
- [The Risk of Racial Bias in Hate Speech Detection](https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf) (see the sketch after this list)
- [Toxicity and Tone Are Not The Same Thing: analyzing the new Google API on toxicity, PerspectiveAPI.](https://medium.com/@carolinesinders/toxicity-and-tone-are-not-the-same-thing-analyzing-the-new-google-api-on-toxicity-perspectiveapi-14abe4e728b3)
- [Voice Is the Next Big Platform, Unless You Have an Accent](https://www.wired.com/2017/03/voice-is-the-next-big-platform-unless-you-have-an-accent/)
- [Google’s speech recognition has a gender bias](https://makingnoiseandhearingthings.com/2016/07/12/googles-speech-recognition-has-a-gender-bias/)
- [Fair Speech report by Stanford Computational Policy Lab](https://fairspeech.stanford.edu/), also covered in [Speech recognition algorithms may also have racial bias](https://arstechnica.com/science/2020/03/speech-recognition-algorithms-may-also-have-racial-bias/)
- [Automated moderation tool from Google rates People of Color and gays as “toxic”](https://algorithmwatch.org/en/story/automated-moderation-perspective-bias/)
- [Someone made an AI that predicted gender from email addresses, usernames. It went about as well as expected](https://www.theregister.com/2020/07/30/genderify_shuts_down/)
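
The racial disparity reported by Sap et al. above reduces to a group comparison of classifier outputs. A minimal sketch, where the column names (`dialect`, `toxicity_score`) are hypothetical placeholders for whatever your scored dataset actually uses:

```python
# A minimal sketch of the false-flag comparison described in Sap et al.;
# the column names (dialect, toxicity_score) are hypothetical placeholders.
import pandas as pd

def flag_rates(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """Fraction of each dialect group flagged as toxic at the given threshold."""
    flagged = df["toxicity_score"] >= threshold
    return flagged.groupby(df["dialect"]).mean()

# Usage: flag_rates(pd.read_csv("scored_tweets.csv"))  # hypothetical file
```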

## Image Labelling & Face Recognition

- [Google Photos identified two black people as 'gorillas'](https://mashable.com/2015/07/01/google-photos-black-people-gorillas/)
- [When It Comes to Gorillas, Google Photos Remains Blind](https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/)
- [The viral selfie app ImageNet Roulette seemed fun – until it called me a racist slur](https://www.theguardian.com/technology/2019/sep/17/imagenet-roulette-asian-racist-slur-selfie)
- [Google Is Investigating Why it Trained Facial Recognition on 'Dark Skinned' Homeless People](https://www.vice.com/en_uk/article/43k7yd/google-is-investigating-why-it-trained-facial-recognition-on-dark-skinned-homeless-people)
- [Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification](http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf) (see the sketch after this list)
- [Machines Taught by Photos Learn a Sexist View of Women](https://www.wired.com/story/machines-taught-by-photos-learn-a-sexist-view-of-women/)
- [Tenants sounded the alarm on facial recognition in their buildings. Lawmakers are listening.](https://www.msn.com/en-us/news/politics/tenants-sounded-the-alarm-on-facial-recognition-in-their-buildings-lawmakers-are-listening/ar-BBYnaqB)
- [Google apologizes after its Vision AI produced racist results](https://algorithmwatch.org/en/story/google-vision-racism/)
- [When AI Sees a Man, It Thinks 'Official.' A Woman? 'Smile'](https://www.wired.com/story/ai-sees-man-thinks-official-woman-smile/)
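
Gender Shades' headline result is an error breakdown over intersectional subgroups, which is straightforward to compute for any classifier. A minimal sketch, with hypothetical column names (`gender`, `skin_type`, `label`, `prediction`):

```python
# A minimal sketch of the intersectional error analysis from Gender Shades;
# all column names here are hypothetical placeholders.
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame) -> pd.Series:
    """Misclassification rate per (gender, skin type) subgroup."""
    errors = df["prediction"] != df["label"]
    return errors.groupby([df["gender"], df["skin_type"]]).mean()
```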

## Public Benefits & Health

- [A health care algorithm affecting millions is biased against black patients](https://www.theverge.com/2019/10/24/20929337/care-algorithm-study-race-bias-health)
- [What happens when an algorithm cuts your health care](https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy)
- [China Knows How to Take Away Your Health Insurance](https://www.bloomberg.com/opinion/articles/2019-06-14/china-knows-how-to-take-away-your-health-insurance)
- [Foretelling the Future: A Critical Perspective on the Use of Predictive Analytics in Child Welfare](http://kirwaninstitute.osu.edu/wp-content/uploads/2017/05/ki-predictive-analytics.pdf)
- [There’s no quick fix to find racial bias in health care algorithms](https://www.theverge.com/2019/12/4/20995178/racial-bias-health-care-algorithms-cory-booker-senator-wyden)
- [Health algorithms discriminate against Black patients, also in Switzerland](https://algorithmwatch.ch/en/racial-health-bias/)

## Ads

- [Discrimination in Online Ad Delivery](https://arxiv.org/abs/1301.6822) (a sketch of the underlying test follows this list)
- [Probing the Dark Side of Google’s Ad-Targeting System](https://www.technologyreview.com/s/539021/probing-the-dark-side-of-googles-ad-targeting-system/)
- [Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says](https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html)
- [Facebook Job Ads Raise Concerns About Age Discrimination](https://www.outtengolden.com/facebook-job-ads-raise-concerns-about-age-discrimination-nyt)
- [Facebook Ads Can Still Discriminate Against Women and Older Workers, Despite a Civil Rights Settlement](https://www.propublica.org/article/facebook-ads-can-still-discriminate-against-women-and-older-workers-despite-a-civil-rights-settlement)
- [Women less likely to be shown ads for high-paid jobs on Google, study shows](https://www.theguardian.com/technology/2015/jul/08/women-less-likely-ads-high-paid-jobs-google-study)
- [Algorithms That “Don’t See Color”: Comparing Biases in Lookalike and Special Ad Audiences](https://sapiezynski.com/papers/sapiezynski2019algorithms.pdf)
- [Facebook is letting job advertisers target only men](https://www.propublica.org/article/facebook-is-letting-job-advertisers-target-only-men)
- [Facebook (Still) Letting Housing Advertisers Exclude Users by Race](https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin)
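
At its core, Sweeney's study above is a contingency-table test: counts of ad type by name group, checked for independence. A minimal sketch with scipy; the counts below are illustrative placeholders, not the paper's data:

```python
# A minimal sketch of the independence test behind Sweeney's ad-delivery
# study. Counts are illustrative placeholders, NOT the paper's data:
# rows = name groups, columns = (arrest-suggestive ads, neutral ads).
from scipy.stats import chi2_contingency

table = [[60, 40],   # placeholder counts, name group A
         [30, 70]]   # placeholder counts, name group B
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
```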

## Search

- [Algorithms of Oppression: How Search Engines reinforce racism](http://algorithmsofoppression.com/)
- [Bias already exists in search engine results, and it’s only going to get worse](https://www.technologyreview.com/s/610275/meet-the-woman-who-searches-out-search-engines-bias-against-women-and-minorities/)
- [Truth in pictures: What Google image searches tell us about inequality at work](https://www.diversityemployers.com/blog/2017/05/truth-in-pictures-what-google-image-searches-tell-us-about-inequality-at-work/)

## Translations

- [Google Translate might have a gender problem](https://mashable.com/2017/11/30/google-translate-sexism/)

## Jury Selection

- [The Big Data Jury](https://scholarship.law.nd.edu/ndlr/vol91/iss3/2/)

## Dating

- [Coffee Meets Bagel: The Online Dating Site That Helps You Weed Out The Creeps](https://www.laweekly.com/coffee-meets-bagel-the-online-dating-site-that-helps-you-weed-out-the-creeps/)
- [The Biases we feed to Tinder algorithms](https://www.diggitmagazine.com/articles/biases-we-feed-tinder-algorithms)
- [Redesign dating apps to lessen racial bias, study recommends](http://news.cornell.edu/stories/2018/09/redesign-dating-apps-lessen-racial-bias-study-recommends)

## Word Embeddings

Bias in word embeddings can propagate into many of the application areas above, since downstream systems build on them.
- [Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
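
The paper's title comes from a vector-arithmetic analogy probe that is easy to reproduce. A minimal sketch using gensim, assuming a local copy of the pretrained GoogleNews word2vec vectors used in the paper:

```python
# A minimal sketch of the analogy probe from the paper above, assuming
# the pretrained GoogleNews word2vec vectors are available locally.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)
# "man : computer_programmer :: woman : ?" -- the paper reports stereotyped
# completions such as "homemaker" near the top of this list.
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=5))
```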

## Gerrymandering

- [When drawing a line is hard](https://medium.com/equal-future/when-drawing-a-line-is-hard-8d92d30c9044)

## Recommender systems

- [Why is TikTok creating filter bubbles based on your race?](https://www.wired.co.uk/article/tiktok-filter-bubbles)

## Picking areas for improved service

- [Amazon Doesn’t Consider the Race of Its Customers. Should It?](https://www.bloomberg.com/graphics/2016-amazon-same-day/)

# Safety

## Self-driving cars

- [Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers](https://www.theregister.co.uk/2019/11/06/uber_self_driving_car_death/)
- [Franken-algorithms: The Deadly Consequences of Unpredictable Code](https://getpocket.com/explore/item/franken-algorithms-the-deadly-consequences-of-unpredictable-code)

## Weaponized AI

- [Google employee protest: Now Google backs off Pentagon drone AI project](https://www.zdnet.com/article/google-employee-protests-now-google-backs-off-pentagon-drone-ai-project/)
- [Google Wants to Do Business With the Military—Many of Its Employees Don’t](https://www.bloomberg.com/features/2019-google-military-contract-dilemma/)

## Health

- Model interpretability in medicine (see the sketch after this list):
  - [Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission](http://people.dbmi.columbia.edu/noemie/papers/15kdd.pdf) shows the importance of model interpretability for such critical decisions.
  - [Rich Caruana: Friends Don't Let Friends Release Black Box Models in Medicine](https://www.youtube.com/watch?v=iyGh46NA8tk)
- [IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close](https://www.statnews.com/2017/09/05/watson-ibm-cancer/)
- [International evaluation of an AI system for breast cancer screening](https://deepmind.com/research/publications/International-evaluation-of-an-artificial-intelligence-system-to-identify-breast-cancer-in-screening-mammography) - [This thread examines the issues with the problem setting.](https://twitter.com/VPrasadMDMPH/status/1212840987363442689?s=20)
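
For readers who want to try the intelligible models Caruana describes, the open-source InterpretML library from Caruana and collaborators ships glass-box explainable boosting machines in the same model family. A minimal sketch on synthetic stand-in data:

```python
# A minimal sketch of a glass-box model with InterpretML; the synthetic
# data stands in for real clinical features and labels.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)
# Every feature's learned shape function can be inspected directly; the
# "asthma lowers pneumonia risk" artifact discussed in the talk is exactly
# the kind of pattern these per-feature plots expose.
show(ebm.explain_global())
```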

# Privacy

## Machine Learning-based privacy attacks

- [Privacy Attacks on Machine Learning Models](https://www.infoq.com/articles/privacy-attacks-machine-learning-models/)
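
One attack family covered in the article above, membership inference, fits in a few lines: models tend to assign lower loss to their training examples than to unseen ones, so thresholding per-example loss leaks who was in the training set. A minimal sketch of that loss-threshold attack; all names are hypothetical:

```python
# A minimal sketch of a loss-threshold membership inference attack: guess
# that an example was a training member if the model's loss on it is low.
# All names here are hypothetical.
import numpy as np

def infer_membership(per_example_loss: np.ndarray, threshold: float) -> np.ndarray:
    """Guess True (training member) where the loss is suspiciously low."""
    return per_example_loss < threshold

# A common threshold is the model's average training loss; the attack is
# meaningful to the extent its accuracy beats the 50% random baseline.
```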

## Lending

- [The new lending game, post-demonetisation](https://tech.economictimes.indiatimes.com/news/technology/the-new-lending-game-post-demonetisation/56367457)
- [Perpetual Debt in the Silicon Savannah](http://bostonreview.net/class-inequality-global-justice/kevin-p-donovan-emma-park-perpetual-debt-silicon-savannah)

## Work

- [Woman fired after disabling work app that tracked her movements 24/7](https://www.theverge.com/2015/5/13/8597081/worker-gps-fired-myrna-arias-xora)
- [Limitless Worker Surveillance](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2746211)

## Prison tech

- [Prison tech company is questioned for retaining ‘voice prints’ of people presumed innocent](https://theappeal.org/jails-across-the-u-s-are-extracting-the-voice-prints-of-people-presumed-innocent/)

## Location data

- [Twelve Million Phones, One Dataset, Zero Privacy](https://www.nytimes.com/interactive/2019/12/19/opinion/location-tracking-cell-phone.html) shines a light on data privacy (or the lack thereof). That same data may be used for ML as well.
- [Tenants sounded the alarm on facial recognition in their buildings. Lawmakers are listening.](https://www.msn.com/en-us/news/politics/tenants-sounded-the-alarm-on-facial-recognition-in-their-buildings-lawmakers-are-listening/ar-BBYnaqB)
- [Face for sale: Leaks and lawsuits blight Russia facial recognition](https://www.reuters.com/article/us-russia-privacy-lawsuit-feature-trfn/face-for-sale-leaks-and-lawsuits-blight-russia-facial-recognition-idUSKBN27P10U)

## Social media & dating

- [OkCupid Study Reveals the Perils of Big-Data Science](https://www.wired.com/2016/05/okcupid-study-reveals-perils-big-data-science/)
- [“We Are the Product”: Public Reactions to Online Data Sharing and Privacy Controversies in the Media](https://cmci.colorado.edu/~cafi5706/CHI2018_FieslerHallinan.pdf)

## Basic anonymization as an insufficient measure

- [“Anonymized” data really isn’t—and here’s why not](https://arstechnica.com/tech-policy/2009/09/your-secrets-live-online-in-databases-of-ruin/)
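
The article's point traces back to Sweeney's observation that ZIP code, birth date, and sex pin down most Americans. Any "anonymized" table can be checked the same way, by counting records that are unique on such quasi-identifiers. A minimal sketch, with hypothetical column names:

```python
# A minimal sketch of a re-identification risk check: the share of records
# uniquely pinned down by quasi-identifiers (column names are hypothetical).
import pandas as pd

def unique_fraction(df: pd.DataFrame, quasi_ids=("zip", "birth_date", "sex")) -> float:
    """Fraction of rows uniquely identified by the quasi-identifier combination."""
    sizes = df.groupby(list(quasi_ids)).size()
    return float((sizes == 1).sum() / len(df))
```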

## Health

- [Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates](https://www.npr.org/sections/health-shots/2018/07/17/629441555/health-insurers-are-vacuuming-up-details-about-you-and-it-could-raise-your-rates)
- [How Your Medical Data Fuels a Hidden Multi-Billion Dollar Industry](https://time.com/4588104/medical-data-industry/)
- [23andMe's Pharma Deals Have Been the Plan All Along](https://www.wired.com/story/23andme-glaxosmithkline-pharma-deal/)
- [If You Want Life Insurance, Think Twice Before Getting A Genetic Test](https://www.fastcompany.com/3055710/if-you-want-life-insurance-think-twice-before-getting-genetic-testing)
- [Medical Start-up Invited Millions Of Patients To Write Reviews They May Not Realize Are Public. Some Are Explicit.](https://www.forbes.com/sites/kashmirhill/2013/10/21/practice-fusion-patient-privacy-explicit-reviews/#2918de354ae3)
- [Help Desk: Can your medical records become marketing? We investigate a reader’s suspicious ‘patient portal.’](https://www.washingtonpost.com/technology/2019/10/22/help-desk-can-your-medical-records-become-marketing-we-investigate-readers-suspicious-patient-portal/)
- [Is your pregnancy app sharing your intimate data with your boss?](https://www.washingtonpost.com/technology/2019/04/10/tracking-your-pregnancy-an-app-may-be-more-public-than-you-think/)
- [Data Crisis: Who Owns Your Medical Records?](https://www.sandiegomagazine.com/San-Diego-Magazine/October-2016/Top-Doctors-2016-Innovation-in-Health-and-Medicine/Data-Crisis-Who-Owns-Your-Medical-Records/)
- [This Bluetooth Tampon Is the Smartest Thing You Can Put In Your Vagina](https://gizmodo.com/this-bluetooth-tampon-is-the-smartest-thing-you-can-put-1777044090) doesn't mention the privacy concerns of such a device; [this Twitter comment fills that gap](https://twitter.com/DrRanjanaDas/status/1213940445245509671?s=20).

## Face Recognition

- [Clearview AI: We Are ‘Working to Acquire All U.S. Mugshots’ From Past 15 Years](https://onezero.medium.com/clearview-ai-we-are-working-to-acquire-all-u-s-mugshots-from-past-15-years-645d92319f33)
- [Face for sale: Leaks and lawsuits blight Russia facial recognition](https://www.reuters.com/article/us-russia-privacy-lawsuit-feature-trfn/face-for-sale-leaks-and-lawsuits-blight-russia-facial-recognition-idUSKBN27P10U)

## Supply of goods

- [Grocers Stopped Stockpiling Food. Then Came Coronavirus.](https://www.wsj.com/articles/grocers-stopped-stockpiling-food-then-came-coronavirus-11584982605)

# Anti-Money Laundering

- [Trusting Machine Learning in Anti-Money Laundering: A Risk-Based Approach](http://www.caspian.co.uk/rba/RBA.pdf)

# General resources about Responsible AI

Many of the books and articles in this area cover a wide range of topics. Below is a list of a few of them, sorted alphabetically by title:

- [A Hippocratic Oath for artificial intelligence practitioners](https://techcrunch.com/2018/03/14/a-hippocratic-oath-for-artificial-intelligence-practitioners/) by [Oren Etzioni](https://allenai.org/team/orene/)
- [Algorithms, Correcting Biases](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3300171) by [Cass Sunstein](https://hls.harvard.edu/faculty/directory/10871/Sunstein)
- [Algorithms of Oppression - How Search Engines Reinforce Racism](http://algorithmsofoppression.com/) by [Safiya Umoja Noble](https://safiyaunoble.com/)
- [Artificial Unintelligence - How Computers Misunderstand the World](https://mitpress.mit.edu/books/artificial-unintelligence) by [Meredith Broussard](https://merbroussard.github.io/)
- [Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor](https://us.macmillan.com/books/9781250074317) by [Virginia Eubanks](https://virginia-eubanks.com/)
- [Big Data's Disparate Impact](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899) by [Solon Barocas](http://solon.barocas.org/) and [Andrew D. Selbst](https://andrewselbst.com/)
- [Datasheets for Datasets](https://arxiv.org/abs/1803.09010) by [Timnit Gebru](http://ai.stanford.edu/~tgebru/) et al.
- [Design Justice](https://design-justice.pubpub.org/) by [Sasha Costanza-Chock](https://www.schock.cc/)
- [Fairness and Abstraction in Sociotechnical Systems](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3265913) by [Andrew D. Selbst](https://andrewselbst.com/), [danah boyd](https://www.danah.org/), [Sorelle Friedler](http://sorelle.friedler.net/), [Suresh Venkatasubramanian](http://www.cs.utah.edu/~suresh/), [Janet Vertesi](https://janet.vertesi.com/)
- [Fairness and machine learning - Limitations and Opportunities](https://fairmlbook.org/) by [Solon Barocas](http://solon.barocas.org/), [Moritz Hardt](https://mrtz.org/), [Arvind Narayanan](http://randomwalker.info/)
- [How I'm fighting bias in algorithms](https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?language=en) by [Joy Buolamwini](https://www.poetofcode.com/)
- [Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/) by [Christoph Molnar](https://christophm.github.io/)
- [Race after Technology](https://www.ruhabenjamin.com/race-after-technology) by [Ruha Benjamin](https://www.ruhabenjamin.com)
- [Tech Ethics Curriculum](https://docs.google.com/spreadsheets/d/1jWIrA8jHz5fYAW4h9CkUD8gKS5V98PDJDymRf8d9vKI/edit#gid=1174187227) by [Casey Fiesler](https://caseyfiesler.com/)
- [The Measure and Mismeasure of Fairness: A Critical Review of Machine Learning](https://5harad.com/papers/fair-ml.pdf) by [Sam Corbett-Davies](https://samcorbettdavies.com/) and [Sharad Goel](https://5harad.com/)
- [Weapons of Math Destruction](https://weaponsofmathdestructionbook.com/) by [Cathy O'Neil](https://mathbabe.org/)