# Failed Machine Learning (FML)
## High-profile real-world examples of failed machine learning projects

> ### “Success is not final, failure is not fatal. It is the courage to continue that counts.” - Winston Churchill

If you are looking for examples of how ML can fail despite all its incredible potential, you have come to the right place. Beyond the wonderful success stories of applied machine learning, here is a list of failed projects which we can learn a lot from.


[![Contributions Welcome!](https://img.shields.io/badge/Contributions-Welcome-brightgreen?style=for-the-badge)](./CONTRIBUTING.md)


## Contents
1. [Classic Machine Learning](#classic-machine-learning)
2. [Computer Vision](#computer-vision)
3. [Forecasting](#forecasting)
4. [Image Generation](#image-generation)
5. [Natural Language Processing](#natural-language-processing)
6. [Recommendation Systems](#recommendation-systems)

___

## Classic Machine Learning
| Title | Description |
| --- | --- |
| [Amazon AI Recruitment System](https://finance.yahoo.com/news/amazon-reportedly-killed-ai-recruitment-100042269.html) | AI-powered automated recruitment system canceled after evidence of discrimination against female candidates |
| [Genderify - Gender identification tool](https://syncedreview.com/2020/07/30/ai-powered-genderify-platform-shut-down-after-bias-based-backlash/) | AI-powered tool designed to identify gender based on fields like name and email address was shut down due to built-in biases and inaccuracies |
| [Leakage and the Reproducibility Crisis in ML-based Science](https://arxiv.org/pdf/2207.07048.pdf) | A Princeton University team surveyed 20 reviews across 17 scientific fields, which collectively found significant errors (e.g., data leakage, no train-test split) in 329 papers applying ML-based science (see the leakage sketch after this table) |
| [COVID-19 Diagnosis and Triage Models](https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/) | Hundreds of predictive models were developed to diagnose or triage COVID-19 patients faster, but ultimately none of them were fit for clinical use, and some were potentially harmful |
| [COMPAS Recidivism Algorithm](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) | ProPublica's analysis of Florida's COMPAS recidivism risk system found evidence of racial bias |
| [Pennsylvania Child Welfare Screening Tool](https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1) | The predictive algorithm (which helps identify which families are to be investigated by social workers for child abuse and neglect) flagged a disproportionate number of Black children for 'mandatory' neglect investigations. |
| [Oregon Child Welfare Screening Tool](https://apnews.com/article/politics-technology-pennsylvania-child-abuse-1ea160dc5c2c203fdab456e3c2d97930) | Similar to the predictive tool in Pennsylvania, Oregon's child welfare algorithm was halted a month after the Pennsylvania report |
| [U.S. Healthcare System Health Risk Prediction](https://www.science.org/doi/full/10.1126/science.aax2342) | A widely used algorithm for predicting healthcare needs exhibited racial bias: at a given risk score, Black patients were considerably sicker than white patients |
| [Apple Card Credit Card](https://www.theverge.com/2019/11/11/20958953/apple-credit-card-gender-discrimination-algorithms-black-box-investigation) | Apple's credit card (created in partnership with Goldman Sachs) was investigated by financial regulators after customers complained that its lending algorithms discriminated against women; in one case, a male customer's credit line was 20 times higher than the one offered to his spouse |
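
To make the leakage entry above concrete, here is a minimal sketch (hypothetical data; scikit-learn and NumPy assumed) contrasting the pattern the Princeton survey catalogues most often, preprocessing fitted before the train-test split, with the correct split-first pipeline:

```python
# Minimal sketch of train-test leakage (hypothetical data).
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)

# LEAKY: the scaler is fitted on ALL rows, so statistics from the
# test set leak into the features the model trains on.
X_leaky = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
LogisticRegression().fit(X_tr, y_tr)

# CORRECT: split first, then fit every preprocessing step on the
# training rows only (a Pipeline handles this automatically).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```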

___

## Computer Vision
| Title | Description |
| --- | --- |
| [Inverness Automated Football Camera System](https://www.theverge.com/tldr/2020/11/3/21547392/ai-camera-operator-football-bald-head-soccer-mistakes) | AI camera football-tracking technology for live streaming repeatedly mistook a linesman's bald head for the ball itself |
| [Amazon Rekognition for US Congressmen](https://www.cnet.com/news/privacy/amazon-facial-recognition-thinks-28-congressmen-look-like-known-criminals-at-default-settings/) | Amazon's facial recognition technology (Rekognition) falsely matched 28 congresspeople with mugshots of criminals, while also revealing racial bias in the algorithm |
| [Amazon Rekognition for law enforcement](https://abcnews.go.com/Technology/wireStory/researchers-amazon-face-detection-technology-shows-bias-60630589?cid=social_twitter_abcn) | Amazon's facial recognition technology (Rekognition) misidentified women as men, particularly those with darker skin |
| [Zhejiang traffic facial recognition system](https://www.theverge.com/2018/11/22/18107885/china-facial-recognition-mistaken-jaywalker) | Traffic camera system (designed to capture traffic offenses) mistook a face on the side of a bus for a jaywalker |
| [Kneron tricking facial recognition terminals](https://fortune.com/2019/12/12/airport-bank-facial-recognition-systems-fooled/) | The team at Kneron used high-quality 3D masks to deceive AliPay and WeChat payment systems into authorizing purchases |
| [Twitter smart cropping tool](https://gizmodo.com/twitters-scrambling-to-figure-out-why-its-photo-preview-1845123415) | Twitter's automatic cropping tool for photo previews displayed evident signs of racial bias |
| [Face Depixelizer tool](https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias) | Algorithm (based on StyleGAN) designed to reconstruct faces from pixelated images showed signs of racial bias, with outputs skewed towards white faces |
| [Google Photos tagging](https://www.bbc.com/news/technology-33347866) | The automatic photo tagging capability in Google Photos mistakenly labeled black people as gorillas |
| [GenderShades evaluation of gender classification products](https://gendershades.org/overview.html) | GenderShades' research revealed that Microsoft and IBM's face-analysis services for identifying the gender of people in photos frequently erred when analyzing images of women with dark skin (see the disaggregated-evaluation sketch after this table) |
| [New Jersey Police Facial Recognition](https://edition.cnn.com/2021/04/29/tech/nijeer-parks-facial-recognition-police-arrest/index.html) | A false facial recognition match by New Jersey police landed an innocent black man (Nijeer Parks) in jail even though he was 30 miles away from the crime |
| [Tesla's dilemma between a horse cart and a truck](https://www.thesun.co.uk/motors/19537820/tesla-driver-gets-stuck-behind-a-horse-and-cart/) | Tesla's visualization system mistook a horse-drawn carriage for a truck with a man walking behind it |
| [Google's AI for Diabetic Retinopathy Detection](https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/) | The retina scanning tool fared much worse in real-life settings than in controlled experiments, with issues such as rejected scans (from poor scan image quality) and delays from intermittent internet connectivity when uploading images to the cloud for processing |
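
A common thread in the rows above is that a respectable aggregate accuracy hid large per-group error gaps until someone evaluated the models subgroup by subgroup, as GenderShades did. Here is a minimal sketch of that evaluation style (toy predictions and group labels, not GenderShades' actual data or protocol):

```python
# Disaggregated evaluation: report accuracy per subgroup rather than
# one aggregate number. Toy data; not the GenderShades benchmark.
from collections import defaultdict

# (true_label, predicted_label, subgroup) tuples, illustrative only
records = [
    ("F", "F", "darker-skinned women"), ("F", "M", "darker-skinned women"),
    ("F", "M", "darker-skinned women"), ("F", "F", "lighter-skinned women"),
    ("F", "F", "lighter-skinned women"), ("M", "M", "darker-skinned men"),
    ("M", "M", "lighter-skinned men"), ("M", "M", "lighter-skinned men"),
]

correct, total = defaultdict(int), defaultdict(int)
for true, pred, group in records:
    total[group] += 1
    correct[group] += int(true == pred)

for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%} "
          f"accuracy over {total[group]} samples")
```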

___

## Forecasting
| Title | Description |
| --- | --- |
| [Google Flu Trends](https://hbr.org/2014/03/google-flu-trends-failure-shows-good-data-big-data) | Model predicting flu prevalence from Google search volumes persistently overestimated actual flu cases (see the backtesting sketch after this table) |
| [Zillow iBuying algorithms](https://www.wired.com/story/zillow-ibuyer-real-estate/) | Significant losses in Zillow's home-flipping business due to inaccurate (overestimated) prices from property valuation models |
| [Tyndaris Robot Hedge Fund](https://futurism.com/investing-lawsuit-ai-trades-cost-millions) | AI-powered automated trading system controlled by a supercomputer named K1 resulted in big investment losses, culminating in a lawsuit |
| [Sentient Investment AI Hedge Fund](https://www.bloomberg.com/news/articles/2018-09-07/ai-hedge-fund-sentient-is-said-to-shut-after-less-than-two-years) | The once high-flying AI-powered fund at Sentient Investment Management failed to make money and was liquidated in less than two years |
| [JP Morgan's Deep Learning Model for FX Algos](https://www.risk.net/derivatives/7958022/jp-morgan-pulls-plug-on-deep-learning-model-for-fx-algos) | JP Morgan has phased out a deep neural network for foreign exchange algorithmic execution, citing issues with data interpretation and the complexity involved. |
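
A recurring theme in these forecasting failures is models that looked accurate in validation but degraded on genuinely later, unseen data. One cheap guard is a chronological backtest against a naive seasonal baseline, holding out the most recent window rather than a shuffled split. A minimal sketch (synthetic weekly series; all numbers illustrative):

```python
# Chronological backtest: score a forecaster only on data that
# postdates its training window. Synthetic weekly series; NumPy only.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
series = 10 + 0.05 * t + np.sin(2 * np.pi * t / 52) + rng.normal(0, 0.3, 200)

cutoff = 156  # train on the first 156 weeks, test on the rest

# Naive seasonal baseline: predict each week with the value observed
# 52 weeks earlier. Any real model should have to beat this.
preds = series[cutoff - 52 : len(series) - 52]
test = series[cutoff:]
rmse = np.sqrt(np.mean((preds - test) ** 2))
print(f"out-of-sample RMSE of the seasonal-naive baseline: {rmse:.3f}")
```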

___

## Image Generation
| Title | Description |
| --- | --- |
| [Playground AI facial generation](https://www.bostonglobe.com/2023/07/19/business/an-mit-student-asked-ai-make-her-headshot-more-professional-it-gave-her-lighter-skin-blue-eyes/) | When asked to turn an image of an Asian headshot into a professional LinkedIn profile photo, the AI image editor generated an output with features that made it look Caucasian instead |
| [Stable Diffusion Text-to-Image Model](https://www.bloomberg.com/graphics/2023-generative-ai-bias/) | In an experiment run by Bloomberg, it was found that Stable Diffusion (text-to-image model) exhibited racial and gender bias in the thousands of generated images related to job titles and crime |
| [Historical Inaccuracies in Gemini Image Generation](https://www.theverge.com/2024/2/22/24079876/google-gemini-ai-photos-people-pause) | Google's Gemini image generation feature was found to generate historically inaccurate depictions in its attempt to subvert gender and racial stereotypes, such as returning non-white AI-generated people when prompted to depict the USA's founding fathers |

___

## Natural Language Processing
| Title | Description |
| --- | --- |
| [Microsoft Tay Chatbot](https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist) | Chatbot that posted inflammatory and offensive tweets through its Twitter account |
| [Nabla Chatbot](https://www.theregister.com/2020/10/28/gpt3_medical_chatbot_experiment) | Experimental chatbot (for medical advice) using a cloud-hosted instance of GPT-3 advised a mock patient to commit suicide |
| [Facebook Negotiation Chatbots](https://www.techtimes.com/articles/212124/20170730/facebook-ai-invents-language-that-humans-cant-understand-system-shut-down-before-it-evolves-into-skynet.htm) | The AI system was shut down after the chatbots stopped using English in their negotiations and started using a language that they created by themselves |
| [OpenAI GPT-3 Chatbot Samantha](https://futurism.com/openai-dead-fiancee) | A GPT-3 chatbot fine-tuned by indie game developer Jason Rohrer to emulate his dead fiancée was shut down by OpenAI after Jason refused their request to insert an automated monitoring tool amidst concerns of the chatbot being racist or overtly sexual |
| [Amazon Alexa plays porn](https://nypost.com/2016/12/30/toddler-asks-amazons-alexa-to-play-song-but-gets-porn-instead/) | Amazon's voice-activated digital assistant unleashed a torrent of raunchy language after a toddler asked it to play a children’s song. |
| [Galactica - Meta's Large Language Model](https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/) | A problem with Galactica was that it could not distinguish truth from falsehood, a basic requirement for a language model designed to generate scientific text. It was found to make up fake papers (sometimes attributing them to real authors), and generated articles about the history of bears in space as readily as ones about protein complexes. |
| [Energy Firm in Voice Mimicry Fraud](https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402) | Cybercriminals used AI-based software to impersonate the voice of a CEO to demand a fraudulent money transfer as part of the voice-spoofing attack |
| [MOH chatbot dispenses safe sex advice when asked Covid-19 questions](https://www.channelnewsasia.com/singapore/moh-ask-jamie-covid-19-query-social-media-2222571) | The 'Ask Jamie' chatbot by the Singapore Ministry of Health (MOH) was temporarily disabled after it provided misaligned replies around safe sex when asked about managing positive COVID-19 results |
| [Google's Bard Chatbot Demo](https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo) | In its first public demo advertisement, Bard made a factual error about which telescope first took pictures of a planet outside the Earth's solar system |
| [ChatGPT Categories of Failures](https://paperswithcode.com/paper/a-categorical-archive-of-chatgpt-failures) | An analysis of the ten categories of failures seen in ChatGPT so far, including reasoning, factual errors, math, coding, and bias. |
| [TikTokers roasting McDonald's hilarious drive-thru AI order fails](https://www.businessinsider.com/tiktokers-show-failures-with-mcdonalds-drive-thru-ai-robots-2023-2) | Samples in which a deployed drive-thru voice assistant fails to get orders right, leading to brand and reputation damage for McDonald's |
| [Bing Chatbot's Unhinged Emotional Behavior](https://www.theverge.com/2023/2/15/23599072/microsoft-ai-bing-personality-conversations-spy-employees-webcams) | In certain conversations, Bing's chatbot was found to reply with argumentative and emotional responses |
| [Bing's AI quotes COVID disinformation sourced from ChatGPT](https://techcrunch.com/2023/02/08/ai-is-eating-itself-bings-ai-quotes-covid-disinfo-sourced-from-chatgpt/) | Bing's response to a query on COVID-19 anti-vaccine advocacy was inaccurate and based on false information from unreliable sources |
| [AI-generated 'Seinfeld' suspended on Twitch for transphobic jokes](https://techcrunch.com/2023/02/06/ai-generated-seinfeld-suspended-on-twitch-for-ai-generated-transphobic-jokes/) | A mistake with the AI’s content filter resulted in the character 'Larry' delivering a transphobic standup routine. |
| [ChatGPT cites bogus legal cases](https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html) | A lawyer used OpenAI's popular chatbot ChatGPT to "supplement" his own research but was provided with entirely fabricated prior cases that do not exist |
| [Air Canada chatbot gives erroneous information](https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/?sh=642b9f35696f) | Air Canada's AI-powered chatbot hallucinated an answer inconsistent with the airline's policy on bereavement fares |
| [AI bot performed illegal insider trading and lied about its actions](https://www.businessinsider.com/ai-bot-gpt-4-financial-insider-trading-lied-2023-11) | An AI investment chatbot called Alpha (built on OpenAI's GPT-4 by Apollo Research) demonstrated that it was capable of making illegal financial trades and lying about its actions |

___

## Recommendation Systems
| Title | Description |
| --- | --- |
| [IBM's Watson Health](https://www.theverge.com/2018/7/26/17619382/ibms-watson-cancer-ai-healthcare-science) | IBM’s Watson allegedly provided numerous unsafe and incorrect recommendations for treating cancer patients |
| [Netflix - $1 Million Challenge](https://netflixtechblog.com/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429) | The recommender system that won the $1 million challenge improved the proposed baseline RMSE by 8.43%. However, this performance gain did not seem to justify the engineering effort needed to bring it into a production environment (see the worked example below). |
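
For context on the 8.43% figure, the Netflix Prize scored submissions by relative RMSE improvement over Netflix's own Cinematch baseline. A worked example (the RMSE values below are the commonly reported contest figures, included here for illustration):

```python
# Percent improvement as the Netflix Prize measured it:
# relative reduction in RMSE versus the Cinematch baseline.
baseline_rmse = 0.9514  # commonly reported Cinematch quiz-set RMSE
improved_rmse = 0.8712  # commonly reported 2007 Progress Prize RMSE

improvement = (baseline_rmse - improved_rmse) / baseline_rmse
print(f"relative RMSE improvement: {improvement:.2%}")  # -> 8.43%
```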