
# AI and ethical challenges

- Arcada - University of Applied Sciences in Helsinki (Finland)
- Professor: Sandra Becker
- Contact: [Via Email](mailto:[email protected])
- Term: Fall 2022
- Lectures: Tuesday & Friday, 13:00-16:00
- [Course material](https://observablehq.com/collection/@sandraviz/ai)

Arcada is a multi-professional University of Applied Sciences in Helsinki (Finland) with the philosophy to work across disciplines and advance culture and knowledge. The curriculum of each degree programme at Arcada is composed of modules. The modules define the competencies that you need to attain in order to graduate.

## Competency aims

*"The first generation of graduate students is matriculating who are focused explicitly on the ethics and safety of machine learning. The alignment problem's first responders have arrived at the scene."*
Brian Christian

The aim of this course is to provide the students with knowledge about thoughtful, responsible, and ethical machine learning practice, which is a fundamental precondition for the trustworthy development of AI. Recommendation systems, which simultaneously shape and predict the future in nearly all parts of human life, will be the main use case during the course.

## Learning outcomes
At the end of the course the student is expected to understand the techniques behind automated decision-making systems in common ML practice, combined with the skill to judge whether a decision seems reasonable or not. The students will have detailed and holistic knowledge of the concerns and issues regarding algorithmic decision making and how to address them.

## Study activities
- Lectures
- Individual studies
- Practical exercises (assignments)
- Final project
- Q+A sessions

All lectures are 3 hours long, with one 15-minute break.

## Assessment

- First assignment (15p): decision tree results using a normal and a biased dataset (Python)
- Second assignment (15p): recommender system example (Python)
- Final project (70p): to pass the course, the student should show during the project work why and how automated decision making is designed and which potential issues and challenges may come with it. The use case will be a recommender system.

## Grading
- 50 - 60 = 1 (pass)
- 61 - 70 = 2
- 71 - 80 = 3
- 81 - 90 = 4
- 91 - 100 = 5

## Syllabus

### [Limitations of AI](https://observablehq.com/@sandraviz/limitations-of-ai?collection=@sandraviz/ai-ethics)

*"Premature optimization is the root of all evil."*

Products and services that rely on machine learning (computer programs that constantly absorb new data and adapt their decisions in response) don't always make ethical or accurate choices.

The first part of the course is about the challenges and limitations of AI. No prediction will ever be 100% correct or certain, which is acceptable in some cases and unacceptable in others. The first two lectures cover how to become aware of these "errors" and how to address them, together with a discussion of their application.

- Part 1 - 06/09/22
- Part 2 - 09/09/22
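
The claim above, that no prediction will ever be fully certain, can be made concrete with a small simulation. The sketch below (plain Python with hypothetical numbers, not course material) shows that when observed labels carry 10% random noise, even a predictor that perfectly recovers the underlying signal still misclassifies about 10% of cases, the irreducible error floor.

```python
import random

random.seed(42)

def simulate_error_floor(n=10_000, noise=0.1):
    """Even a perfect model cannot beat the label-noise floor.

    Each example's true class is fully determined by its feature,
    but the observed label is flipped with probability `noise`.
    """
    errors = 0
    for _ in range(n):
        feature = random.random()
        true_class = feature > 0.5            # the clean signal
        if random.random() < noise:           # measurement noise
            observed = not true_class
        else:
            observed = true_class
        prediction = feature > 0.5            # a perfect predictor of the signal
        errors += (prediction != observed)
    return errors / n

rate = simulate_error_floor()
print(f"error rate of a perfect predictor: {rate:.3f}")  # close to the 10% noise floor
```

No amount of extra training data or model capacity removes this residual error; it can only be discussed and managed, which is exactly what these lectures address.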

### [Algorithmic Bias](https://observablehq.com/@sandraviz/algorithmic-biases?collection=@sandraviz/ai-ethics)

*"The road to a civilized world goes through the dark woods of biases."*

Algorithmic bias is difficult to define. It is often understood as a systematic and repeatable error in a computational system that produces unfair or wrongful results from data processing.

In this second part of the course, we discuss the different types of algorithmic bias and their consequences, especially regarding sexism and racism.

- Part 1 - 13/09/22
- Part 2 - 16/09/22
- Assignment: Decision tree with ethical challenges (python)
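
As a warm-up for the assignment above, the following minimal sketch (plain Python; the hiring dataset and all names are hypothetical, not the course material) shows how a one-level decision tree trained on historically biased labels can learn to split on a protected attribute instead of on skill, reproducing the bias in the data.

```python
from collections import Counter

def best_stump(rows, label_key):
    """Pick the single feature whose value best predicts the label:
    a one-level decision tree, chosen by classification accuracy."""
    features = [k for k in rows[0] if k != label_key]
    best = None
    for f in features:
        # count labels per feature value, then credit the majority label
        by_value = {}
        for r in rows:
            by_value.setdefault(r[f], Counter())[r[label_key]] += 1
        correct = sum(c.most_common(1)[0][1] for c in by_value.values())
        if best is None or correct > best[1]:
            best = (f, correct)
    return best[0]

# Hypothetical hiring data in which past decisions were biased:
# "hired" tracks gender more closely than the skill indicator.
biased = [
    {"gender": "m", "skilled": True,  "hired": True},
    {"gender": "m", "skilled": False, "hired": True},
    {"gender": "m", "skilled": True,  "hired": True},
    {"gender": "f", "skilled": True,  "hired": False},
    {"gender": "f", "skilled": False, "hired": False},
    {"gender": "f", "skilled": True,  "hired": False},
]
print(best_stump(biased, "hired"))  # the stump splits on "gender", not "skilled"
```

The tree is not "malicious"; it simply optimizes accuracy on labels that already encode discrimination, which is why auditing the training data matters as much as auditing the model.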

### [Recommender Systems](https://observablehq.com/@sandraviz/recommender-systems?collection=@sandraviz/ai-ethics)

*"Bluntly, technology is taking over advice."*

Recommender systems influence the videos we watch, the books we read, the music we hear, the games we play, the investments we make, the friends we meet, the food we eat, the news we monitor, the products we buy, the software we code, the photos we share, the events we attend, the people we date, and the jobs we apply for; in the end, they increasingly shape the life we live.

As humans follow algorithmic recommendations more and more, their impact, including the possibly negative one, needs discussion; this happens in the third part of the course.

- Part 1 - 20/09/22
- Part 2 - 23/09/22
- Assignment: Simple recommender system (python)
- Part 3 - 27/09/22
- Part 4 - 30/09/22
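
As a starting point for the assignment above, here is a minimal user-based collaborative-filtering sketch in plain Python. The rating data, user names, and artist names are hypothetical: unseen items are scored by similarity-weighted ratings from other users.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(ratings, user, k=1):
    """Score each item the user has not rated by summing the other
    users' ratings, weighted by how similar those users are."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical listening data: artist -> rating (1-5) per user
ratings = {
    "ana":  {"abba": 5, "bjork": 4, "cher": 1},
    "ben":  {"abba": 5, "bjork": 5, "dylan": 4},
    "carl": {"cher": 5, "dylan": 1},
}
print(recommend(ratings, "ana"))  # ben is most similar to ana, so his "dylan" wins
```

Even this toy version surfaces the ethical questions of the lectures: users only see what similar users already liked, which is how filter bubbles and popularity bias emerge.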

### [Final project](https://observablehq.com/@sandraviz/final-project?collection=@sandraviz/ai-ethics)

*Build, explain, and evaluate a recommender system*

The idea is to deepen the students' knowledge about recommender systems and their applications in real life, using Spotify as a use case. Moreover, the students are asked to evaluate the ethical challenges involved.

## Bibliography

All content used in this module, in the form of research papers, books, Wikipedia definitions, and web links, is listed below.

### Research Papers

[When Machine Learning Goes Off the Rails](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3881422)
Boris Babic (Independent); I. Glenn Cohen (Harvard Law School)
2021

[Three Machine Learning Solutions to the Bias-Variance Dilemma](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3588594)
Marcos López de Prado (Cornell University, Operations Research & Industrial Engineering)
2020

[Artificial Intelligence Based Suicide Prediction](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3324874)
Mason Marks (Harvard Law School; Yale Law School; University of New Hampshire Franklin Pierce School of Law; Leiden Law School, Center for Law and Digital Technologies)
2019

[Responsibility & Machine Learning: Part of a Process](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2860048)
Jatinder Singh (University of Cambridge)
2016

[Social Implications of Algorithmic Bias](https://www.researchgate.net/publication/349120634_SOCIAL_IMPLICATIONS_OF_ALGORITHMIC_BIAS)
Łukasz Iwasiński (University of Warsaw)
2020

[Algorithmic Bias in Autonomous Systems](https://www.researchgate.net/publication/318830422_Algorithmic_Bias_in_Autonomous_Systems)
David Danks; Alex John London (Carnegie Mellon University)
2017

[Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855)
Lilian Edwards (University of Newcastle - Law School); Michael Veale (University College London, Faculty of Laws; The Alan Turing Institute)
2017

[Unfair Machine Learning Algorithms](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3408275)
Runshan Fu (Carnegie Mellon University); Manmohan Aseri (Katz Graduate School of Business, University of Pittsburgh); Param Vir Singh, Kannan Srinivasan (Carnegie Mellon University - David A. Tepper School of Business)
2020

[Against the Dehumanisation of Decision-Making](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3188080)
Guido Noto La Diega (University of Stirling)
2018

[Recommendation Systems: The Different Filtering Techniques, Challenges and Review Ways to Measure the Recommender System](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3826124)
Mahesh TR, Vivek V (FET - JAIN - Deemed-to-Be University)
2019

[Streaming Platform and Strategic Recommendation Bias](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3338744)
Marc Bourreau (Telecom ParisTech); Germain Gaudin (University of Freiburg - College of Economics and Behavioral Sciences)
2018

[Recommender Systems and their ethical challenges](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3378581)
Silvia Milano, Mariarosaria Taddeo, Luciano Floridi (University of Oxford - Oxford Internet Institute)
2019

### Books

[The Alignment Problem](https://brianchristian.org/the-alignment-problem/)
Brian Christian
2020
ISBN-13: 978-0393635829

[Machine Learning, revised](https://mitpress.mit.edu/books/machine-learning-revised-and-updated-edition)
Ethem Alpaydin
2021
ISBN-13: 978-0262542524

[Data Science from Scratch](https://github.com/joelgrus/data-science-from-scratch)
Joel Grus
2019 (2nd edition)
ISBN-13: 978-1491901427

[Recommendation Engines](https://mitpress.mit.edu/books/recommendation-engines)
Michael Schrage
2020
ISBN: 978-0262539074

[Hands-On Recommendation Systems with Python](https://github.com/PacktPublishing/Hands-On-Recommendation-Systems-with-Python)
Rounak Banik
2018
ISBN: 978-1788993753

### Wikipedia
- [Machine Learning](https://en.wikipedia.org/wiki/Machine_learning)
- [Bias–variance tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff)
- [Overfitting](https://en.wikipedia.org/wiki/Overfitting)
- [Algorithmic bias](https://en.wikipedia.org/wiki/Algorithmic_bias#cite_note-Seaver-11)
- [Recommender system](https://en.wikipedia.org/wiki/Recommender_system)

### Web links

- [AI Is All Around You](https://okai.brown.edu/chapter1.html)
- [Facebook apology as AI labels black men 'primates'](https://www.bbc.com/news/technology-58462511)
- [A visual introduction to machine learning](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/)
- [Model Tuning and the Bias-Variance Tradeoff](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)
- [Machine bias](https://www.propublica.org/series/machine-bias)
- [Representation and Bias](https://towardsdatascience.com/representation-big-data-and-algorithmic-bias-in-social-data-science-c285350ccc2c)
- [Algorithmic Bias](https://towardsdatascience.com/algorithmic-bias-fff4d8c31290)
- [How Algorithms Can Fight Bias Instead of Entrench It](https://behavioralscientist.org/how-algorithms-can-fight-bias-instead-of-entrench-it/)
- [Anti-racism, algorithmic bias, and policing: a brief introduction](https://towardsdatascience.com/anti-racism-algorithmic-bias-and-policing-a-brief-introduction-bafa0dc75ac6)
- [Why algorithms can be racist and sexist](https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency)
- [Algorithmic Solutions to Algorithmic Bias: A Technical Guide](https://towardsdatascience.com/algorithmic-solutions-to-algorithmic-bias-aef59eaf6565)
- [AI experts say research into algorithms that claim to predict criminality must end](https://www.theverge.com/2020/6/24/21301465/ai-machine-learning-racist-crime-prediction-coalition-critical-technology-springer-study)
- [Human - cognitive biases](https://cassandraxia.com/cogbiases/)
- [The Human Bias-Accuracy Trade-off](https://towardsdatascience.com/the-human-bias-accuracy-trade-off-ad95e3c612a9)
- [Fairness-in-algorithmic-decision-making](https://www.brookings.edu/research/fairness-in-algorithmic-decision-making/)
- [Collaborative Filtering based Recommendation Systems exemplified..](https://towardsdatascience.com/collaborative-filtering-based-recommendation-systems-exemplified-ecbffe1c20b1)
- [Recommender system in-python part-1](https://towardsdatascience.com/recommender-system-in-python-part-1-preparation-and-analysis-d6bb7939091e)
- [Conference on Recommender Systems](https://recsys.acm.org/recsys21/accepted-contributions/)
- [Hybrid technique for recommender system](https://dewaleofficial.medium.com/hybrid-technique-for-recommender-system-86cfb748f585)