Human-centered Machine Learning (Spring 2020)
============================
## Course staff
Instructor: Chenhao Tan [contact](mailto:Chenhao.Tan@colorado.edu)
Office hours: 2:15-3:00pm on Mondays, 12:00-1:00pm on Wednesdays, or by appointment (ECES 118A)
## Logistics
* Location and time: ECES 112, 1:00-2:15pm on Mondays and Wednesdays
* [Syllabus](syllabus.md) (Must READ if you are taking the course)
* [Piazza](https://piazza.com/class/k52x8u1t89s10e)
Schedule
===========================
* Week 1: Introduction
* Jan 13, Introduction
* Jan 15, [Ask not what AI can do, but what AI should do: Towards a Framework of Task Delegability](https://arxiv.org/abs/1902.03245). Brian Lubars and Chenhao Tan. NeurIPS 2019.
_Additional reading_:
* [Human-centered Machine Learning: a Machine-in-the-loop Approach](https://medium.com/@ChenhaoTan/human-centered-machine-learning-a-machine-in-the-loop-approach-ed024db34fe7), Chenhao Tan.
* Week 2: You Are Not So Smart
* Jan 20, no class (Martin Luther King Jr. Day).
* Jan 22, [Human Decisions and Machine Predictions](https://academic.oup.com/qje/article/doi/10.1093/qje/qjx032/4095198/Human-Decisions-and-Machine-Predictions#). Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Sendhil Mullainathan. Quarterly Journal of Economics, 2018.
_Additional reading_:
* [Judgment under Uncertainty: Heuristics and Biases](http://science.sciencemag.org/content/185/4157/1124). Amos Tversky and Daniel Kahneman. Science, 1974.
* [Assessing Human Error Against a Benchmark of Perfection](https://arxiv.org/abs/1606.04956). Ashton Anderson, Jon Kleinberg, Sendhil Mullainathan. KDD 2016.
* [Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err](http://opim.wharton.upenn.edu/risk/library/WPAF201410-AlgorthimAversion-Dietvorst-Simmons-Massey.pdf). Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey. Journal of Experimental Psychology: General, 2014.
* Podcast: [You Are Not So Smart](https://youarenotsosmart.com/podcast/)
* [The effect of wording on message propagation: Topic- and author-controlled natural experiments on Twitter](https://chenhaot.com/papers/wording-for-propagation.html). Chenhao Tan, Lillian Lee, Bo Pang. ACL 2014.
* Thinking, Fast and Slow. Daniel Kahneman. 2011.
* Week 3: Machine-in-the-loop Interactions
* Jan 27, [Principles of Mixed-Initiative User Interfaces](http://erichorvitz.com/chi99horvitz.pdf). Eric Horvitz. In Proceedings of CHI, 1999.
* Jan 29, [Towards A Rigorous Science of Interpretable Machine Learning](https://arxiv.org/abs/1702.08608). Finale Doshi-Velez and Been Kim.
_Additional reading_:
* [Guidelines for Human-AI Interaction](https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf). Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. CHI 2019.
* [Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda](https://dl.acm.org/doi/10.1145/3173574.3174156). Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, Mohan Kankanhalli. CHI 2018.
* [Interpretable machine learning: definitions, methods, and applications](https://arxiv.org/abs/1901.04592). W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, Bin Yu. PNAS 2019.
* [Natural Language Translation at the Intersection of AI and HCI](https://dl.acm.org/citation.cfm?id=2798086). Spence Green, Jeffrey Heer, Christopher D. Manning. Queue, 2015.
* [A Review of User Interface Design for Interactive Machine Learning](https://www.repository.cam.ac.uk/bitstream/handle/1810/274032/TIIS_Special_Issue_IML_Survey.pdf). John J. Dudley and Per Ola Kristensson. ACM Transactions on Interactive Intelligent Systems. 2018.
* [Beyond binary choices: Integrating individual and social creativity](https://www.sciencedirect.com/science/article/pii/S1071581905000479). Gerhard Fischer, Elisa Giaccardi, Hal Eden, Masanori Sugimoto, Yunwen Ye. International Journal of Human-Computer Studies, 2005.
* [The Mythos of Model Interpretability](https://arxiv.org/abs/1606.03490). Zachary C. Lipton.
* [A Human-Centered Agenda for Intelligible Machine Learning](http://www.jennwv.com/papers/intel-chapter.pdf). Jennifer Wortman Vaughan and Hanna Wallach.
* [Who is the “Human” in Human-Centered Machine Learning: The Case of Predicting Mental Health from Social Media](http://steviechancellor.com/wp-content/uploads/2019/09/HCML-CSCW-2019.pdf). Stevie Chancellor, Eric P. S. Baumer, and Munmun De Choudhury. CSCW 2019.
* [Human Centered Systems in the Perspective of Organizational and Social Informatics](https://scholarworks.iu.edu/dspace/bitstream/handle/2022/1798/wp97-04B.html). Rob Kling and Leigh Star. 1997.
**Replication playground/paper critique due on Jan 31; I will accept this homework up to one week late.**
* Week 4: Feature-based Explanations
* Feb 3, [Why should I trust you?: Explaining the Predictions of Any Classifier](https://arxiv.org/abs/1602.04938). Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. KDD 2016.
* Feb 5, [Attention is not not Explanation](https://arxiv.org/abs/1908.04626). Sarah Wiegreffe, Yuval Pinter. EMNLP 2019.
_Additional reading_:
* [A Unified Approach to Interpreting Model Predictions](https://arxiv.org/abs/1705.07874). Scott Lundberg, Su-In Lee. NeurIPS 2017.
* [Attention is not Explanation](https://arxiv.org/abs/1902.10186). Sarthak Jain, Byron C. Wallace. NAACL 2019.
* [Is Attention Interpretable?](https://arxiv.org/abs/1906.03731). Sofia Serrano, Noah A. Smith. ACL 2019.
* [Many Faces of Feature Importance: Comparing Built-in and Post-hoc Feature Importance in Text Classification](https://arxiv.org/abs/1910.08534). Vivian Lai, Jon Z. Cai, Chenhao Tan. EMNLP 2019.
* [DeepXplore: Automated Whitebox Testing of Deep Learning Systems](https://arxiv.org/abs/1705.06640). Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. In Proceedings of SOSP, 2017.
* [Rationalizing Neural Predictions](https://people.csail.mit.edu/taolei/papers/emnlp16_rationale.pdf), Tao Lei, Regina Barzilay and Tommi Jaakkola. In Proceedings of EMNLP, 2016.
* [Learning Explanatory Rules from Noisy Data](https://arxiv.org/abs/1711.04574). Richard Evans, Edward Grefenstette. Journal of Artificial Intelligence Research, 2018.
* [Network Dissection: Quantifying Interpretability of Deep Visual Representations](http://netdissect.csail.mit.edu/final-network-dissection.pdf). David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba. In Proceedings of CVPR 2017.
**First proposal due on Feb 7**
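To make the LIME reading above concrete, here is a minimal usage sketch. It assumes the open-source `lime` and `scikit-learn` packages; the iris random forest is a stand-in classifier for illustration, not the paper's experimental setup.

```python
# Minimal LIME sketch: explain a single prediction of a black-box model.
# Assumes `pip install lime scikit-learn`; the iris model is a stand-in.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
)
# LIME perturbs the instance, queries the model on the perturbations, and
# fits a sparse local linear surrogate; its weights are the explanation.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```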
* Week 5: First Proposal
* Feb 10, presentation & discussion
* Feb 12, presentation & discussion
**First proposal peer feedback due on Feb 14.**
* Week 6: Example-based explanations
* Feb 17, [Examples are not Enough, Learn to Criticize! Criticism for Interpretability](http://papers.nips.cc/paper/6300-examples-are-not-enough-learn-to-criticize-criticism-for-interpretability.pdf). Been Kim, Rajiv Khanna, Oluwasanmi Koyejo. NeurIPS 2016.
* Feb 19, [Deep Weighted Averaging Classifiers](https://arxiv.org/abs/1811.02579). Dallas Card, Michael Zhang, Noah A. Smith. FAT* 2019.
_Additional reading_:
* [Case-based explanation of non-case-based learning methods](https://www.ncbi.nlm.nih.gov/pubmed/10566351). Rich Caruana, Hooshang Kangarloo, John David N. Dionisio, Usha Sinha, David Johnson. AMIA 1999.
* [How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins](https://arxiv.org/abs/1905.07186). Mark T. Keane, Eoin M. Kenny. ICCBR 2019.
* [Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning](https://arxiv.org/abs/1803.04765). Nicolas Papernot, Patrick McDaniel.
* [Interactive and Interpretable Machine Learning Models for Human Machine Collaboration](http://people.csail.mit.edu/beenkim/papers/BKimPhDThesis.pdf), Been Kim, PhD thesis.
**Second proposal due on Feb 21.**
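In the spirit of this week's example-based methods, the sketch below explains a prediction by retrieving the nearest training examples to a test point. Using the raw feature space is a simplifying assumption; papers such as Deep k-Nearest Neighbors do this in learned representation space.

```python
# Example-based explanation sketch: justify a prediction by showing the
# closest training examples (raw features stand in for learned features).
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestNeighbors

data = load_iris()
nn = NearestNeighbors(n_neighbors=3).fit(data.data)

x = data.data[0:1]             # instance to explain
dist, idx = nn.kneighbors(x)   # its nearest training neighbors
for d, i in zip(dist[0], idx[0]):
    label = data.target_names[data.target[i]]
    print(f"neighbor #{i}: label={label}, distance={d:.3f}")
```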
* Week 7: Second proposal
* Feb 24, presentation & discussion
* Feb 26, presentation & discussion
**Second proposal peer feedback due on Feb 28.**
* Week 8: Counterfactual explanations
* Mar 2, [Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR](https://arxiv.org/abs/1711.00399). Sandra Wachter, Brent Mittelstadt, Chris Russell.
* Mar 4, [Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations](https://arxiv.org/abs/1905.07697). Ramaravind Kommiya Mothilal, Amit Sharma, Chenhao Tan. FAT* 2020.
_Additional reading_:
* [The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons](https://arxiv.org/abs/1912.04930). Solon Barocas, Andrew D. Selbst, Manish Raghavan. FAT* 2020.
* [Efficient Search for Diverse Coherent Explanations](https://arxiv.org/abs/1901.04909). Chris Russell. FAT* 2019.
**Team formation due on Mar 7.**
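As a companion to the counterfactual-explanation readings, here is a toy sketch of the Wachter et al. idea: find a small perturbation of an input that flips the classifier's decision. The greedy coordinate search is a hypothetical stand-in for their gradient-based optimizer, and the synthetic data and logistic model are assumptions for illustration.

```python
# Toy counterfactual search: greedily nudge one feature at a time until the
# predicted label flips, keeping each perturbation small (0.05 per step).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
target = 1 - clf.predict(x.reshape(1, -1))[0]   # desired (flipped) label
cf = x.copy()
for _ in range(200):
    if clf.predict(cf.reshape(1, -1))[0] == target:
        break
    # Among all single-coordinate steps, take the one that most increases
    # the probability of the target class.
    steps = [cf + s * 0.05 * np.eye(4)[j] for j in range(4) for s in (-1, 1)]
    cf = max(steps, key=lambda c: clf.predict_proba(c.reshape(1, -1))[0][target])

print("original:      ", np.round(x, 2))
print("counterfactual:", np.round(cf, 2))
```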
* Week 9: Adversarial attacks
* Mar 9, [Universal Adversarial Triggers for Attacking and Analyzing NLP](https://arxiv.org/abs/1908.07125). Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh. EMNLP 2019.
* Mar 11, [Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods](https://arxiv.org/abs/1911.02508). Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju. AIES 2020.
_Additional reading_:
* [Misleading Failures of Partial-input Baselines](https://arxiv.org/abs/1905.05778). Eric Wallace, Shi Feng, and Jordan Boyd-Graber. ACL 2019.
* Week 10: Human-AI interaction --- Decision Making
* Mar 16, [On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection](https://arxiv.org/abs/1811.07901). Vivian Lai, Chenhao Tan. FAT* 2019.
* Mar 18, [The Principles and Limits of Algorithm-in-the-Loop Decision Making](https://www.benzevgreen.com/wp-content/uploads/2019/09/19-cscw.pdf). Ben Green, Yiling Chen. CSCW 2019.
_Additional reading_:
* [Prediction Policy Problems](https://www.aeaweb.org/articles?id=10.1257/aer.p20151023). Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, Ziad Obermeyer. American Economic Review, 2015.
* [The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good](https://link.springer.com/chapter/10.1007/978-3-319-54024-5_1). Bruno Lepri, Jacopo Staiano, David Sangokoya, Emmanuel Letouzé, and Nuria Oliver. Transparent Data Mining for Big and Small Data, 2017.
* [Predicting the knowledge–recklessness distinction in the human brain](http://www.pnas.org/content/114/12/3222.full). Iris Vilares, Michael J. Wesley, Woo-Young Ahn, Richard J. Bonnie, Morris Hoffman, Owen D. Jones, Stephen J. Morse, Gideon Yaffe, Terry Lohrenz, and P. Read Montague. PNAS, 2017.
* Week 11: Spring break
* Week 12: Human-AI interaction --- Creative writing
* Mar 30, [Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories](https://chenhaot.com/papers/creative-writing-with-a-machine-in-the-loop.html). Elizabeth Clark, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, Noah A. Smith. In Proceedings of IUI, 2018.
* Apr 1, [Creative Help: A Story Writing Assistant](http://people.ict.usc.edu/~gordon/publications/ICIDS15.PDF). Melissa Roemmele, Andrew S. Gordon. In Proceedings of ICIDS, 2015.
_Additional reading_:
* [Inside Jokes: Identifying Humorous Cartoon Captions](http://www.cs.huji.ac.il/~dshahaf/pHumor.pdf). Dafna Shahaf, Eric Horvitz, Robert Mankoff. In Proceedings of KDD, 2015.
* [Hafez: an Interactive Poetry Generation System](http://xingshi.me/data/pdf/ACL2017demo.pdf). Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. In Proceedings of ACL, 2017 (Demo Track).
* Week 13: Project midpoint presentation
* Apr 6, free time to work on projects
* Apr 8, presentation
**Project peer feedback due on Apr 10.**
* Week 14: Human-AI interaction --- Trust
* Apr 13, [Trust in automation: Designing for appropriate reliance](https://journals.sagepub.com/doi/10.1518/hfes.46.1.50_30392). John D. Lee and Katrina A. See. Human Factors, 2004.
* Apr 15, [Understanding the Effect of Accuracy on Trust in Machine Learning Models](http://mingyin.org/CHI-19/accuracy.pdf). Ming Yin, Jennifer Wortman Vaughan, and Hanna Wallach. CHI 2019.
_Additional reading_:
* [Let Me Explain: Impact of Personal and Impersonal Explanations on Trust in Recommender Systems](https://dl.acm.org/doi/10.1145/3290605.3300717). Johannes Kunkel, Tim Donkers, Lisa Michael, Catalin-Mihai Barbu, and Jürgen Ziegler. CHI 2019.
* Week 15: Fairness, Accountability, and Transparency
* Apr 20, [European Union regulations on algorithmic decision-making and a "right to explanation"](https://arxiv.org/pdf/1606.08813.pdf). Bryce Goodman and Seth Flaxman.
* Apr 22, [Roles for Computing in Social Change](https://arxiv.org/abs/1912.04883). Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, David G. Robinson. FAT* 2020.
_Additional reading_:
* [Equality of opportunity in supervised learning](https://arxiv.org/abs/1610.02413). Moritz Hardt, Eric Price, Nathan Srebro. NeurIPS 2016.
* [Inherent Trade-Offs in the Fair Determination of Risk Scores](https://arxiv.org/abs/1609.05807). Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan. ITCS 2017.
* [Algorithmic decision making and the cost of fairness](https://arxiv.org/abs/1701.08230). Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, Aziz Huq. KDD 2017.
* [Fairness and Abstraction in Sociotechnical Systems](https://dl.acm.org/doi/10.1145/3287560.3287598). Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, Janet Vertesi. FAT* 2019.
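For the fairness readings above (e.g., Hardt et al. and Kleinberg et al.), the sketch below computes two of the group-fairness quantities those papers analyze, the demographic parity gap and the true positive rate gap, on synthetic stand-in data.

```python
# Group-fairness metrics on synthetic data; the deliberate group skew in
# y_pred is an assumption so that the gaps are visibly nonzero.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)     # protected attribute
y_true = rng.integers(0, 2, 1000)    # ground-truth labels
y_pred = (rng.random(1000) < 0.3 + 0.2 * group).astype(int)  # skewed predictions

def positive_rate(mask):
    """Fraction of instances selected by `mask` that are predicted positive."""
    return y_pred[mask].mean()

dp_gap = abs(positive_rate(group == 0) - positive_rate(group == 1))
tpr_gap = abs(positive_rate((group == 0) & (y_true == 1))
              - positive_rate((group == 1) & (y_true == 1)))
print(f"demographic parity gap:      {dp_gap:.3f}")
print(f"equal opportunity (TPR) gap: {tpr_gap:.3f}")
```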
* Week 16: Final Project Presentation
* TBD, presentations (the time may differ from the normal class time).
**Final project report due on May 1.**