https://github.com/eloiz/awesome_explainable_driving
A curated list of papers on explainability and interpretability of self-driving models
- Host: GitHub
- URL: https://github.com/eloiz/awesome_explainable_driving
- Owner: EloiZ
- Created: 2021-01-27T09:24:09.000Z (about 5 years ago)
- Default Branch: main
- Last Pushed: 2021-01-27T09:54:00.000Z (about 5 years ago)
- Last Synced: 2025-04-14T03:02:10.054Z (10 months ago)
- Topics: autonomous-driving, autonomous-vehicles, explainable, explainable-ai, explainable-deepneuralnetwork, explainable-dnn-architecture, explainable-ml, interpretability, interpretable-machine-learning, interpretable-neural-networks, motion-planning, self-driving, self-driving-car, self-driving-vehicle
- Homepage:
- Size: 3.91 KB
- Stars: 6
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# awesome_explainable_driving
A curated list of papers on explainability and interpretability of self-driving models
Most of the references below are organized and discussed in the following survey:
* **Explainability of vision-based autonomous driving systems: Review and challenges** (submitted to IJCV), Eloi Zablocki, Hédi Ben-Younes, Patrick Pérez, Matthieu Cord [[arxiv]](https://arxiv.org/abs/2101.05307)
## Table of Contents
* [Saliency maps](#saliency-maps)
* [Counterfactual interventions and causal inference](#counterfactual-interventions-and-causal-inference)
* [Representation](#representation)
* [Evaluation](#evaluation)
* [Attention maps](#attention-maps)
* [Semantic inputs](#semantic-inputs)
* [Predicting intermediate representations](#predicting-intermediate-representations)
* [Output interpretability](#output-interpretability)
* [Natural language explanations](#natural-language-explanations)
* [Datasets](#datasets)
## Saliency maps
* **Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car** (2017, arxiv), Mariusz Bojarski, Philip Yeres, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Lawrence Jackel, Urs Muller [[arxiv]](https://arxiv.org/abs/1704.07911)
* **VisualBackProp: Efficient Visualization of CNNs for Autonomous Driving** (2018, ICRA), Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Larry Jackel, Urs Muller, Karol Zieba [[arxiv]](https://arxiv.org/abs/1611.05418)
* **Interpretable learning for self-driving cars by visualizing causal attention** (2017, ICCV), Jinkyu Kim, John Canny [[arxiv]](https://arxiv.org/abs/1703.10631)
* **Conditional Affordance Learning for Driving in Urban Environments** (2018, CoRL), Axel Sauer, Nikolay Savinov, Andreas Geiger [[arxiv]](https://arxiv.org/abs/1806.06498)
* **Interpretable Self-Attention Temporal Reasoning for Driving Behavior Understanding** (2020, ICASSP), Yi-Chieh Liu, Yung-An Hsieh, Min-Hung Chen, Chao-Han Huck Yang, Jesper Tegner, Yi-Chang James Tsai [[arxiv]](https://arxiv.org/abs/1911.02172)
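
For readers new to the topic, the common core of gradient-based saliency methods is compact: backpropagate a scalar driving output (e.g. the steering angle) to the input pixels and visualize the gradient magnitude. The sketch below is a minimal, hypothetical illustration only; `SteeringNet`, the layer sizes, and the 66x200 input are placeholder assumptions, not the architecture or procedure of any paper listed above.

```python
# Minimal gradient-saliency sketch for a vision-based driving model.
# SteeringNet and the dummy camera frame are hypothetical placeholders.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Toy CNN that maps a front-camera frame to a scalar steering command."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SteeringNet().eval()
frame = torch.rand(1, 3, 66, 200, requires_grad=True)  # dummy camera frame

model(frame).sum().backward()  # d(steering) / d(pixels)

# Saliency map: per-pixel gradient magnitude, max over color channels.
saliency = frame.grad.abs().max(dim=1).values  # shape (1, 66, 200)
```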
## Counterfactual interventions and causal inference
* **Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car** (2017, arxiv), Mariusz Bojarski, Philip Yeres, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Lawrence Jackel, Urs Muller [[arxiv]](https://arxiv.org/abs/1704.07911)
* **Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk Object Identification via Causal Inference** (2020, IROS), Chengxi Li, Stanley H. Chan, Yi-Ting Chen [[arxiv]](https://arxiv.org/abs/2003.02425)
* **ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst** (2019, Robotics: Science and Systems), Mayank Bansal, Alex Krizhevsky, Abhijit Ogale [[arxiv]](https://arxiv.org/abs/1812.03079)
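
As a rough illustration of the counterfactual idea (intervene on a candidate cause and measure the effect on the model's output), here is a hedged sketch. The zero-masking "object removal", the `causal_effect` helper, and the box coordinates are assumptions for illustration, not the intervention used in the papers above.

```python
# Sketch of a counterfactual intervention: mask an input region (e.g. a detected
# object) and measure how much the driving model's output changes.
import torch

def causal_effect(model, frame, box):
    """Change in the model output when the region box = (x1, y1, x2, y2) is removed."""
    with torch.no_grad():
        original = model(frame)
        intervened = frame.clone()
        x1, y1, x2, y2 = box
        intervened[..., y1:y2, x1:x2] = 0.0  # crude intervention: zero out the region
        return (model(intervened) - original).abs().item()

# With the SteeringNet sketch above (a hypothetical model), something like
# causal_effect(model, frame, (80, 20, 120, 60)) would score that region's influence.
```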
## Representation
* **DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars** (2018, ICSE), Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray [[arxiv]](https://arxiv.org/abs/1708.08559)
## Evaluation
* **ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst** (2019, Robotics: Science and Systems), Mayank Bansal, Alex Krizhevsky, Abhijit Ogale [[arxiv]](https://arxiv.org/abs/1812.03079)
* **Learning Accurate and Human-Like Driving using Semantic Maps and Attention** (2020, IROS), Simon Hecker, Dengxin Dai, Alexander Liniger, Luc Van Gool [[arxiv]](https://arxiv.org/abs/2007.07218)
* **DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars** (2018, ICSE), Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray [[arxiv]](https://arxiv.org/abs/1708.08559)
## Attention maps
* **Interpretable learning for self-driving cars by visualizing causal attention** (2017, ICCV), Jinkyu Kim, John Canny [[arxiv]](https://arxiv.org/abs/1703.10631)
* **Deep Object-Centric Policies for Autonomous Driving** (2019, ICRA), Dequan Wang, Coline Devin, Qi-Zhi Cai, Fisher Yu, Trevor Darrell [[arxiv]](https://arxiv.org/abs/1811.05432)
* **Attentional Bottleneck: Towards an Interpretable Deep Driving Network** (2020, arxiv), Jinkyu Kim, Mayank Bansal [[arxiv]](https://arxiv.org/abs/2005.04298)
* **Learning Accurate and Human-Like Driving using Semantic Maps and Attention** (2020, IROS), Simon Hecker, Dengxin Dai, Alexander Liniger, Luc Van Gool [[arxiv]](https://arxiv.org/abs/2007.07218)
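
Unlike post-hoc saliency, these models build attention into the architecture so the map is a by-product of inference. The sketch below shows the generic pattern (soft attention over a CNN feature grid, loosely in the spirit of visual-attention driving models such as Kim & Canny, 2017); `AttentionSteeringNet` and every layer size are hypothetical assumptions, not a reimplementation of any listed paper.

```python
# Minimal built-in spatial-attention sketch; all names and sizes are illustrative.
import torch
import torch.nn as nn

class AttentionSteeringNet(nn.Module):
    """Predicts steering from a feature grid while exposing its attention map."""
    def __init__(self, channels=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=5, stride=4), nn.ReLU(),
        )
        self.attn_logits = nn.Conv2d(channels, 1, kernel_size=1)  # one score per cell
        self.head = nn.Linear(channels, 1)

    def forward(self, x):
        feats = self.backbone(x)                     # (B, C, H, W)
        b, c, h, w = feats.shape
        logits = self.attn_logits(feats).flatten(2)  # (B, 1, H*W)
        attn = logits.softmax(dim=-1)                # normalized over spatial locations
        context = (feats.flatten(2) * attn).sum(-1)  # (B, C) attention-weighted pooling
        steering = self.head(context)
        return steering, attn.view(b, h, w)          # the attention map is inspectable

model = AttentionSteeringNet().eval()
steering, attn_map = model(torch.rand(1, 3, 66, 200))
```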
## Semantic inputs
## Predicting intermediate representations
## Output interpretability
## Natural language explanations
## Datasets