https://github.com/modanesh/differential_ig
Source code for the differential saliency method used in "Re-understanding Finite-State Representations of Recurrent Policy Networks"
- Host: GitHub
- URL: https://github.com/modanesh/differential_ig
- Owner: modanesh
- Created: 2020-03-16T16:36:51.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2023-10-04T01:05:33.000Z (about 2 years ago)
- Last Synced: 2025-04-10T21:53:00.708Z (6 months ago)
- Topics: computer-vision, explainable-ai, pytorch, reinforcement-learning, saliency
- Language: Python
- Homepage: https://arxiv.org/abs/2006.03745
- Size: 5.11 MB
- Stars: 11
- Watchers: 1
- Forks: 1
- Open Issues: 1
- Metadata Files:
  - Readme: README.md
# Differential Integrated Gradient
This is the implementation of the differential saliency method used in "[Re-understanding Finite-State Representations of Recurrent Policy Networks](https://arxiv.org/abs/2006.03745)", accepted to the **International Conference on Machine Learning (ICML) 2021**.
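
For orientation, here is a minimal sketch of plain integrated gradients in PyTorch, the attribution method the differential variant builds on. Everything in it is illustrative rather than this repo's actual code: the function name, the Riemann-sum approximation, and the assumption that `model` maps a batch of observations to action logits. In the differential setting, the `baseline` appears to be another real observation (cf. the `--baseline_index` flag below) rather than a fixed all-zero input.

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    """Riemann-sum approximation of integrated gradients for one input.

    Assumes `model` maps a batch of observations to action logits and
    that `x` and `baseline` are single observations of the same shape.
    """
    # Straight-line path from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path.requires_grad_(True)

    # Gradient of the target action's logit at every point on the path.
    logits = model(path)
    grads = torch.autograd.grad(logits[:, target].sum(), path)[0]

    # Average gradient along the path, scaled by the input-baseline gap.
    return (x - baseline) * grads.mean(dim=0)
```

By the completeness property of integrated gradients, the attributions sum to the difference between the model's outputs at `x` and at `baseline`, so using a real observation as the baseline highlights the input features responsible for the change in the model's decision between the two.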
## Installation
* Python 3.5+
* To install dependencies:
```bash
pip install -r requirements.txt
```

## Usage
You can use ```main_IG.py``` or ```main_IG_control.py``` to experiment with Atari and classic-control tasks from OpenAI Gym. To begin, you need to load models trained with [MMN](https://github.com/koulanurag/mmn). Once you have completed those steps, you will have an MMN model, which is what this repo requires. Trained models should be put into the ```inputs``` directory with a proper name.
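
As a rough, illustrative sketch (the checkpoint name and loading call are assumptions, not the repo's actual API), placing and loading such a model might look like:

```python
import torch

# Hypothetical checkpoint name; substitute whatever your trained MMN model is called.
model = torch.load("inputs/PongDeterministic-v4_mmn.pt", map_location="cpu")
model.eval()  # inference mode for computing saliency
```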
With the models in place, run the following command to get results for Atari games:
```bash
python main_IG.py --env_type=atari --input_index=43 --baseline_index=103 --env PongDeterministic-v4 --qbn_sizes 64 100 --gru_size 32
```
The input argument values can be adjusted to suit your experiment. Run the following command to get results for control tasks:
```bash
python main_IG_control.py --env_type=classic_control --input_index=10 --baseline_index=106 --env CartPole-v1 --qbn_sizes 4 4 --gru_size 32
```

Results will be saved into the ```results``` folder. Sample results, including an example CartPole output, are already provided in the repo.
## Citation
If you find this work useful in your research, please cite it as:
```bibtex
@inproceedings{danesh2021re,
  title={Re-understanding Finite-State Representations of Recurrent Policy Networks},
  author={Danesh, Mohamad H and Koul, Anurag and Fern, Alan and Khorram, Saeed},
  booktitle={International Conference on Machine Learning},
  pages={2388--2397},
  year={2021},
  organization={PMLR}
}
```