Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs
https://github.com/aras62/sf-gru
- Host: GitHub
- URL: https://github.com/aras62/sf-gru
- Owner: aras62
- License: mit
- Created: 2019-07-18T01:19:37.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2024-10-27T16:33:04.000Z (18 days ago)
- Last Synced: 2024-10-27T19:34:39.367Z (18 days ago)
- Topics: action-anticipation, autonomous-driving, behavior-analysis, computer-vision-algorithms, prediction
- Language: Python
- Size: 47.7 MB
- Stars: 44
- Watchers: 3
- Forks: 12
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
# SF-GRU
This is the Python implementation of the paper **[A. Rasouli, I. Kotseruba, and J. K. Tsotsos, "Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs", BMVC 2019.](https://arxiv.org/pdf/2005.06582)**
### Table of contents
* [Dependencies](#dependencies)
* [Datasets](#datasets)
* [Train](#train)
* [Test](#test)
* [Citation](#citation)
* [Authors](#authors)
* [License](#license)
## Dependencies
The interface is written and tested using Python 3.5. The interface also requires
the following external libraries:
* tensorflow (tested with 1.9 and 1.14)
* keras (tested with 2.1 and 2.2)
* scikit-learn
* numpy
* pillow
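Since the code is sensitive to library versions, a quick availability check can save time before training. The snippet below is a hypothetical helper, not part of the repository; note that `sklearn` and `PIL` are the import names for scikit-learn and pillow:

```python
import importlib.util

# Top-level module names to probe for each required dependency.
required = ["tensorflow", "keras", "sklearn", "numpy", "PIL"]
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print("Missing dependencies:", ", ".join(missing))
else:
    print("All dependencies found.")
```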
## Datasets
The code is trained and tested with the [PIE](http://data.nvision2.eecs.yorku.ca/PIE_dataset/) and [JAAD](http://data.nvision2.eecs.yorku.ca/JAAD_dataset/) datasets. We used the keras implementation of [open-pose](https://github.com/rachit2403/Open-Pose-Keras) to generate poses for the PIE dataset. The generated poses can be found at `data/features/pie/poses`.
## Train
A sample training script is provided below:
```
from sf_gru import SFGRU
from pie_data import PIE

data_opts = { 'seq_type': 'crossing',
              'data_split_type': 'random',
              ... }
imdb = PIE(data_path=)
model_opts = {'obs_input_type': ['local_box', 'local_context', 'pose', 'box', 'speed'],
              ...}
method_class = SFGRU()
beh_seq_train = imdb.generate_data_trajectory_sequence('train', **data_opts)
saved_files_path = method_class.train(beh_seq_train)
```
`from pie_data import PIE` imports the data interface. Download the interface from the corresponding annotation repository.
`data_opts = { 'seq_type': 'crossing', ... }` specifies the data generation parameters from the dataset. Make sure that `seq_type` is set to `'crossing'`. Refer to the `generate_data_trajectory_sequence()` method in the corresponding interface for more information.
`model_opts = {...}` specifies how the training data should be prepared for the model. Refer to `sf_gru.py:get_data()` for more
information on how to set the parameters.
`method_class = SFGRU()` instantiates an object of type SFGRU.
`imdb.generate_data_trajectory_sequence()` generates data sequences from the dataset interface.
`method_class.train()` trains the model and returns the path to the folder where the model and data processing parameters are saved.
A sample of training code can be found in `train_test.py`. All the default parameters in the script replicate the conditions in which the model was trained for the paper. Note that since a `'random'` data split is used, the model may yield different performance at test time.
## Test
A sample test script is provided below:
```
from sf_gru import SFGRU
from pie_data import PIE

data_opts = { 'seq_type': 'crossing',
              'data_split_type': 'random',
              ... }
imdb = PIE(data_path=)
method_class = SFGRU()
beh_seq_test = imdb.generate_data_trajectory_sequence('test', **data_opts)
saved_files_path =
acc, auc, f1, precision, recall = method_class.test(beh_seq_test, saved_files_path)
```
The procedure is similar to train, except that there is no need to specify `model_opts`, as they are
saved in the model folder at train time.
If only test is run without training, `saved_files_path` should be specified. It should be the path to the folder where the model and training parameters are saved. The final model used for the paper can be found at `data/models/pie/sf-rnn`. Note that if test follows train, the path is returned by the `train()` function.
`method_class.test()` tests the performance of the model and returns the results using the following 5 metrics: `acc` (accuracy), `auc` (area under curve), `f1`, `precision` and `recall`. A sample of test code can be found in `train_test.py`.
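For reference, these five metrics correspond to standard scikit-learn scores for binary crossing classification. The snippet below uses toy labels and probabilities for illustration only; it is not taken from the model:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, roc_auc_score, f1_score,
                             precision_score, recall_score)

# Toy ground-truth crossing labels and predicted probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.1, 0.8, 0.2])
y_pred = (y_prob >= 0.5).astype(int)  # threshold probabilities at 0.5

acc = accuracy_score(y_true, y_pred)        # fraction of correct predictions
auc = roc_auc_score(y_true, y_prob)         # ranking quality of the probabilities
f1 = f1_score(y_true, y_pred)               # harmonic mean of precision and recall
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
print(acc, auc, f1, precision, recall)
```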
## Citation
If you use our dataset, please cite:
```
@inproceedings{rasouli2017they,
  title={Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs},
  author={Rasouli, Amir and Kotseruba, Iuliia and Tsotsos, John K},
  booktitle={BMVC},
  year={2019}
}
```
## Authors
* **[Amir Rasouli](https://aras62.github.io/)**
* **[Iuliia Kotseruba](http://www.cse.yorku.ca/~yulia_k/)**

Please send an email to [email protected] or [email protected] if there are any problems with downloading or using the data.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.