- Host: GitHub
- URL: https://github.com/uzh-rpg/aegnn
- Owner: uzh-rpg
- License: MIT
- Created: 2021-02-16T09:33:07.000Z (over 3 years ago)
- Default Branch: master
- Last Pushed: 2022-05-16T16:23:39.000Z (over 2 years ago)
- Last Synced: 2024-05-07T18:19:28.799Z (6 months ago)
- Language: Python
- Size: 23.7 MB
- Stars: 92
- Watchers: 12
- Forks: 25
- Open Issues: 13
Metadata Files:
- Readme: README.md
- License: LICENSE
# AEGNN: Asynchronous Event-based Graph Neural Networks
This repository contains the code for our CVPR 2022 paper [**AEGNN: Asynchronous Event-based Graph Neural Networks**](http://rpg.ifi.uzh.ch/docs/CVPR22_Schaefer.pdf) by Simon Schaefer*, [Daniel Gehrig*](https://danielgehrig18.github.io/), and [Davide Scaramuzza](http://rpg.ifi.uzh.ch/people_scaramuzza.html). If you use our code or refer to this project, please cite it as follows:
```
@inproceedings{Schaefer22cvpr,
author = {Schaefer, Simon and Gehrig, Daniel and Scaramuzza, Davide},
title = {AEGNN: Asynchronous Event-based Graph Neural Networks},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
year = {2022}
}
```

## Installation
The code depends heavily on PyTorch and the [PyG](https://github.com/pyg-team/pytorch_geometric) framework, which is
optimized only for GPUs supporting CUDA. Our implementation uses CUDA version 11.3. Install the project
requirements with:
```
conda env create --file=environment.yml
```

## Processing Pipeline
We evaluated our approach on three datasets: [NCars](http://www.prophesee.ai/dataset-n-cars/),
[NCaltech101](https://www.garrickorchard.com/datasets/n-caltech101) and
[Prophesee Gen1 Automotive](https://www.prophesee.ai/2020/01/24/prophesee-gen1-automotive-detection-dataset/).
Download and extract them. By default, they are assumed to be in `/data/storage/`; this can be changed by setting
the `AEGNN_DATA_DIR` environment variable.

### Pre-Processing
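For example, to point the code at your dataset location before pre-processing (the path below is illustrative; it matches the default):

```shell
# Set the dataset root directory used by the pre-processing and training scripts.
export AEGNN_DATA_DIR=/data/storage
```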
To efficiently train the graph neural networks, the event graph is generated offline during pre-processing. For
specific instructions about the data structure and data pre-processing, please refer to the
[dataset's readme](aegnn/datasets/README.md).

## Asynchronous & Sparse Pipeline
The code can make **any graph-based convolutional model** asynchronous & sparse with a simple command, without
the need to change the model's definition or forward function.
```
>>> import aegnn
>>> model = GraphConvModel()
>>> model = aegnn.asyncronous.make_model_asynchronous(model, **kwargs)
```
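Conceptually, the conversion visits the model's layers one by one and swaps each supported layer for an asynchronous/sparse equivalent, leaving the rest untouched. The sketch below illustrates only this visit-and-swap idea in plain Python; it is not the actual `aegnn` implementation, and all names in it are hypothetical:

```python
# Conceptual sketch of per-layer conversion with dense fallback
# (hypothetical names; not the aegnn implementation).

class AsyncWrapper:
    """Stands in for a layer's asynchronous/sparse equivalent."""
    def __init__(self, layer):
        self.layer = layer

def make_layers_asynchronous(layers, supported=("conv", "max_pool", "linear")):
    converted = []
    for layer in layers:
        if layer["type"] in supported:
            # Supported layer: replace with its asynchronous version.
            converted.append(AsyncWrapper(layer))
        else:
            # Unsupported layer: keep the dense original unchanged.
            converted.append(layer)
    return converted

model = [{"type": "conv"}, {"type": "dropout"}, {"type": "linear"}]
async_model = make_layers_asynchronous(model)
```

Here the unsupported `dropout` entry passes through unchanged, mirroring the dense fallback described below.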
We support all graph convolutional layers, max pooling, linear layers and more. Each layer is independently
transformed to work asynchronously and sparsely; if there is a layer that we do not support, its dense equivalent
is used instead.

## Evaluation
We support automatic flops and runtime analysis by hooking each layer's forward pass. Similar to the
`make_model_asynchronous()` function, all graph-based convolutional layers, the linear layer and
batch normalization, among others, are supported. As an example, to run an analysis of our model on the
NCars dataset, you can use:
```
python3 aegnn/evaluation/flops.py --device X
```

## Contributing
If you spot a bug, or if you are planning to contribute a fix or feature, please open an issue and
discuss it with us first.