Benchmarking GNNs with PyTorch Lightning: Open Graph Benchmarks and image classification from superpixels
https://github.com/ashleve/graph_classification
- Host: GitHub
- URL: https://github.com/ashleve/graph_classification
- Owner: ashleve
- License: MIT
- Created: 2020-12-05T02:07:52.000Z (about 5 years ago)
- Default Branch: main
- Last Pushed: 2022-07-18T11:28:40.000Z (over 3 years ago)
- Last Synced: 2025-03-26T18:52:33.747Z (11 months ago)
- Topics: graph-classification, hydra, image-classification, ogbg, open-graph-benchmark, pytorch-lightning, superpixels
- Language: Jupyter Notebook
- Homepage:
- Size: 1.07 MB
- Stars: 30
- Watchers: 1
- Forks: 6
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
## Description
This repository is intended to be a place for curated, high-quality benchmarks of Graph Neural Networks, implemented with PyTorch Lightning and Hydra.
Only datasets large enough to provide reliable performance estimates are considered.
Built with [lightning-hydra-template](https://github.com/ashleve/lightning-hydra-template).
### Datasets
- [Open Graph Benchmarks](https://ogb.stanford.edu/docs/graphprop/) (graph property prediction)
- Image classification from superpixels (MNIST, FashionMNIST, CIFAR10); a sketch of the superpixel preprocessing is shown below
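For the superpixel datasets, each image is over-segmented into superpixels, which become the nodes of a graph that the GNN classifies. Below is a minimal sketch of this kind of preprocessing, assuming scikit-image and PyTorch Geometric (with `torch-cluster` for `knn_graph`) are installed; the actual pipeline used in this repository may differ.
```python
import numpy as np
import torch
from skimage.segmentation import slic
from skimage.measure import regionprops
from torch_geometric.data import Data
from torch_geometric.nn import knn_graph

def image_to_superpixel_graph(image: np.ndarray, n_segments: int = 75) -> Data:
    """Segment an RGB image into superpixels and build a k-NN graph over them."""
    # SLIC over-segmentation; start_label=1 so no region is treated as background
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=1)

    features, positions = [], []
    for region in regionprops(segments, intensity_image=image):
        features.append(region.mean_intensity)  # mean colour of the superpixel
        positions.append(region.centroid)        # (row, col) centroid

    x = torch.tensor(np.asarray(features), dtype=torch.float)
    pos = torch.tensor(np.asarray(positions), dtype=torch.float)

    # connect each superpixel to its 8 nearest neighbours in image space
    edge_index = knn_graph(pos, k=8)
    return Data(x=x, pos=pos, edge_index=edge_index)
```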
## How to run
Install dependencies
```bash
# clone project
git clone https://github.com/ashleve/graph_classification
cd graph_classification
# [OPTIONAL] create conda environment
conda create -n myenv python=3.8
conda activate myenv
# install PyTorch and PyTorch Geometric according to the instructions at:
# https://pytorch.org/get-started/
# https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html
# install requirements
pip install -r requirements.txt
```
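As an optional sanity check (not part of the repository), you can verify that PyTorch and PyTorch Geometric import correctly and whether a GPU is visible:
```python
import torch
import torch_geometric

print(torch.__version__)
print(torch_geometric.__version__)
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is usable
```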
Train model with default configuration
```bash
# train on CPU
python run.py trainer.gpus=0
# train on GPU
python run.py trainer.gpus=1
```
Train model with chosen experiment configuration from [configs/experiment/](configs/experiment/)
```bash
python run.py experiment=GAT/gat_ogbg_molpcba
python run.py experiment=GraphSAGE/graphsage_mnist_sp75
python run.py experiment=GraphSAGE/graphsage_cifar10_sp100
```
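If you prefer to load an experiment configuration from Python rather than the command line, Hydra's compose API can do it. This is only a sketch; the `config_name` and config layout are assumptions and may not match this repository exactly.
```python
from hydra import compose, initialize

# compose the main config with an experiment override applied
# (config_name="config" is an assumption about the repo layout)
with initialize(config_path="configs"):
    cfg = compose(config_name="config",
                  overrides=["experiment=GAT/gat_ogbg_molpcba"])
print(list(cfg.keys()))
```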
You can override any parameter from the command line like this:
```bash
python run.py trainer.max_epochs=20 datamodule.batch_size=64
```
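These overrides are resolved by Hydra before training starts. As a rough sketch, a Hydra entry point such as `run.py` in lightning-hydra-template style projects typically looks like the following; the attribute names are illustrative, not copied from this repository.
```python
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="configs", config_name="config")
def main(cfg: DictConfig) -> None:
    # command-line overrides such as trainer.max_epochs=20 or
    # datamodule.batch_size=64 are already merged into cfg here
    datamodule = hydra.utils.instantiate(cfg.datamodule)
    model = hydra.utils.instantiate(cfg.model)
    trainer = hydra.utils.instantiate(cfg.trainer)
    trainer.fit(model=model, datamodule=datamodule)

if __name__ == "__main__":
    main()
```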
## Methodology
For each experiment, we ran a hyperparameter search of 10 random trials followed by 5 optimization trials with the Optuna Bayesian sampler. The hyperparameter search configs are available under [configs/hparams_search](configs/hparams_search).
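For reference, here is a minimal standalone sketch of Bayesian search with Optuna's TPE sampler. The repository itself drives Optuna through Hydra's sweeper configs, and `train_and_validate` is a hypothetical helper that trains a model and returns the validation metric.
```python
import optuna

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    hidden_dim = trial.suggest_categorical("hidden_dim", [64, 128, 256])
    # hypothetical helper: train with these hparams, return the validation metric
    return train_and_validate(lr=lr, hidden_dim=hidden_dim)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=15)
print(study.best_params)
```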
After finding the best hyperparameters, each experiment was repeated 5 times with different random seeds. The only exception is `ogbg-molhiv`, whose experiments were repeated 10 times each because of the high variance in results.
The results were averaged and are reported in the table below.
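The aggregation step is simply a mean and standard deviation over the per-seed scores, for example (the numbers below are placeholders, not results from this repository):
```python
import numpy as np

scores = [0.979, 0.984, 0.976, 0.985, 0.981]  # one metric value per seed
print(f"{np.mean(scores):.3f} ± {np.std(scores):.3f}")
```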
## Results
| Architecture | MNIST-sp75 | FashionMNIST-sp75 | CIFAR10-sp100 | ogbg-molhiv | ogbg-molpcba |
| ------------ | ------------- | ----------------- | ------------- | ------------- | ------------- |
| GCN | 0.955 ± 0.014 | 0.835 ± 0.016 | 0.518 ± 0.007 | 0.755 ± 0.019 | 0.231 ± 0.003 |
| GIN | 0.966 ± 0.008 | 0.861 ± 0.012 | 0.512 ± 0.020 | 0.757 ± 0.025 | 0.240 ± 0.001 |
| GAT | 0.976 ± 0.008 | 0.889 ± 0.003 | 0.617 ± 0.005 | 0.751 ± 0.026 | 0.234 ± 0.003 |
| GraphSAGE | 0.981 ± 0.005 | 0.897 ± 0.012 | 0.629 ± 0.012 | 0.761 ± 0.025 | 0.256 ± 0.004 |
The `±` denotes the standard deviation across all seeds.