Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/KangchengHou/gntk
Implementation of Graph Neural Tangent Kernel (NeurIPS 2019)
- Host: GitHub
- URL: https://github.com/KangchengHou/gntk
- Owner: KangchengHou
- License: MIT
- Created: 2019-09-08T14:44:23.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2020-01-28T00:56:42.000Z (almost 5 years ago)
- Last Synced: 2024-08-01T17:24:44.241Z (4 months ago)
- Language: Python
- Homepage:
- Size: 4.84 MB
- Stars: 100
- Watchers: 6
- Forks: 20
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-graph-classification - Python Reference
README
# Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels
This repository implements the Graph Neural Tangent Kernel (GNTK), the kernel induced by infinitely wide multi-layer GNNs trained by gradient descent, described in the following paper:
Simon S. Du, Kangcheng Hou, Barnabás Póczos, Ruslan Salakhutdinov, Ruosong Wang, Keyulu Xu. Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels. NeurIPS 2019. [[arXiv]](https://arxiv.org/abs/1905.13192) [[Paper]](https://papers.nips.cc/paper/8809-graph-neural-tangent-kernel-fusing-graph-neural-networks-with-graph-kernels)
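To make the construction concrete, below is a minimal NumPy sketch of a GNTK-style recursion for a single pair of graphs, written under simplifying assumptions (constant node features, aggregation with self-loops, a fixed ReLU scaling). It is an illustration of the idea only, not the repository's `gram.py`, and all function and variable names are hypothetical.
```
import numpy as np

def relu_kernels(sigma):
    """Closed-form Gaussian expectations for ReLU (arc-cosine kernel).

    Returns E[phi(u)phi(v)] and E[phi'(u)phi'(v)] for (u, v) ~ N(0, sigma),
    scaled by 2 so the variance stays roughly constant across layers."""
    d = np.sqrt(np.outer(np.diag(sigma), np.diag(sigma)))
    cos = np.clip(sigma / d, -1.0, 1.0)
    theta = np.arccos(cos)
    k1 = d * (np.sin(theta) + (np.pi - theta) * cos) / np.pi
    k0 = (np.pi - theta) / np.pi
    return k1, k0

def gntk_pair(A1, A2, num_blocks=2, num_mlp_layers=2, c_u=1.0):
    """Kernel value between two graphs given their adjacency matrices."""
    n1, n2 = A1.shape[0], A2.shape[0]
    # Work on the disjoint union so within- and cross-graph covariances
    # can be updated with the same matrix operations.
    A = np.block([[A1 + np.eye(n1), np.zeros((n1, n2))],
                  [np.zeros((n2, n1)), A2 + np.eye(n2)]])
    x = np.ones((n1 + n2, 1))      # constant node features, for illustration only
    sigma = x @ x.T                # covariance of the pre-activations
    ntk = sigma.copy()             # neural tangent kernel, initialized to sigma
    readout = 0.0
    for _ in range(num_blocks):
        # BLOCK operation: neighborhood aggregation (with self-loops), scaled by c_u.
        sigma = c_u * A @ sigma @ A.T
        ntk = c_u * A @ ntk @ A.T
        for _ in range(num_mlp_layers):
            k1, k0 = relu_kernels(sigma)
            ntk = ntk * k0 + k1    # NTK recursion through one ReLU layer
            sigma = k1
        # Jumping-knowledge-style readout: accumulate the cross-graph block.
        readout += ntk[:n1, n1:].sum()
    return readout

# Toy usage: kernel value between a triangle and a 3-node path.
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(gntk_pair(triangle, path))
```
The `num_blocks`, `num_mlp_layers`, and `c_u` arguments mirror the quantities set in the test run below.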
## Test run
Unzip the dataset file
```
unzip dataset.zip
```
Here we demonstrate how to use GNTK to perform classification on the IMDB-BINARY dataset. We set the number of BLOCK operations to 2, the number of MLP layers to 2, and c_u to 1.
Compute the GNTK gram matrix
```
mkdir out
python gram.py --dataset IMDBBINARY --num_mlp_layers 2 --num_layers 2 --scale uniform --jk 1 --out_dir out
```
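As a rough illustration of what this step produces, the gram matrix is simply the pairwise kernel evaluated for every pair of graphs in the dataset. The sketch below uses hypothetical names and adds a normalization step that is a common preprocessing choice, not something confirmed about `gram.py`.
```
import numpy as np

def gram_matrix(adjacency_list, pair_kernel):
    """Assemble a (n_graphs, n_graphs) gram matrix from a pairwise kernel."""
    n = len(adjacency_list)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):  # the kernel is symmetric, so fill both halves
            K[i, j] = K[j, i] = pair_kernel(adjacency_list[i], adjacency_list[j])
    # Normalize so each graph has unit self-similarity (a common preprocessing step).
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

# e.g. K = gram_matrix(adjs, gntk_pair), with a pairwise kernel like the sketch above
```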
Classification with kernel regression
```
python search.py --data_dir ./out --dataset IMDBBINARY
```
The hyper-parameter search results are written to `./out/grid_search.csv`.
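For intuition about this step, one standard way to classify with a precomputed graph kernel is a support vector machine with `kernel="precomputed"` in scikit-learn, grid-searched over the regularization parameter. The sketch below assumes the gram matrix and labels were saved as `.npy` files; the file names are assumptions, not necessarily what `gram.py` writes.
```
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

gram = np.load("out/gram.npy")       # (n_graphs, n_graphs) kernel matrix
labels = np.load("out/labels.npy")   # (n_graphs,) class labels

for C in [0.01, 0.1, 1.0, 10.0, 100.0]:   # small grid over the SVM regularization
    clf = SVC(kernel="precomputed", C=C)
    scores = cross_val_score(clf, gram, labels, cv=10)
    print(f"C={C}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```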
## Experiments for all datasets
To run the experiments described in our paper, run `bash run_gram.sh` followed by `bash run_search.sh`.