# Awesome GNNs on Large-Scale Graphs

Papers on methods and graph neural networks (GNNs) for large-scale graphs. To overcome the memory bottleneck of training GNNs on large-scale graphs, many sampling strategies, such as node-wise, layer-wise, and subgraph sampling, have been widely explored. In addition, some works design specific GNN architectures to address this problem.
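To make the node-wise strategy concrete, here is a minimal sketch (in the spirit of GraphSAGE-style neighbor sampling, not any specific paper's implementation): for each seed node, at most `fanout` neighbors are kept per layer, so the per-batch memory cost is bounded by the fanout rather than by the true node degrees. The function name and the adjacency-dict representation are illustrative assumptions.

```python
import random

def sample_neighbors(adj, nodes, fanout, seed=0):
    """Node-wise sampling: for each seed node, keep at most `fanout`
    randomly chosen neighbors instead of its full neighborhood.

    adj: dict mapping a node id to a list of neighbor ids.
    Returns a dict with the same keys as `nodes`.
    """
    rng = random.Random(seed)
    sampled = {}
    for v in nodes:
        neigh = adj.get(v, [])
        if len(neigh) <= fanout:
            sampled[v] = list(neigh)      # degree already within budget
        else:
            sampled[v] = rng.sample(neigh, fanout)
    return sampled

# Toy graph: node 0 has 5 neighbors; a fanout of 2 caps the number of
# messages aggregated per node regardless of the true degree.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0]}
batch = sample_neighbors(adj, nodes=[0, 1], fanout=2)
```

Stacking this per layer yields the recursive neighborhood expansion that node-wise methods trade off against layer-wise and subgraph sampling.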

**Pull requests adding more awesome papers are welcome.**

* [Survey] A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware. [[paper](https://arxiv.org/pdf/2306.14052.pdf)]
* [Tutorial] Large-Scale Graph Neural Networks: The Past and New Frontiers. [[homepage](https://sites.google.com/ncsu.edu/gnnkdd2023tutorial/home)]

2023
----
* [ICML 2023] LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation. [[paper](https://arxiv.org/abs/2302.01503)]
* [ICLR 2023] MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization. [[paper](https://arxiv.org/pdf/2210.00102.pdf)]
* [ICLR 2023] LMC: Fast Training of GNNs via Subgraph Sampling with Provable Convergence. [[openreview](https://openreview.net/forum?id=5VBBA91N6n)]

2022
----
* [ICML 2022] GraphFM: Improving Large-Scale GNN Training via Feature Momentum. [[paper](https://arxiv.org/abs/2206.07161)]
* [ICML 2022] Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling. [[paper](https://arxiv.org/abs/2207.03584)]
* [ICLR 2022] PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication. [[paper](https://openreview.net/forum?id=kSwqMH0zn1F)] [[code](https://github.com/RICE-EIC/PipeGCN)]
* [ICLR 2022] EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression. [[paper](https://openreview.net/forum?id=vkaMaq95_rX)] [[code](https://github.com/warai-0toko/Exact)]
* [VLDB 2022] SANCUS: Staleness-Aware Communication-Avoiding Full-Graph Decentralized Training in Large-Scale Graph Neural Networks. [[paper](https://dl.acm.org/doi/10.14778/3538598.3538614)] [[code](https://github.com/chenzhao/light-dist-gnn)]
* [NeurIPS 2022 Datasets and Benchmarks Track] A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking. [[paper](https://arxiv.org/pdf/2210.07494.pdf)] [[code](https://github.com/VITA-Group/Large_Scale_GCN_Benchmarking)]

2021
----
* [NeurIPS 2021] Decoupling the Depth and Scope of Graph Neural Networks. [[paper](https://openreview.net/forum?id=_IY3_4psXuf)] [[code](https://github.com/facebookresearch/shaDow_GNN)]
* [NeurIPS 2021] VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization. [[paper](https://arxiv.org/abs/2110.14363)] [[code](https://github.com/devnkong/VQ-GNN)]
* [ICLR 2021] Combining Label Propagation and Simple Models Out-performs Graph Neural Networks. [[paper](https://arxiv.org/abs/2010.13993)] [[code](https://github.com/CUAI/CorrectAndSmooth)]
* [KDD 2021] Scaling Up Graph Neural Networks Via Graph Coarsening. [[paper](https://arxiv.org/pdf/2106.05150.pdf)] [[code](https://github.com/szzhang17/Scaling-Up-Graph-Neural-Networks-Via-Graph-Coarsening)]
* [ICML 2021] GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings. [[paper](https://arxiv.org/abs/2106.05609)] [[code](https://github.com/rusty1s/pyg_autoscale)]

2020
----

* [ICLR 2020] GraphSAINT: Graph Sampling Based Inductive Learning Method. [[paper](https://arxiv.org/abs/1907.04931)] [[code](https://github.com/GraphSAINT/GraphSAINT)]
* [KDD 2020] Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks. [[paper](https://arxiv.org/abs/2006.13866)] [[code](https://github.com/CongWeilin/mvs_gcn)]
* [KDD 2020] Scaling Graph Neural Networks with Approximate PageRank. [[paper](https://arxiv.org/abs/2007.01570)] [[TensorFlow](https://github.com/TUM-DAML/pprgo_tensorflow)] [[PyTorch](https://github.com/TUM-DAML/pprgo_pytorch)] [[web](https://www.in.tum.de/daml/pprgo/)]
* [ICML Workshop 2020] SIGN: Scalable Inception Graph Networks. [[paper](https://arxiv.org/abs/2004.11198)] [[code](https://github.com/twitter-research/sign)]
* [ICML 2020] Simple and Deep Graph Convolutional Networks. [[paper](https://arxiv.org/abs/2007.02133)] [[code](https://github.com/chennnM/GCNII)]
* [NeurIPS 2020] Scalable Graph Neural Networks via Bidirectional Propagation. [[paper](https://arxiv.org/abs/2010.15421)] [[code](https://github.com/chennnM/GBP)]

2019
----

* [ICLR 2019] Predict then Propagate: Graph Neural Networks meet Personalized PageRank. [[paper](https://arxiv.org/abs/1810.05997)] [[code](https://github.com/benedekrozemberczki/APPNP)]
* [KDD 2019] Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. [[paper](https://arxiv.org/abs/1905.07953)] [[TensorFlow](https://github.com/google-research/google-research/tree/34444253e9f57cd03364bc4e50057a5abe9bcf17/cluster_gcn)] [[PyTorch](https://github.com/benedekrozemberczki/ClusterGCN)]
* [ICML 2019] Simplifying Graph Convolution Networks. [[paper](https://arxiv.org/abs/1902.07153)] [[code](https://github.com/Tiiiger/SGC)]
* [NeurIPS 2019] Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. [[paper](https://arxiv.org/abs/1911.07323)] [[code](https://github.com/acbull/LADIES)]
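Several entries above (SGC, and in later years SIGN and PPRGo) scale by moving graph propagation out of training: features are smoothed over the graph once as a preprocessing step, after which only a simple model is trained on the precomputed features. A minimal NumPy sketch of that idea, in the style of SGC's k-hop propagation (the function name and the dense-matrix setup are illustrative assumptions; real implementations use sparse matrices):

```python
import numpy as np

def sgc_features(A, X, k=2):
    """SGC-style precomputation: propagate features k hops through the
    symmetrically normalized adjacency (with self-loops). Training then
    reduces to fitting a linear model on the propagated features, with
    no graph operations in the training loop.

    A: (n, n) dense adjacency matrix; X: (n, d) feature matrix.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                     # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt       # D^{-1/2} (A+I) D^{-1/2}
    for _ in range(k):                        # k propagation hops
        X = S @ X
    return X

# Two connected nodes with identity features: after propagation their
# features are fully mixed, and S is computed only once.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
out = sgc_features(A, np.eye(2), k=2)
```

Because the propagated features are fixed, mini-batching the downstream linear model needs no neighbor sampling at all, which is why this family of methods scales so easily.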

2018
----

* [ICLR 2018] FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. [[paper](https://arxiv.org/abs/1801.10247)] [[code](https://github.com/matenure/FastGCN)]
* [KDD 2018] Large-Scale Learnable Graph Convolutional Networks. [[paper](https://arxiv.org/abs/1808.03965)] [[code](https://github.com/divelab/lgcn)]
* [ICML 2018] Stochastic Training of Graph Convolutional Networks with Variance Reduction. [[paper](https://arxiv.org/abs/1710.10568)] [[code](https://github.com/thu-ml/stochastic_gcn)]
* [NeurIPS 2018] Adaptive Sampling Towards Fast Graph Representation Learning. [[paper](https://arxiv.org/abs/1809.05343)] [[code](https://github.com/huangwb/AS-GCN)]

2017
----

* [NIPS 2017] Inductive Representation Learning on Large Graphs. [[paper](https://arxiv.org/abs/1706.02216)] [[code](https://github.com/williamleif/GraphSAGE)]