A list of awesome systems for graph neural networks (GNNs). If you have any comments or suggestions, please create an issue or open a pull request.

## Open Source Libraries
- PyG: Graph Neural Network Library for PyTorch
- DGL: Python Package Built to Ease Deep Learning on Graph
- Graph Nets: Build Graph Nets in Tensorflow
- Euler: A Distributed Graph Deep Learning Framework
- StellarGraph: Machine Learning on Graphs
- Spektral: Graph Neural Networks with Keras and Tensorflow 2
- PGL: An Efficient and Flexible Graph Learning Framework Based on PaddlePaddle
- CogDL: An Extensive Toolkit for Deep Learning on Graphs
- DIG: A Turnkey Library for Diving into Graph Deep Learning Research
- Jraph: A Graph Neural Network Library in Jax
- Graph-Learn: An Industrial Graph Neural Network Framework
- DeepGNN: A Framework for Training Machine Learning Models on Large Scale Graph Data
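
All of the libraries above expose some flavor of message-passing layer on top of an existing tensor framework. As a minimal sketch of what that looks like in practice, here is a two-layer GCN written against PyG's `GCNConv` layer (the hidden size, dropout rate, and random inputs are arbitrary choices for illustration, not anything prescribed by the library):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv  # PyG's graph convolution layer


class GCN(torch.nn.Module):
    """A minimal two-layer GCN for node classification."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # Each layer aggregates neighbor features along edge_index
        # (a [2, num_edges] tensor of source/target node indices).
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)


# Toy usage: 100 nodes with 16 features each, 400 random edges, 4 classes.
model = GCN(in_dim=16, hidden_dim=32, num_classes=4)
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))
logits = model(x, edge_index)  # shape: [100, 4]
```

DGL, Spektral, and the other libraries follow the same pattern with their own layer classes and graph representations.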
## Survey Papers

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
arXiv 2022 | Distributed Graph Neural Network Training: A Survey | BUPT | [paper] | |
arXiv 2022 | Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis | ETHZ | [paper] | |
CSUR 2022 | Computing Graph Neural Networks: A Survey from Algorithms to Accelerators | UPC | [paper] |
## GNN Libraries

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
JMLR 2021 | DIG: A Turnkey Library for Diving into Graph Deep Learning Research | TAMU | [paper] | [code] |
arXiv 2021 | CogDL: A Toolkit for Deep Learning on Graphs | THU | [paper] | [code] |
CIM 2021 | Graph Neural Networks in TensorFlow and Keras with Spektral | Università della Svizzera italiana | [paper] | [code] |
arXiv 2019 | Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks | AWS | [paper] | [code] |
VLDB 2019 | AliGraph: A Comprehensive Graph Neural Network Platform | Alibaba | [paper] | [code] |
arXiv 2019 | Fast Graph Representation Learning with PyTorch Geometric | TU Dortmund University | [paper] | [code] |
arXiv 2018 | Relational Inductive Biases, Deep Learning, and Graph Networks | DeepMind | [paper] | [code] |
## GNN Kernels

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
MLSys 2022 | Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective | THU | [paper] | [code] |
HPDC 2022 | TLPGNN: A Lightweight Two-Level Parallelism Paradigm for Graph Neural Network Computation on GPU | GWU | [paper] | |
IPDPS 2021 | FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural Networks | Indiana University Bloomington | [paper] | [code] |
SC 2020 | GE-SpMM: General-purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks | THU | [paper] | [code] |
ICCAD 2020 | fuseGNN: Accelerating Graph Convolutional Neural Network Training on GPGPU | UCSB | [paper] | [code] |
IPDPS 2020 | PCGCN: Partition-Centric Processing for Accelerating Graph Convolutional Network | PKU | [paper] |
## GNN Compilers

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
MLSys 2022 | Graphiler: Optimizing Graph Neural Networks with Message Passing Data Flow Graph | ShanghaiTech | [paper] | [code] |
EuroSys 2021 | Seastar: Vertex-Centric Programming for Graph Neural Networks | CUHK | [paper] | |
SC 2020 | FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems | Cornell | [paper] | [code] |
## Distributed GNN Training Systems

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
VLDB 2022 | Sancus: Staleness-Aware Communication-Avoiding Full-Graph Decentralized Training in Large-Scale Graph Neural Networks | HKUST | [paper] | [code] |
MLSys 2022 | BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling | Rice, UIUC | [paper] | [code] |
MLSys 2022 | Sequential Aggregation and Rematerialization: Distributed Full-batch Training of Graph Neural Networks on Large Graphs | Intel | [paper] | [code] |
WWW 2022 | PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm | PKU | [paper] | |
ICLR 2022 | PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication | Rice | [paper] | [code] |
ICLR 2022 | Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks | PSU | [paper] | [code] |
arXiv 2021 | Distributed Hybrid CPU and GPU Training for Graph Neural Networks on Billion-Scale Graphs | AWS | [paper] | |
SC 2021 | DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks | Intel | [paper] | [code] |
SC 2021 | Efficient Scaling of Dynamic Graph Neural Networks | IBM | [paper] | |
CLUSTER 2021 | 2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters | NUDT | [paper] | |
OSDI 2021 | P3: Distributed Deep Graph Learning at Scale | MSR | [paper] | |
OSDI 2021 | Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads | UCLA | [paper] | [code] |
arXiv 2021 | GIST: Distributed Training for Large-Scale Graph Convolutional Networks | Rice | [paper] | |
EuroSys 2021 | FlexGraph: A Flexible and Efficient Distributed Framework for GNN Training | Alibaba | [paper] | |
EuroSys 2021 | DGCL: An Efficient Communication Library for Distributed GNN Training | CUHK | [paper] | [code] |
SC 2020 | Reducing Communication in Graph Neural Network Training | UC Berkeley | [paper] | [code] |
VLDB 2020 | G³: When Graph Neural Networks Meet Parallel Graph Processing Systems on GPUs | NUS | [paper] | [code] |
IA3 2020 | DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs | AWS | [paper] | [code] |
MLSys 2020 | Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc | Stanford | [paper] | [code] |
arXiv 2020 | AGL: A Scalable System for Industrial-purpose Graph Machine Learning | Ant Financial Services Group | [paper] | |
ATC 2019 | NeuGraph: Parallel Deep Neural Network Computation on Large Graphs | PKU | [paper] |
## GNN Training Systems

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
EuroSys 2023 | MariusGNN: Resource-Efficient Out-of-Core Training of Graph Neural Networks | UW–Madison | [paper] | [code]
VLDB 2022 | ByteGNN: Efficient Graph Neural Network Training at Large Scale | ByteDance | [paper] | |
ICML 2022 | GraphFM: Improving Large-Scale GNN Training via Feature Momentum | TAMU | [paper] | [code] |
ICML 2021 | GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings | TU Dortmund University | [paper] | [code] |
OSDI 2021 | GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs | UCSB | [paper] | [code] |
## Quantized GNNs

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
Neurocomputing 2022 | EPQuant: A Graph Neural Network Compression Approach Based on Product Quantization | ZJU | [paper] | [code] |
ICLR 2022 | EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression | Rice | [paper] | [code] |
PPoPP 2022 | QGTC: Accelerating Quantized Graph Neural Networks via GPU Tensor Core | UCSB | [paper] | [code] |
CVPR 2021 | Binary Graph Neural Networks | ICL | [paper] | [code] |
CVPR 2021 | Bi-GCN: Binary Graph Convolutional Network | Beihang University | [paper] | [code] |
EuroMLSys 2021 | Learned Low Precision Graph Neural Networks | Cambridge | [paper] | |
World Wide Web 2021 | Binarized Graph Neural Network | UTS | [paper] | |
ICLR 2021 | Degree-Quant: Quantization-Aware Training for Graph Neural Networks | Cambridge | [paper] | [code] |
ICTAI 2020 | SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization | UCSB | [paper] | [code] |
## GNN Dataloaders

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
NSDI 2023 | BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing | ByteDance | [paper] | |
MLSys 2022 | Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining | MIT | [paper] | [code] |
EuroSys 2022 | GNNLab: A Factored System for Sample-based GNN Training over GPUs | SJTU | [paper] | [code] |
KDD 2021 | Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs | UCLA | [paper] | |
PPoPP 2021 | Understanding and Bridging the Gaps in Current GNN Performance Optimizations | THU | [paper] | [code] |
VLDB 2021 | Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture | UIUC | [paper] | [code] |
TPDS 2021 | Efficient Data Loader for Fast Sampling-Based GNN Training on Large Graphs | USTC | [paper] | [code] |
SoCC 2020 | PaGraph: Scaling GNN Training on Large Graphs via Computation-aware Caching | USTC | [paper] | [code] |
arXiv 2019 | TigerGraph: A Native MPP Graph Database | UCSD | [paper] |
## GNN Training Accelerators

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
ISCA 2022 | Graphite: Optimizing Graph Neural Networks on CPUs Through Cooperative Software-Hardware Techniques | UIUC | [paper] | |
ISCA 2022 | Hyperscale FPGA-as-a-Service Architecture for Large-Scale Distributed Graph Neural Network | Alibaba | [paper] | |
arXiv 2021 | GCNear: A Hybrid Architecture for Efficient GCN Training with Near-Memory Processing | PKU | [paper] | |
DATE 2021 | ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks | WSU | [paper] | |
TCAD 2021 | Rubik: A Hierarchical Architecture for Efficient Graph Learning | Chinese Academy of Sciences | [paper] | |
FPGA 2020 | GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms | USC | [paper] | [code] |
## GNN Inference Accelerators

Venue | Title | Affiliation | Link | Source |
---|---|---|---|---|
JAIHC 2022 | DRGN: A Dynamically Reconfigurable Accelerator for Graph Neural Networks | XJTU | [paper] | |
DAC 2022 | GNNIE: GNN Inference Engine with Load-balancing and Graph-specific Caching | UMN | [paper] | |
IPDPS 2022 | Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators | GaTech | [paper] | [code] |
arXiv 2022 | FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming | GaTech | [paper] | |
CICC 2022 | StreamGCN: Accelerating Graph Convolutional Networks with Streaming Processing | UCLA | [paper] | |
HPCA 2022 | Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures | HUST | [paper] | |
HPCA 2022 | GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design | Rice, PNNL | [paper] | [code] |
arXiv 2022 | GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration | GaTech | [paper] | |
DAC 2021 | DyGNN: Algorithm and Architecture Support of Vertex Dynamic Pruning for Graph Neural Networks | Hunan University | [paper] | |
DAC 2021 | BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices | PKU | [paper] | |
DAC 2021 | TARe: Task-Adaptive in-situ ReRAM Computing for Graph Learning | Chinese Academy of Sciences | [paper] | |
ICCAD 2021 | G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency | Rice | [paper] | |
MICRO 2021 | I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization | PNNL | [paper] | |
arXiv 2021 | ZIPPER: Exploiting Tile- and Operator-level Parallelism for General and Scalable Graph Neural Network Acceleration | SJTU | [paper] | |
TComp 2021 | EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks | Chinese Academy of Sciences | [paper] | |
HPCA 2021 | GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks | GWU | [paper] | |
APA 2020 | GNN-PIM: A Processing-in-Memory Architecture for Graph Neural Networks | PKU | [paper] | |
ASAP 2020 | Hardware Acceleration of Large Scale GCN Inference | USC | [paper] | |
DAC 2020 | Hardware Acceleration of Graph Neural Networks | UIUC | [paper] | |
MICRO 2020 | AWB-GCN: A Graph Convolutional Network Accelerator with Runtime Workload Rebalancing | PNNL | [paper] | |
arXiv 2020 | GRIP: A Graph Neural Network Accelerator Architecture | Stanford | [paper] | |
HPCA 2020 | HyGCN: A GCN Accelerator with Hybrid Architecture | UCSB | [paper] |
## Contribute

We welcome contributions to this repository. To add new papers to this list, please update the JSON files under `./res/papers/`. Our bots will update the paper list in `README.md` automatically, and the citations of newly added papers will be updated within one day.
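
The authoritative entry schema is whatever the existing files in `./res/papers/` use, so copy an existing entry when contributing. Purely as a hypothetical sketch (the field names below are guesses mirroring the table columns, not the repository's actual schema), a new entry might look like:

```json
{
  "venue": "HPCA 2020",
  "title": "HyGCN: A GCN Accelerator with Hybrid Architecture",
  "affiliation": "UCSB",
  "paper": "<paper URL>",
  "code": "<repository URL, omit if none>"
}
```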