Accepted Papers
TinyGNN: Learning Efficient Graph Neural Networks
Bencheng Yan: Tsinghua University; Chaokun Wang: Tsinghua University; Gaoyang Guo: Tsinghua University; Yunkai Lou: Tsinghua University
Recently, Graph Neural Networks (GNNs) have attracted considerable research interest and achieved great success in dealing with graph-based data. The basic idea of GNNs is to aggregate neighbor information iteratively; after k iterations, a k-layer GNN captures each node's k-hop local structure. In this way, a deeper GNN can access much more neighbor information, leading to better performance. However, as a GNN goes deeper, the exponential expansion of neighborhoods incurs expensive computation in batched training and inference. This precludes deeper GNNs from many applications, e.g., real-time systems. In this paper, we aim to learn a small GNN (called TinyGNN) that achieves high performance and infers node representations in a short time. However, since a small GNN cannot explore as much local structure as a deeper GNN does, there exists a neighbor-information gap between the deeper GNN and the small GNN. To address this problem, we leverage peer node information to model the local structure explicitly and adopt a neighbor distillation strategy to learn local structure knowledge from a deeper GNN implicitly. Extensive experimental results demonstrate that TinyGNN is empirically effective and achieves similar or even better performance compared with deeper GNNs, while gaining a 7.73x to 126.59x speed-up on inference across all data sets.
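To make the two ingredients of the abstract concrete, here is a minimal PyTorch sketch of (a) one iteration of mean neighbor aggregation, the building block whose stacking yields a k-hop receptive field, and (b) a standard temperature-scaled distillation loss for pushing a small student GNN toward a deep teacher's predictions. This is not the authors' code: the layer design, the dense adjacency input, the loss form, and the temperature `T` are illustrative assumptions, and the paper's actual peer-aware module and neighbor distillation strategy differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MeanAggregationLayer(nn.Module):
    """One GNN layer: average each node's neighbor features, then transform.

    Stacking k of these layers gives every node a k-hop receptive field,
    which is why deeper GNNs see exponentially larger neighborhoods.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: dense (N, N) adjacency with self-loops (an assumption for brevity);
        # row-normalizing turns the matrix product into mean aggregation.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = (adj @ x) / deg  # aggregate 1-hop neighbor features
        return F.relu(self.linear(h))


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    """Generic KL-based distillation: the small (student) GNN mimics the deep
    (teacher) GNN's softened predictions, transferring k-hop neighborhood
    knowledge without paying the teacher's inference cost at deployment."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```

In this sketch the student would be a 1- or 2-layer stack of `MeanAggregationLayer` trained with the task loss plus `distillation_loss` against a frozen deeper teacher; at inference time only the shallow student runs, which is the source of the reported speed-up.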