Accepted Papers

Redundancy-Free Computation for Graph Neural Networks

Zhihao Jia: Stanford University; Sina Lin: Microsoft; Rex Ying: Stanford University; Jiaxuan You: Stanford University; Alexandra Porter: Stanford University; Jure Leskovec: Stanford University; Alex Aiken: Stanford University



Graph Neural Networks (GNNs) are based on repeated aggregations of information from nodes’ neighbors in a graph. However, because nodes share many neighbors, a naive implementation leads to repeated, inefficient aggregations and incurs significant computational overhead. Here we propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN representation technique that explicitly avoids redundancy by managing intermediate aggregation results hierarchically, eliminating repeated computations and unnecessary data transfers in GNN training and inference. HAGs perform the same computations and give the same models/accuracy as traditional GNNs, but in much less time due to optimized computations. To identify redundant computations, we introduce an accurate cost function and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN representation, increasing end-to-end training throughput by up to 2.8× and reducing the aggregations and data transfers in GNN training by up to 6.3× and 5.6×, respectively, with only 0.1% memory overhead. Overall, our results represent an important advance in speeding up and scaling up GNNs without any loss in model predictive performance.
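To make the redundancy concrete, the following is a minimal Python sketch of the idea described in the abstract. The toy graph, the sum aggregator, and all variable names here are illustrative assumptions, not taken from the paper, and the paper's cost function and search algorithm for finding optimized HAGs are not shown: the sketch only demonstrates how materializing a shared partial aggregate once removes repeated work while producing identical results.

import numpy as np

# Toy graph (an assumption for illustration): nodes 0 and 1 share
# neighbors {2, 3}, and nodes 2 and 3 share neighbors {0, 1}.
neighbors = {
    0: [2, 3],
    1: [2, 3],
    2: [0, 1],
    3: [0, 1, 2],
}
h = {v: np.random.rand(4) for v in neighbors}  # per-node feature vectors

# Naive GNN aggregation: each node sums its neighbors independently,
# so h[0] + h[1] is computed twice (for nodes 2 and 3), and likewise
# h[2] + h[3] (for nodes 0 and 1).
naive = {v: sum(h[u] for u in neighbors[v]) for v in neighbors}

# HAG-style aggregation: materialize each shared partial aggregate once
# as an intermediate result and reuse it wherever it appears.
agg_01 = h[0] + h[1]   # shared by nodes 2 and 3
agg_23 = h[2] + h[3]   # shared by nodes 0 and 1
hag = {
    0: agg_23,
    1: agg_23,
    2: agg_01,
    3: agg_01 + h[2],  # N(3) = {0, 1, 2}
}

# Both evaluations produce identical aggregates; the HAG version simply
# performs fewer additions (3 here versus 5 for the naive layer).
for v in neighbors:
    assert np.allclose(naive[v], hag[v])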
