Abstract: Knowledge distillation (KD), an effective compression technique, is used to reduce the resource consumption of graph neural networks (GNNs) and facilitate their deployment on ...