Abstract: We present a simple yet effective interpolation-based regularization technique aimed at improving the generalization of Graph Neural Networks (GNNs) on supervised graph classification. We leverage Mixup, an effective regularizer for vision, where random sample pairs and their labels are interpolated to create synthetic images for training. Unlike images, which lie on a regular grid, graphs have arbitrary structure and topology, and their semantic meaning can be highly sensitive to any modification. This poses two unanswered questions: Can we directly mix up a pair of graph inputs? If so, how well does such a mixing strategy regularize the learning of GNNs? To answer these questions, we propose ifMixup, which adds dummy nodes to make two graphs the same input size and then directly interpolates the two graph inputs. We empirically show that this simple mixing strategy effectively regularizes graph classification learning, yielding predictive accuracy superior to popular graph augmentation baselines.
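The abstract only sketches the mixing rule, so a minimal illustration may help. The following is a hypothetical sketch, not the authors' implementation: it assumes dense NumPy node-feature and adjacency matrices, one-hot label vectors, and a Beta(alpha, alpha) mixing coefficient as in standard Mixup; the function name `if_mixup` and all parameter names are illustrative.

```python
import numpy as np

def if_mixup(x1, a1, y1, x2, a2, y2, alpha=1.0, rng=None):
    """Hypothetical sketch of an ifMixup-style interpolation of two graphs.

    x1, x2: node feature matrices, shapes (n1, d) and (n2, d)
    a1, a2: adjacency matrices, shapes (n1, n1) and (n2, n2)
    y1, y2: one-hot label vectors
    """
    rng = rng or np.random.default_rng()
    n = max(x1.shape[0], x2.shape[0])

    def pad(x, a):
        # Add dummy (all-zero, isolated) nodes so both graphs have n nodes.
        xp = np.zeros((n, x.shape[1])); xp[: x.shape[0]] = x
        ap = np.zeros((n, n)); ap[: a.shape[0], : a.shape[1]] = a
        return xp, ap

    x1, a1 = pad(x1, a1)
    x2, a2 = pad(x2, a2)

    lam = rng.beta(alpha, alpha)               # Mixup mixing coefficient
    x_mix = lam * x1 + (1 - lam) * x2          # interpolate node features
    a_mix = lam * a1 + (1 - lam) * a2          # interpolate edges (weighted graph)
    y_mix = lam * y1 + (1 - lam) * y2          # interpolate labels
    return x_mix, a_mix, y_mix
```

Under these assumptions the mixed graph is a weighted graph whose edges and node features blend the two inputs, and the mixed label is the same convex combination used in standard Mixup.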