Memory-Associated Differential Learning

10 Feb 2021  ·  Yi Luo, Aiguo Chen, Bei Hui, Ke Yan

Conventional Supervised Learning approaches focus on the mapping from input features to output labels. After training, the learnt model alone is applied to testing features to predict testing labels in isolation, so the training data are discarded and the associations among them are ignored. To take full advantage of the vast amount of training data and its associations, we propose a novel learning paradigm called Memory-Associated Differential (MAD) Learning. We first introduce an additional component called Memory, which memorizes all the training data. We then learn the differences between labels, as well as the associations between features, by combining a differential equation with several sampling methods. Finally, in the evaluation phase, we predict unknown labels by inferring from the memorized facts plus the learnt differences and associations in a geometrically meaningful manner. We first develop this theory in the unary setting and apply it to Image Recognition, then extend it to Link Prediction as a binary setting, in which our method outperforms strong state-of-the-art baselines on the ogbl-ddi dataset.
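Since the abstract only sketches the mechanism, the snippet below gives a minimal, simplified reading of the unary case: a Memory component stores all training pairs, a small network models the *difference* between the label of a query and the label of a memorized reference, and predictions average "memorized fact + learnt difference" over sampled references. This is our own illustrative sketch, not the authors' released implementation; the class name `MADRegressor`, the MLP difference model, and the uniform reference sampler are all assumptions.

```python
import torch
import torch.nn as nn

class MADRegressor(nn.Module):
    """Illustrative sketch of Memory-Associated Differential (MAD) Learning, unary case.

    Memory keeps all training pairs (x_i, y_i). A query x is answered by
    inference from memorized facts:  y_hat = mean_i [ y_i + diff(x, x_i) ],
    where diff() is a learned model of the difference between labels.
    (Simplified assumption; the paper's exact architecture may differ.)
    """

    def __init__(self, in_dim: int, hidden: int = 64, n_samples: int = 8):
        super().__init__()
        self.n_samples = n_samples
        # diff net: takes a (query, reference) feature pair, predicts y(x) - y(x_i).
        self.diff = nn.Sequential(
            nn.Linear(2 * in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.mem_x = None  # filled by memorize()
        self.mem_y = None

    def memorize(self, x: torch.Tensor, y: torch.Tensor) -> None:
        # The Memory component: retain *all* training data for later inference.
        self.mem_x, self.mem_y = x, y

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Uniformly sample reference indices from Memory (a simple sampling method).
        idx = torch.randint(0, self.mem_x.size(0), (x.size(0), self.n_samples))
        ref_x = self.mem_x[idx]                       # (B, S, D) memorized features
        ref_y = self.mem_y[idx]                       # (B, S)    memorized labels
        q = x.unsqueeze(1).expand_as(ref_x)           # (B, S, D) broadcast query
        d = self.diff(torch.cat([q, ref_x], -1)).squeeze(-1)  # predicted label differences
        return (ref_y + d).mean(dim=1)                # memorized fact + learnt difference

# Usage: memorize the training set, then fit diff() by ordinary regression.
model = MADRegressor(in_dim=4)
x_tr, y_tr = torch.randn(256, 4), torch.randn(256)
model.memorize(x_tr, y_tr)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(model(x_tr), y_tr)
    opt.zero_grad(); loss.backward(); opt.step()
```

The binary (Link Prediction) setting reported below generalizes this idea to pairs of nodes; the sketch covers only the unary case described first in the abstract.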


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Link Property Prediction | ogbl-ddi | MAD Learning | Test Hits@20 | 0.6781 ± 0.0294 | #17 |
| Link Property Prediction | ogbl-ddi | MAD Learning | Validation Hits@20 | 0.7010 ± 0.0082 | #15 |
| Link Property Prediction | ogbl-ddi | MAD Learning | Number of params | 1,228,897 | #23 |
| Link Property Prediction | ogbl-ddi | MAD Learning | Ext. data | No | #1 |
