Publications
* denotes equal contribution and joint lead authorship.
2022
A Systematic Analysis of Regularising Terms for Neural Link Prediction Models
Preprint, 2022
Regularisers are instrumental in improving the generalisation accuracy of neural link prediction models. In this paper, we systematically analyse several regularisation methods for factorisation-based neural link predictors and evaluate how they affect downstream link prediction accuracy. We consider multiple methods for regularising neural link predictors, including norm-based regularisers, gradient penalties, auxiliary training objectives, and manifold regularisation. We conduct extensive experiments on three datasets, namely WN18RR, FB15k-237, and Yago3-10. In our analysis, we find that both the gradient penalty and the auxiliary training objectives can improve the generalisation properties of neural link predictors when trained jointly with L2 regularisation, yielding up to a 4.6% increase in MRR, 4.9% in Hits@1, and 8.3% in Hits@10 when using ComplEx. The auxiliary training objectives are most effective when training data is scarce or the model is complex. By contrast, we observe only marginal improvements when using the nuclear 3-norm.
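To make the setting concrete, the sketch below illustrates the kind of objects the abstract refers to: the ComplEx scoring function for a (subject, relation, object) triple, an L2 penalty, and a nuclear 3-norm (N3) penalty over the embedding factors. This is a minimal illustrative sketch using NumPy, not the paper's implementation; embedding dimensions, weights, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # illustrative embedding dimension (an assumption, not the paper's setting)

# Complex-valued embeddings for one (subject, relation, object) triple
e_s = rng.normal(size=dim) + 1j * rng.normal(size=dim)
w_r = rng.normal(size=dim) + 1j * rng.normal(size=dim)
e_o = rng.normal(size=dim) + 1j * rng.normal(size=dim)

def complex_score(e_s, w_r, e_o):
    # ComplEx scoring function: Re(<e_s, w_r, conj(e_o)>)
    return float(np.real(np.sum(e_s * w_r * np.conj(e_o))))

def l2_penalty(*factors, lam=1e-2):
    # L2 regulariser: weighted sum of squared moduli of all factors
    return lam * sum(np.sum(np.abs(f) ** 2) for f in factors)

def n3_penalty(*factors, lam=1e-2):
    # Nuclear 3-norm regulariser: weighted sum of cubed moduli of all factors
    return lam * sum(np.sum(np.abs(f) ** 3) for f in factors)

# A training loss would combine the score-based loss with one of these penalties,
# e.g. loss = softmax_cross_entropy(scores) + n3_penalty(e_s, w_r, e_o)
score = complex_score(e_s, w_r, e_o)
reg = l2_penalty(e_s, w_r, e_o) + n3_penalty(e_s, w_r, e_o)
```

The gradient-penalty and auxiliary-objective variants analysed in the paper would add further terms to the same combined loss; they are omitted here for brevity.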