The Effectiveness of Masked Language Modeling and Adapters for Factual Knowledge Injection

COLING (TextGraphs) 2022 · Sondre Wold

This paper studies the problem of injecting factual knowledge into large pre-trained language models. We train adapter modules on parts of the ConceptNet knowledge graph using the masked language modeling objective and evaluate the method with a series of probing experiments on the LAMA probe. Mean P@k curves for different configurations indicate that the technique is effective: performance on subsets of the LAMA probe increases for large values of k while adding as little as 2.1% additional parameters to the original models.
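As a rough illustration (not the authors' code), the two ingredients named in the abstract can be sketched in NumPy: a bottleneck adapter of the kind typically inserted into transformer layers, which explains why the parameter overhead stays small, and the precision-at-k metric used in LAMA-style probing. All sizes and weights below are made up for the example.

```python
import numpy as np

def adapter_forward(h, W_down, W_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual.
    Adds only about 2*d*m parameters per layer (d = hidden size,
    m = bottleneck size), which is why the overhead can stay near 2%."""
    z = np.maximum(h @ W_down, 0.0)   # (batch, m) bottleneck activations
    return h + z @ W_up               # residual keeps the original (batch, d) shape

def precision_at_k(scores, gold_idx, k):
    """P@k for one cloze query: 1.0 if the gold token is among the
    k highest-scoring vocabulary items, else 0.0."""
    top_k = np.argsort(scores)[::-1][:k]
    return float(gold_idx in top_k)

rng = np.random.default_rng(0)
d, m = 8, 2                           # illustrative hidden and bottleneck sizes
W_down = rng.normal(size=(d, m))
W_up = rng.normal(size=(m, d))
h = rng.normal(size=(4, d))           # a batch of 4 hidden states
out = adapter_forward(h, W_down, W_up)
print(out.shape)                      # (4, 8): shape preserved by the residual

scores = np.array([0.1, 0.9, 0.3, 0.7])   # toy vocabulary scores for one query
print(precision_at_k(scores, gold_idx=3, k=1))  # gold ranked 2nd -> 0.0
print(precision_at_k(scores, gold_idx=3, k=2))  # gold within top-2 -> 1.0
```

Averaging `precision_at_k` over all probe queries for a range of k values yields the mean P@k curves the paper reports.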

