Pretraining on Non-linguistic Structure as a Tool for Analyzing Learning Bias in Language Models

30 Apr 2020 · Isabel Papadimitriou, Dan Jurafsky

We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems...
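The abstract's transfer paradigm can be sketched as a two-phase procedure: pretrain a language model on a structured non-linguistic corpus, then freeze the learned sequence-processing weights and train only fresh input embeddings on the target symbol system. The sketch below is a minimal, hypothetical illustration of that idea (the synthetic bracket corpus, model sizes, and training details are assumptions for demonstration, not the paper's exact setup):

```python
# Minimal sketch of pretrain-then-transfer with frozen recurrent weights.
# All hyperparameters and the synthetic data are illustrative assumptions.
import random
import torch
import torch.nn as nn

random.seed(0)
torch.manual_seed(0)

def bracket_corpus(n_seqs=200, max_depth=4, length=20):
    """Random nested-bracket sequences (Dyck-style prefixes); 0='(', 1=')'."""
    seqs = []
    for _ in range(n_seqs):
        seq, depth = [], 0
        while len(seq) < length:
            if depth == 0 or (depth < max_depth and random.random() < 0.5):
                seq.append(0); depth += 1
            else:
                seq.append(1); depth -= 1
        seqs.append(seq)
    return torch.tensor(seqs)

class LM(nn.Module):
    def __init__(self, vocab, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)
    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

def train_step(model, params, batch):
    """One next-token-prediction update over the given parameter subset."""
    opt = torch.optim.Adam(list(params), lr=1e-2)
    logits = model(batch[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), batch[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Phase 1: pretrain the whole model on the non-linguistic structured corpus.
model = LM(vocab=2)
source = bracket_corpus()
for _ in range(5):
    train_step(model, model.parameters(), source)

# Phase 2: freeze the sequence-processing weights; only new embeddings
# for the target symbol system are trained.
model.emb = nn.Embedding(2, 16)            # fresh embeddings for target symbols
for p in model.lstm.parameters():
    p.requires_grad = False
frozen = [p.clone() for p in model.lstm.parameters()]
target = bracket_corpus()                   # stand-in for a target-language corpus
train_step(model, model.emb.parameters(), target)

# The recurrent weights are untouched by the transfer step.
unchanged = all(torch.equal(a, b)
                for a, b in zip(frozen, model.lstm.parameters()))
print(unchanged)
```

Measuring target-side perplexity under this frozen-weights regime is what lets the methodology attribute any transfer benefit to structure learned during pretraining rather than to target-side adaptation.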



