Neural language modeling of free word order argument structure

30 Nov 2019 · Charlotte Rochereau, Benoît Sagot, Emmanuel Dupoux

Neural language models trained with a predictive or masked objective have proven successful at capturing short and long distance syntactic dependencies. Here, we focus on verb argument structure in German, which has the interesting property that verb arguments may appear in a relatively free order in subordinate clauses. Therefore, checking that the verb argument structure is correct cannot be done in a strictly sequential fashion, but rather requires keeping track of the arguments' cases irrespective of their order. We introduce a new probing methodology based on minimal variation sets and show that both Transformers and LSTMs achieve a score substantially better than chance on this test. Like humans, they also show graded judgments, preferring canonical word orders and plausible case assignments. However, we also found unexpected discrepancies in the strength of these effects: the LSTMs have difficulty rejecting ungrammatical sentences containing frequent argument structure types (double nominatives), and the Transformers tend to overgeneralize, accepting some infrequent word orders or implausible sentences that humans barely accept.
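
To make the probing setup concrete, here is a minimal sketch (not the authors' implementation) of scoring one minimal variation set with a pretrained language model: the variants share the same lexical material but differ in argument order and case marking, and a model that tracks case should rank the grammatical variants above the double-nominative one. The model name `dbmdz/german-gpt2`, the example sentences, and the scoring helper are illustrative assumptions, not the paper's stimuli or models.

```python
# Minimal sketch of minimal-variation-set probing, assuming a German
# causal LM from the HuggingFace hub (illustrative choice, not the
# paper's models).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "dbmdz/german-gpt2"  # assumed German LM; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability of a sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # .loss is the mean negative log-likelihood per predicted token
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

# One minimal variation set: identical words, varying argument order
# (free in German subordinate clauses) and case assignment.
variation_set = {
    # canonical order: nominative subject before accusative object
    "canonical":  "Ich weiß, dass der Lehrer den Schüler lobt.",
    # scrambled object-first order: grammatical but less frequent
    "scrambled":  "Ich weiß, dass den Schüler der Lehrer lobt.",
    # ungrammatical: two nominatives, no accusative argument
    "double_nom": "Ich weiß, dass der Lehrer der Schüler lobt.",
}

scores = {label: sentence_logprob(s) for label, s in variation_set.items()}
for label, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{label:12s} {score:8.2f}")

# Expected pattern for a case-sensitive model: both grammatical orders
# outscore the double-nominative variant, with a graded preference for
# the canonical order.
```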
