End-to-end Deep Reinforcement Learning Based Coreference Resolution

ACL 2019 · Hongliang Fei, Xu Li, Dingcheng Li, Ping Li

Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are usually trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement learning based coreference resolution model to directly optimize coreference evaluation metrics. Specifically, we modify the state-of-the-art higher-order mention ranking approach in Lee et al. (2018) to a reinforced policy gradient model by incorporating the reward associated with a sequence of coreference linking actions. Furthermore, we introduce maximum entropy regularization for adequate exploration to prevent the model from prematurely converging to a bad local optimum. Our proposed model achieves new state-of-the-art performance on the English OntoNotes v5.0 benchmark.
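The training objective the abstract describes, a policy gradient over antecedent-linking actions with a maximum-entropy bonus, can be illustrated with a short PyTorch sketch. This is a minimal illustration of the general technique, not the authors' implementation: the `policy_gradient_loss` helper, the convention that column 0 is a dummy "no antecedent" action, and the `entropy_coef` value are all assumptions made here for clarity.

```python
import torch
from torch.distributions import Categorical

def policy_gradient_loss(antecedent_scores, reward, entropy_coef=0.01):
    """REINFORCE over coreference linking actions with an entropy bonus.

    antecedent_scores: [num_mentions, num_candidates] logits from a
        mention-ranking scorer (column 0 assumed to be the dummy
        'no antecedent' action -- an assumption of this sketch).
    reward: scalar coreference metric (e.g. average F1) computed from
        the clusters induced by the sampled actions, ideally
        baseline-subtracted to reduce variance.
    """
    dist = Categorical(logits=antecedent_scores)
    actions = dist.sample()              # one linking action per mention
    log_probs = dist.log_prob(actions)   # log pi(a_i | mention i)
    entropy = dist.entropy().mean()      # maximum-entropy regularizer

    # REINFORCE: scale the log-probabilities of the sampled trajectory by
    # the reward; the entropy term keeps the policy from collapsing early.
    loss = -(reward * log_probs.sum()) - entropy_coef * entropy
    return loss, actions

# Toy usage: 5 mentions, 4 candidate actions each, a made-up reward of 0.7.
scores = torch.randn(5, 4, requires_grad=True)
loss, actions = policy_gradient_loss(scores, reward=0.7)
loss.backward()
```

In practice the reward would come from scoring the sampled clusters against the gold clusters with the coreference evaluation metrics, so the gradient directly optimizes what is measured at test time rather than a sequence of local heuristic losses.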


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Coreference Resolution | CoNLL 2012 | Reinforced model + ELMo | Avg F1 | 73.8 | #8 |
| Coreference Resolution | OntoNotes | Reinforced + ELMo | F1 | 73.8 | #6 |
