Semi-supervised Multitask Learning for Sequence Labeling

ACL 2017 · Marek Rei

We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset. This language modeling objective incentivises the system to learn general-purpose patterns of semantic and syntactic composition, which are also useful for improving accuracy on different sequence labeling tasks. The architecture was evaluated on a range of datasets, covering the tasks of error detection in learner texts, named entity recognition, chunking and POS-tagging. The novel language modeling objective provided consistent performance improvements on every benchmark, without requiring any additional annotated or unannotated data.
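The idea can be sketched in a few lines: a shared Bi-LSTM feeds both a label classifier and two auxiliary softmax heads, where the forward hidden state predicts the next word and the backward hidden state predicts the previous word, and the LM losses are added to the tagging loss with a small weight. The code below is a minimal, illustrative PyTorch sketch under assumed names and sizes (`MultitaskTagger`, `multitask_loss`, `gamma`, the hidden dimensions), not the paper's original implementation.

```python
import torch
import torch.nn as nn

class MultitaskTagger(nn.Module):
    """Illustrative sketch: a Bi-LSTM tagger with auxiliary LM heads.
    The forward direction's states predict the next token, the backward
    direction's states predict the previous token (the 'LMcost')."""
    def __init__(self, vocab_size, num_labels, emb_dim=32, hid_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.label_head = nn.Linear(2 * hid_dim, num_labels)
        # Separate LM heads for each direction (hypothetical sizes)
        self.fwd_lm = nn.Linear(hid_dim, vocab_size)
        self.bwd_lm = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))      # (batch, time, 2*hid_dim)
        hid_dim = h.size(-1) // 2
        fwd, bwd = h[..., :hid_dim], h[..., hid_dim:]
        return self.label_head(h), self.fwd_lm(fwd), self.bwd_lm(bwd)

def multitask_loss(model, tokens, labels, gamma=0.1):
    """Tagging cross-entropy plus gamma-weighted LM cross-entropies.
    Forward LM logits at position t are scored against token t+1,
    backward LM logits at position t against token t-1; boundary
    positions are dropped."""
    ce = nn.CrossEntropyLoss()
    tag_logits, fwd_logits, bwd_logits = model(tokens)
    loss = ce(tag_logits.flatten(0, 1), labels.flatten())
    loss = loss + gamma * ce(fwd_logits[:, :-1].flatten(0, 1), tokens[:, 1:].flatten())
    loss = loss + gamma * ce(bwd_logits[:, 1:].flatten(0, 1), tokens[:, :-1].flatten())
    return loss
```

Because the LM targets are just the input tokens themselves, this secondary objective needs no extra annotation, matching the paper's claim that improvements come without additional annotated or unannotated data.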


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Grammatical Error Detection | CoNLL-2014 A1 | Bi-LSTM + LMcost (trained on FCE) | F0.5 | 17.86 | #6 |
| Grammatical Error Detection | CoNLL-2014 A2 | Bi-LSTM + LMcost (trained on FCE) | F0.5 | 25.88 | #7 |
| Grammatical Error Detection | FCE | Bi-LSTM + LMcost | F0.5 | 48.48 | #4 |
| Part-Of-Speech Tagging | Penn Treebank | Bi-LSTM + LMcost | Accuracy | 97.43 | #15 |
