End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF

ACL 2016 · Xuezhe Ma, Eduard Hovy

State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that automatically benefits from both word- and character-level representations by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two datasets for two sequence labeling tasks: the Penn Treebank WSJ corpus for part-of-speech (POS) tagging and the CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both datasets: 97.55% accuracy for POS tagging and 91.21% F1 for NER.
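
Below is a minimal PyTorch sketch (not the authors' released code) of the encoder the abstract describes: a character-level CNN with max-pooling, concatenated with word embeddings, fed to a bidirectional LSTM that produces per-token tag scores for a downstream linear-chain CRF. Class and argument names are illustrative; the hidden sizes and filter counts are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BiLSTMCNNEncoder(nn.Module):
    """Char-CNN + word embedding + BiLSTM encoder; emissions go to a CRF layer."""
    def __init__(self, word_vocab, char_vocab, num_tags,
                 word_dim=100, char_dim=30, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        # Character-level CNN: convolve over each word's characters, then max-pool.
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden // 2,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(hidden, num_tags)  # per-token tag scores

    def forward(self, words, chars):
        # words: (batch, seq_len); chars: (batch, seq_len, max_word_len)
        B, T, L = chars.shape
        c = self.char_emb(chars).view(B * T, L, -1).transpose(1, 2)   # (B*T, char_dim, L)
        c = torch.max(torch.relu(self.char_cnn(c)), dim=2).values     # (B*T, char_filters)
        c = c.view(B, T, -1)
        x = torch.cat([self.word_emb(words), c], dim=-1)               # word + char features
        h, _ = self.lstm(x)
        return self.emissions(h)  # feed these scores to a linear-chain CRF for decoding

# Usage sketch with toy sizes (all values hypothetical).
model = BiLSTMCNNEncoder(word_vocab=10000, char_vocab=100, num_tags=9)
scores = model(torch.randint(1, 10000, (2, 12)), torch.randint(1, 100, (2, 12, 15)))
print(scores.shape)  # torch.Size([2, 12, 9])
```

In the full model, a CRF layer on top of these emission scores adds tag-transition parameters and Viterbi decoding, which is what makes the labeling jointly optimal over the whole sequence rather than token by token.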

Task | Dataset | Model | Metric | Value | Global Rank
Named Entity Recognition (NER) | CoNLL++ | BiLSTM-CNN-CRF | F1 | 91.87 | #7
Named Entity Recognition (NER) | CoNLL 2003 (English) | BLSTM-CNN-CRF | F1 | 91.21 | #67
Part-Of-Speech Tagging | Penn Treebank | BLSTM-CNN-CRF | Accuracy | 97.55 | #10
