# Semi-supervised sequence tagging with bidirectional language models

Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power

Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data...
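The idea sketched in the abstract can be illustrated in miniature: each token's word embedding is augmented with context-sensitive representations from a pre-trained bidirectional language model before being fed to the sequence tagger. The snippet below is a hypothetical toy sketch of that concatenation step, not the authors' implementation; the vectors and function names are invented for illustration.

```python
# Toy sketch (hypothetical, not the paper's code): augment each token's
# word embedding with forward and backward LM hidden states, forming the
# context-enriched input to a sequence tagger.

def concat_representations(word_embs, fwd_lm_states, bwd_lm_states):
    """Concatenate, per token, the word embedding with the forward and
    backward LM states: [word; fwd LM; bwd LM]."""
    return [
        w + f + b  # Python list concatenation stands in for vector concat
        for w, f, b in zip(word_embs, fwd_lm_states, bwd_lm_states)
    ]

# Toy 2-dimensional vectors for a 3-token sentence.
word_embs = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
fwd_lm    = [[1.0, 0.0], [0.9, 0.1], [0.8, 0.2]]
bwd_lm    = [[0.0, 1.0], [0.1, 0.9], [0.2, 0.8]]

augmented = concat_representations(word_embs, fwd_lm, bwd_lm)
print(len(augmented), len(augmented[0]))  # → 3 6 (3 tokens, 6-dim inputs)
```

Because the bidirectional LM is trained on unlabeled text, the concatenated states carry contextual information the small labeled corpus alone could not provide.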
