Variational Sequential Labelers for Semi-Supervised Learning

We introduce a family of multitask variational methods for semi-supervised sequence labeling. Our model family consists of a latent-variable generative model and a discriminative labeler. The generative models use latent variables to define the conditional probability of a word given its context, drawing inspiration from word prediction objectives commonly used in learning word embeddings. The labeler helps inject discriminative information into the latent space. We explore several latent variable configurations, including ones with hierarchical structure, which enables the model to account for both label-specific and word-specific information. Our models consistently outperform standard sequential baselines on 8 sequence labeling datasets, and improve further with unlabeled data.
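Below is a minimal, illustrative sketch of the kind of multitask variational objective the abstract describes: a latent variable encodes a token's context, a generative decoder predicts the word from the latent, and a discriminative labeler predicts the tag from the same latent, with the label loss applied only on labeled data. This is an assumption-based PyTorch sketch, not the authors' released code; the names (VSLCell, step_loss, beta) are hypothetical, and it omits the hierarchical latent structure of the VSL-GG-Hier variant and the recurrent context encoder.

```python
# Hypothetical sketch of a per-token variational sequential labeling step (PyTorch).
# Assumes `ctx` is a context vector for each token, e.g. a BiLSTM hidden state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VSLCell(nn.Module):
    """Context -> latent z -> (word prediction, label prediction)."""
    def __init__(self, vocab_size, num_labels, ctx_dim=256, latent_dim=64):
        super().__init__()
        self.to_mu = nn.Linear(ctx_dim, latent_dim)      # mean of q(z | context)
        self.to_logvar = nn.Linear(ctx_dim, latent_dim)  # log-variance of q(z | context)
        self.word_decoder = nn.Linear(latent_dim, vocab_size)  # generative p(word | z)
        self.labeler = nn.Linear(latent_dim, num_labels)        # discriminative labeler on z

    def forward(self, ctx):
        mu, logvar = self.to_mu(ctx), self.to_logvar(ctx)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.word_decoder(z), self.labeler(z), mu, logvar

def step_loss(model, ctx, word_ids, label_ids=None, beta=1.0):
    """Multitask loss: word reconstruction + KL for all data, label loss when labels exist."""
    word_logits, label_logits, mu, logvar = model(ctx)
    recon = F.cross_entropy(word_logits, word_ids)  # -E_q[log p(word | z)]
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    loss = recon + beta * kl                        # unsupervised (ELBO-style) part
    if label_ids is not None:                       # labeled sentences only
        loss = loss + F.cross_entropy(label_logits, label_ids)
    return loss
```

Unlabeled sentences contribute only the reconstruction and KL terms, which is what lets the model improve further with unlabeled data as claimed in the abstract.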

EMNLP 2018 · PDF · Abstract

Datasets


Task | Dataset | Model | Metric | Value | Global Rank
Named Entity Recognition (NER) | CoNLL 2003 (English) | VSL-GG-Hier | F1 | 84.7 | #72

Methods


No methods listed for this paper.