Frustratingly Easy Neural Domain Adaptation

COLING 2016 · Young-Bum Kim, Karl Stratos, Ruhi Sarikaya

Popular techniques for domain adaptation, such as the feature augmentation method of Daumé III (2009), have mostly been considered for sparse binary-valued features, but not for dense real-valued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs, where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. In experiments on slot tagging over 17 domains, our methods give a clear performance improvement over Daumé III (2009) applied to feature-rich CRFs.
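The K + 1 LSTM construction is a direct analogue of Daumé III's augmented feature space: each token gets a shared representation and a domain-specific representation, concatenated before tagging. Below is a minimal sketch of that idea, assuming PyTorch; the class name, layer sizes, and tagging head are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of K + 1 LSTM feature
# augmentation: one shared LSTM over all K domains plus K
# domain-specific LSTMs, with outputs concatenated per token.
import torch
import torch.nn as nn

class AugmentedLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_domains, num_tags,
                 emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared LSTM: trained on data from all K domains.
        self.shared_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # One private LSTM per domain, selected by domain id at run time.
        self.domain_lstms = nn.ModuleList(
            nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            for _ in range(num_domains)
        )
        # Concatenated [shared; domain-specific] features -> tag scores.
        self.tagger = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, tokens, domain_id):
        x = self.embed(tokens)                          # (batch, seq, emb)
        h_shared, _ = self.shared_lstm(x)               # global patterns
        h_domain, _ = self.domain_lstms[domain_id](x)   # domain-specific
        h = torch.cat([h_shared, h_domain], dim=-1)
        return self.tagger(h)                           # (batch, seq, tags)

# Usage: tag a batch drawn from domain 3 of K = 17 domains.
model = AugmentedLSTMTagger(vocab_size=10000, num_domains=17, num_tags=20)
tokens = torch.randint(0, 10000, (2, 12))  # two sentences, 12 tokens each
scores = model(tokens, domain_id=3)        # shape: (2, 12, 20)
```

Note that in this sketch each batch must come from a single domain so the matching domain-specific LSTM can be selected; the shared LSTM receives gradients from every domain, which is what lets it capture the global patterns.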
