Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns remains a longstanding challenge.
Recent work on the problem of latent tree learning has made it possible to train neural networks that learn to both parse a sentence and use the resulting parse to interpret the sentence, all without exposure to ground-truth parse trees at training time.
The interpretation of spatial references is highly contextual, requiring joint inference over both language and the environment.
A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 "discourse atoms" that gives a succinct description of which other words co-occur with that word sense.
To tackle the sentiment classification problem in low-resource languages without adequate annotated data, we propose an Adversarial Deep Averaging Network (ADAN) to transfer the knowledge learned from labeled data on a resource-rich source language to low-resource languages where only unlabeled data exists.
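A minimal numpy sketch of the general idea behind a deep averaging encoder shared between a sentiment head and an adversarial language discriminator. All dimensions, weight matrices, and the gradient-reversal coefficient are illustrative assumptions, not details taken from the ADAN paper; only the forward pass is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only)
vocab, emb_dim, hid_dim, n_classes = 100, 16, 8, 2

E = rng.normal(size=(vocab, emb_dim))           # word embeddings
W_h = rng.normal(size=(emb_dim, hid_dim))       # shared feature extractor
W_sent = rng.normal(size=(hid_dim, n_classes))  # sentiment classifier head
W_lang = rng.normal(size=(hid_dim, 2))          # language discriminator head

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dan_forward(token_ids):
    """Deep averaging encoder: mean of embeddings -> nonlinearity,
    then two heads on the shared features."""
    avg = E[token_ids].mean(axis=0)
    h = np.tanh(avg @ W_h)                # shared, language-invariant features
    sent_probs = softmax(h @ W_sent)      # sentiment prediction
    lang_probs = softmax(h @ W_lang)      # source-vs-target language prediction
    # In adversarial training, the gradient of the language loss is
    # negated (scaled by -lambda) before flowing back into W_h and E,
    # pushing the shared features to be uninformative about language.
    return sent_probs, lang_probs

sent, lang = dan_forward([3, 17, 42, 7])
```

The key design point is that the sentiment head and the discriminator pull the shared encoder in opposite directions, so knowledge from the labeled source language transfers to unlabeled target languages.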
We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL).
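In the MIL view, a document is a bag of sentence-level instances, and the document-level polarity is aggregated from per-sentence predictions. The sketch below shows only the aggregation step with hypothetical attention weights; it is a generic MIL illustration, not the paper's model.

```python
import numpy as np

def mil_document_score(sentence_scores, sentence_weights):
    """Combine per-sentence (instance) sentiment scores into a
    document (bag) prediction via a normalised weighted average."""
    w = np.asarray(sentence_weights, dtype=float)
    w = w / w.sum()                  # normalise attention weights
    return float(np.dot(w, sentence_scores))

# Three sentences: negative, near-neutral, strongly positive.
scores = np.array([-0.8, 0.1, 0.9])
# Hypothetical attention weights (e.g. produced by a learned gate)
weights = [0.2, 0.3, 0.5]
doc = mil_document_score(scores, weights)  # 0.2*-0.8 + 0.3*0.1 + 0.5*0.9
```

Because the weights are learned rather than uniform, the model can let a few opinion-bearing sentences dominate the document label, which is what makes the MIL framing fine-grained.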
Word segmentation is a low-level NLP task that is non-trivial for a considerable number of languages.
A context-aware language model uses location, user, and/or domain metadata (context) to adapt its predictions.
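One simple way such adaptation can work is to add a context-dependent bias to the model's next-word scores before the softmax. The sketch below uses random vectors and made-up context names purely for illustration; it is one generic adaptation scheme, not the specific model described here.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = 10

base_logits = rng.normal(size=vocab)  # generic next-word scores

# Hypothetical per-context bias vectors, keyed by metadata
# (in practice these could come from location, user, or domain features)
context_bias = {
    "sports_domain": rng.normal(size=vocab),
    "tech_domain": rng.normal(size=vocab),
}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(context=None):
    """Adapt the base distribution with an additive context bias."""
    logits = base_logits.copy()
    if context is not None:
        logits += context_bias[context]
    return softmax(logits)

p_generic = predict()
p_sports = predict("sports_domain")
```

With the bias added, the same base model yields different next-word distributions depending on the metadata attached to the input.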