We show similar result patterns on data extracted from an online concierge service.
Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
We introduce a novel framework for image captioning that can produce natural language explicitly grounded in entities that object detectors find in the image.
We introduce Baseline: a library for reproducible deep learning research and fast model development for NLP.
The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance.
#11 best model for Relation Extraction on TACRED
Attention-based recurrent neural network models for joint intent detection and slot filling have achieved state-of-the-art performance, but they use independent attention weights for the two tasks.
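The contrast between independent and shared attention can be sketched in plain Python. This is an illustrative toy, not the paper's model: the encoder states, scores, and helper names below are invented, and the point is only that one set of attention weights can serve both the intent head and the slot-filling head instead of each head learning its own.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of raw attention scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(hidden_states, scores):
    # weighted sum of encoder hidden states under the attention weights
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return weights, context

# toy encoder states for a 3-token utterance, hidden size 2
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

# a single shared set of attention scores, reused by both the
# intent-classification head and the slot-filling head, rather
# than two independently learned sets
shared_scores = [0.1, 0.2, 0.7]
weights, context = attend(H, shared_scores)
```

In a real model the scores would be produced by a learned scoring function over the RNN states; sharing them simply means both task heads consume the same `weights`.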
Attention-based encoder-decoder neural network models have recently shown promising results in machine translation and speech recognition.
This allows a single dialogue system to easily support a large number of services and facilitates simple integration of new services without requiring additional training data.
We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting.
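The conditioning idea admits a minimal sketch: encode the conversation history and the external facts separately, then feed their concatenation to the decoder. Everything below is a hypothetical stand-in (a bag-of-words encoder and a toy vocabulary) for the learned encoders the abstract refers to.

```python
def encode(tokens, vocab):
    # toy bag-of-words encoder: one count per vocabulary entry
    vec = [0.0] * len(vocab)
    for t in tokens:
        if t in vocab:
            vec[vocab[t]] += 1.0
    return vec

# illustrative vocabulary and inputs, not from the paper
vocab = {"hi": 0, "pizza": 1, "open": 2, "late": 3}
history_vec = encode(["hi", "pizza"], vocab)        # conversation history
facts_vec = encode(["open", "late"], vocab)         # external "facts"

# the decoder is conditioned on both signals at once
decoder_input = history_vec + facts_vec
```

The design choice is that the response generator sees a single joint conditioning vector, so the same architecture works whether or not relevant facts are available.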
A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found that grounded, human-interpretable language emerges in the protocols the agents develop, without any human supervision.