For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset.
This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue systems.
A substantial thread of recent work on latent tree learning has attempted to develop neural network models with parse-valued latent variables and train them on non-parsing tasks, in the hope of having them discover interpretable tree structure.
This paper describes the enrichment of the WebNLG corpus (Gardent et al., 2017a, b), with the aim of further extending its usefulness as a resource for evaluating common NLG tasks, including Discourse Ordering, Lexicalization and Referring Expression Generation.
However, the confidence of neural networks is not a robust measure of model uncertainty.
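This point can be illustrated with a minimal sketch (assumed example, not from any of the cited papers): the softmax confidence of a classifier is driven by logit magnitudes, so a model can report near-certain confidence even on inputs far from its training distribution.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits from a classifier on an out-of-distribution input.
# One large logit is enough to produce near-certain softmax confidence,
# regardless of whether the input resembles the training data.
logits = np.array([8.0, 0.5, -1.2])
probs = softmax(logits)
confidence = probs.max()
print(round(confidence, 3))  # near 1.0 despite the input being arbitrary
```

The takeaway: high softmax confidence reflects the relative size of the logits, not a calibrated estimate of how likely the prediction is to be correct, which is why confidence alone is not a robust uncertainty measure.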
In this work, we investigate the task of textual response generation in a multimodal task-oriented dialogue system.