For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset.
This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue systems.
A substantial thread of recent work on latent tree learning has attempted to develop neural network models with parse-valued latent variables and train them on non-parsing tasks, in the hope of having them discover interpretable tree structure.
However, the confidence of neural networks is not a robust measure of model uncertainty.
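The "confidence" referred to here is usually the maximum softmax probability of the network's output. A minimal sketch (my own illustration, not code from any of the papers above) shows why this quantity is fragile: simply rescaling the logits inflates the reported confidence without any new evidence about the input.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def confidence(logits):
    # "Confidence" in the common sense: the maximum softmax probability.
    return max(softmax(logits))

# Same class ordering, but scaled logits yield much higher confidence,
# illustrating why max-softmax is a poor proxy for model uncertainty.
low = confidence([1.0, 0.5, 0.2])    # modest logit separation
high = confidence([10.0, 5.0, 2.0])  # identical ranking, larger scale
```

Because temperature, logit scale, and training regularisation all shift this number, it cannot be read as calibrated uncertainty on its own.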
This paper describes the enrichment of the WebNLG corpus, with the aim of further extending its usefulness as a resource for evaluating common NLG tasks, including Discourse Ordering, Lexicalization and Referring Expression Generation.
In this work, we investigate the task of textual response generation in a multimodal task-oriented dialogue system.