Considering the intrinsic consistency between the informativeness of regions and their probability of being the ground-truth class, we design a novel training paradigm that enables the Navigator to detect the most informative regions under guidance from the Teacher.
Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
Responses generated by neural conversational models tend to lack informativeness and diversity.
In this paper, we propose a neural recommendation approach with personalized attention that learns personalized representations of users and items from reviews.
Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization
We introduce a novel graph-based framework for abstractive meeting speech summarization that is fully unsupervised and does not rely on any annotations.
Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses.
Natural language generation (NLG) is a critical component of spoken dialogue systems, and it has a significant impact on both usability and perceived quality.
Many popular form factors of digital assistants---such as Amazon Echo, Apple Homepod, or Google Home---enable the user to hold a conversation with these systems based only on the speech modality.
Our results show that networks trained to regress to the ground-truth targets for labeled data, while simultaneously learning to rank unlabeled data, obtain significantly better, state-of-the-art results for both image quality assessment (IQA) and crowd counting.
We make two theoretical contributions to disentanglement learning by (a) defining precise semantics of disentangled representations, and (b) establishing robust metrics for evaluation.