Representative selection (RS) is the problem of finding a small subset of exemplars from an unlabeled dataset; it has numerous applications in summarization, active learning, data compression, and many other domains.
Task transfer, the transfer of knowledge from related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models.
Interactive recommender systems (RSs) allow users to express intents, preferences, and contexts in a rich fashion, often using natural language.
Through a user preference study, we demonstrate that users prefer the oracle behavior of our proposed system, which provides responses based on presupposition failures, over the oracle behavior of existing QA systems.
Pretrained language models (LMs) have been shown to possess significant linguistic, commonsense, and factual knowledge.