Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined.
In this work, we propose a reinforcement learning model that clarifies ambiguous questions by suggesting refinements of the original query.
We present a novel two-stage distillation method for ranking problems that trains a smaller student model while benefiting from the teacher model's superior performance, providing better control over inference latency and computational cost.
Utterance classification is a key component in many conversational systems.
We address the scarcity of annotations for stance classification in a new domain by adapting out-of-domain classifiers through domain adaptation.
The long-term teacher draws on snapshots from several epochs ago to provide steadfast guidance and to guarantee teacher-student differences, while the short-term one provides more up-to-date cues that enable higher-quality updates.
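The snapshot bookkeeping behind such a dual-teacher scheme could be sketched as follows. This is a minimal illustration, not the paper's implementation; the class name `SnapshotTeachers` and the hyperparameter `LONG_TERM_LAG` (how many epochs back the long-term teacher looks) are assumptions.

```python
from collections import deque

# Assumed hyperparameter: the long-term teacher lags this many epochs
# behind the student; the short-term teacher is only one epoch behind.
LONG_TERM_LAG = 5


class SnapshotTeachers:
    """Keeps per-epoch copies of the student's weights and exposes two
    teachers: a long-term one (LONG_TERM_LAG epochs old) and a
    short-term one (the previous epoch). Hypothetical sketch."""

    def __init__(self):
        # deque with maxlen discards the oldest snapshot automatically.
        self.snapshots = deque(maxlen=LONG_TERM_LAG)

    def end_of_epoch(self, student_weights):
        # Store a copy of the student's weights at each epoch boundary.
        self.snapshots.append(dict(student_weights))

    def long_term(self):
        # Oldest retained snapshot: stable guidance that is guaranteed
        # to differ from the current student once enough epochs pass.
        return self.snapshots[0] if self.snapshots else None

    def short_term(self):
        # Most recent snapshot: up-to-date cues for the current update.
        return self.snapshots[-1] if self.snapshots else None
```

In use, the student would be distilled against both teachers each epoch (e.g., by averaging two KL-divergence terms); here the weights are stand-in dictionaries, since the original sentence does not specify the model.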
These observations suggest that our proposed method can find a trade-off that jointly optimizes channel resource usage and customer satisfaction.
Directly answering questions that involve multiple entities and relations is challenging for text-based QA.