With recent improvements in natural language generation (NLG) models across applications, it has become imperative to be able to identify and evaluate whether NLG output shares only verifiable information about the external world.
Compared to standard retrieval tasks, passage retrieval for conversational question answering (CQA) poses new challenges in understanding the current user question, as each question needs to be interpreted within the dialogue context.
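To make the challenge concrete, here is a hypothetical sketch of why the dialogue context matters for conversational passage retrieval: a follow-up question containing a pronoun is unanswerable on its own, but appending recent turns to the retrieval query recovers the referent. The dialogue, passages, and term-overlap ranker are all illustrative, not any specific system.

```python
import re

def build_query(history, question, window=2):
    """Prepend the last `window` dialogue turns so pronouns and
    ellipses in the current question get their context."""
    return " ".join(history[-window:] + [question])

def rank_passages(query, passages):
    """Rank passages by simple word overlap with the query."""
    q_terms = set(re.findall(r"\w+", query.lower()))
    scored = [(len(q_terms & set(re.findall(r"\w+", p.lower()))), p)
              for p in passages]
    return [p for _, p in sorted(scored, key=lambda x: -x[0])]

history = ["Who wrote Hamlet?", "William Shakespeare wrote Hamlet."]
question = "When was he born?"  # "he" is only resolvable in context
passages = [
    "William Shakespeare was born in 1564 in Stratford-upon-Avon.",
    "Hamlet is a tragedy set in Denmark.",
]
top = rank_passages(build_query(history, question), passages)[0]
```

With the history included, the passage about Shakespeare's birth outranks the distractor; the bare question "When was he born?" shares almost no terms with it.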
At training time, additional inputs based on these evaluation measures are given to the dialogue model.
We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model.
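The extraction loop described above can be sketched in a few lines: the adversary queries the black-box victim, treats the victim's outputs as pseudo-labels, and trains a local copy on the resulting pairs. The toy sentiment victim, the query set, and the per-word-vote "student" below are all illustrative stand-ins, not the paper's actual setup.

```python
from collections import Counter, defaultdict

def victim_predict(text):
    """Black-box victim: the attacker sees only its outputs."""
    return "pos" if "good" in text or "great" in text else "neg"

# 1. The adversary issues its own queries (task-relevant or random text).
queries = ["good movie", "bad plot", "great acting", "dull and bad"]

# 2. Victim outputs become pseudo-labels for the extracted model.
pseudo_labeled = [(q, victim_predict(q)) for q in queries]

# 3. "Train" a local copy -- here, trivially, per-word label counts.
word_votes = defaultdict(Counter)
for text, label in pseudo_labeled:
    for w in text.split():
        word_votes[w][label] += 1

def extracted_predict(text):
    """The adversary's local copy, built only from query access."""
    votes = Counter()
    for w in text.split():
        votes.update(word_votes[w])
    return votes.most_common(1)[0][0] if votes else "neg"
```

On inputs covered by the pseudo-labeled queries, the extracted copy agrees with the victim despite never seeing its parameters.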
Neural network models are usually criticized for being opaque; the attention layer can provide insights into the model's reasoning behind its predictions.
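As an illustrative sketch (not any specific model): an attention layer assigns scores to input tokens, and the softmax-normalized weights can be read off as an indication of which tokens drove the prediction. The tokens and scores below are made up for demonstration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy attention scores: the model assigns high relevance to "excellent".
tokens = ["the", "film", "was", "excellent"]
scores = [0.1, 0.3, 0.1, 2.0]

weights = softmax(scores)  # attention distribution over tokens
# Sort tokens by attention weight to produce a simple "explanation".
explanation = sorted(zip(tokens, weights), key=lambda t: -t[1])
```

Inspecting `explanation` surfaces "excellent" as the most attended token, which is the kind of signal such analyses treat as a window into the model's reasoning.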
An earlier study of a collaborative chat intervention in a Massive Open Online Course (MOOC) identified negative effects on attrition stemming from a requirement for students to be matched with exactly one partner prior to beginning the activity.
We present a solution to the problem of paraphrase identification of questions.
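For reference, a minimal baseline for question paraphrase identification (not the method this work proposes) is word-set overlap with a decision threshold; the threshold and example pairs below are illustrative.

```python
import re

def tokens(q):
    """Lowercased word set for a question."""
    return set(re.findall(r"\w+", q.lower()))

def is_paraphrase(q1, q2, threshold=0.5):
    """Jaccard similarity over word sets, thresholded into a
    paraphrase / non-paraphrase decision."""
    a, b = tokens(q1), tokens(q2)
    return len(a & b) / len(a | b) >= threshold

pair_pos = ("How do I learn Python?", "How can I learn Python?")
pair_neg = ("How do I learn Python?", "What is the capital of France?")
```

Such surface-overlap baselines fail on paraphrases with little lexical overlap, which is what motivates learned models on benchmarks like Quora Question Pairs.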