We propose a simple and general method to regularize the fine-tuning of Transformer-based encoders for text classification tasks.
Recent advancements in open-domain question answering (ODQA), i.e., finding answers in large open-domain corpora such as Wikipedia, have led to human-level performance on many datasets.
Second, even with large training sets, intent detection models can encounter a different distribution of test data when deployed in the real world, leading to poor accuracy.
We propose a simple method to align multilingual contextual embeddings as a post-pretraining step for improved zero-shot cross-lingual transferability of the pretrained models.
Much progress has been made on question answering (QA) in recent years, but the distinct problem of QA over narrative book stories has not been explored in depth.
In multi-party chat, it is common for multiple conversations to occur concurrently, leading to intermingled conversation threads in chat logs.
Out-of-domain (OOD) detection for low-resource text classification is a realistic but understudied task.
Most approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph.
We study few-shot learning in natural language domains.
We propose a new method to measure task similarities with cross-task transfer performance matrix for the deep learning scenario.
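A minimal sketch of the idea of reading task similarity off a cross-task transfer performance matrix. The matrix values, the symmetrization, and the `similarity` helper are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch (not the paper's method): derive a task-similarity
# score from a cross-task transfer performance matrix.

# transfer[i][j]: accuracy on target task j after first training on
# source task i (diagonal entries = in-task accuracy). Values are made up.
transfer = [
    [0.90, 0.72, 0.40],
    [0.70, 0.88, 0.45],
    [0.38, 0.42, 0.85],
]

def similarity(i, j):
    """Symmetrized relative transfer gain between tasks i and j.

    Measures how much of each task's in-task accuracy is recovered
    when transferring from the other task, averaged both ways.
    """
    return 0.5 * (transfer[i][j] / transfer[j][j]
                  + transfer[j][i] / transfer[i][i])

print(round(similarity(0, 1), 3))  # tasks 0 and 1 transfer well to each other
print(round(similarity(0, 2), 3))  # tasks 0 and 2 transfer poorly
```

Under this toy matrix, tasks 0 and 1 come out more similar than tasks 0 and 2, matching the intuition that mutually transferable tasks are related.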
Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence.
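To make the chunk-labeling setup concrete, here is a small illustrative example using the standard BIO encoding common in chunking and slot filling. The sentence, slot names, and the `extract_chunks` helper are assumptions for illustration, not taken from the paper.

```python
# Illustrative BIO-encoded slot filling example (assumed, not from the paper):
# B- marks the beginning of a chunk, I- its continuation, O is outside any chunk.
tokens = ["book", "a", "flight", "to", "New", "York", "tomorrow"]
tags   = ["O",    "O", "O",      "O",  "B-dest", "I-dest", "B-date"]

def extract_chunks(tokens, tags):
    """Group BIO-tagged tokens into (label, text) chunks."""
    chunks, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):            # a new chunk begins
            if current:
                chunks.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)          # chunk continues
        else:                               # O tag or inconsistent I- tag
            if current:
                chunks.append(current)
            current = None
    if current:
        chunks.append(current)
    return [(label, " ".join(toks)) for label, toks in chunks]

print(extract_chunks(tokens, tags))
# → [('dest', 'New York'), ('date', 'tomorrow')]
```

The model's job in such tasks is to predict the tag sequence; recovering the labeled chunks from it is then mechanical, as above.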