Search Results for author: Xinya Du

Found 12 papers, 7 papers with code

QA-Driven Zero-shot Slot Filling with Weak Supervision Pretraining

no code implementations ACL 2021 Xinya Du, Luheng He, Qi Li, Dian Yu, Panupong Pasupat, Yuan Zhang

We introduce QA-driven slot filling (QASF), which extracts slot-filler spans from utterances with a span-based QA model.

Zero-shot Slot Filling
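A minimal sketch of the QASF idea, assuming the Hugging Face transformers question-answering pipeline as the span-based QA model; the slot-to-question templates below are invented for illustration and are not the paper's:

```python
from transformers import pipeline

# Off-the-shelf extractive QA model standing in for the paper's pretrained span model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

utterance = "Book a table for four at an Italian restaurant in Rome tonight."

# Hypothetical slot-to-question templates; the paper derives its own.
slot_questions = {
    "cuisine": "What type of food is mentioned?",
    "location": "Where is the restaurant?",
    "party_size": "How many people is the booking for?",
}

for slot, question in slot_questions.items():
    pred = qa(question=question, context=utterance)
    # Keep the span only if the model is confident enough (threshold is arbitrary).
    if pred["score"] > 0.1:
        print(f"{slot}: {pred['answer']!r} (score={pred['score']:.2f})")
```

Because the slot questions are natural language, unseen slot types can be queried without retraining, which is what makes the zero-shot setting possible.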

Template Filling with Generative Transformers

1 code implementation NAACL 2021 Xinya Du, Alexander Rush, Claire Cardie

Template filling is generally tackled by a pipeline of two separate supervised systems: one for role-filler extraction and another for template/event recognition.
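To make that two-stage setup concrete, here is a toy pipeline with stand-in components (simple heuristics, nothing like the paper's generative model): stage one proposes role-filler spans, stage two assigns the event/template type.

```python
import re

def extract_role_fillers(doc: str) -> list[str]:
    # Stand-in role-filler extractor: capitalized spans as candidates.
    return re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*", doc)

def recognize_template(doc: str) -> str:
    # Stand-in template/event recognizer: keyword lookup.
    return "Bombing" if "bombed" in doc else "Other"

doc = "Guerrillas bombed the Central Bank in Lima on Monday."
print(recognize_template(doc), extract_role_fillers(doc))
# -> Bombing ['Guerrillas', 'Central Bank', 'Lima', 'Monday']
```

The paper's point is to replace this two-system pipeline with a single generative transformer; the sketch only illustrates the baseline decomposition being replaced.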

Few-shot Intent Classification and Slot Filling with Retrieved Examples

no code implementations NAACL 2021 Dian Yu, Luheng He, Yuan Zhang, Xinya Du, Panupong Pasupat, Qi Li

Few-shot learning arises in important practical scenarios, such as when a natural language understanding system needs to learn new semantic labels for an emerging, resource-scarce domain.

Few-Shot Learning General Classification +3

Event Extraction by Answering (Almost) Natural Questions

1 code implementation EMNLP 2020 Xinya Du, Claire Cardie

The problem of event extraction requires detecting the event trigger and extracting its corresponding arguments.

Event Extraction Question Answering +1
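A rough sketch of the question-answering formulation, again assuming the transformers QA pipeline; the role questions below are illustrative stand-ins, not the paper's actual templates:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

sentence = "Acme Corp acquired BetaSoft for $2 billion on Tuesday."

# Illustrative role questions for an Acquisition-style event.
role_questions = {
    "buyer": "Who acquired something?",
    "target": "What was acquired?",
    "price": "How much was paid?",
    "time": "When did the acquisition happen?",
}

for role, question in role_questions.items():
    pred = qa(question=question, context=sentence)
    print(f"{role}: {pred['answer']}")
```

Casting argument extraction as QA lets one model cover many event types, since each role is just another natural-language question.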

Be Consistent! Improving Procedural Text Comprehension using Label Consistency

1 code implementation NAACL 2019 Xinya Du, Bhavana Dalvi Mishra, Niket Tandon, Antoine Bosselut, Wen-tau Yih, Peter Clark, Claire Cardie

Our goal is procedural text comprehension, namely tracking how the properties of entities (e.g., their location) change with time given a procedural text (e.g., a paragraph about photosynthesis, a recipe).

Reading Comprehension
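The task itself can be pictured as state tracking. A toy illustration (not the paper's model), assuming the steps have already been parsed into (entity, action, destination) updates:

```python
# Toy state tracker: apply each step's update and report where the entity is.
steps = [
    ("water", "move", "roots"),
    ("water", "move", "leaves"),
    ("water", "destroy", None),  # converted into sugar during photosynthesis
]

location = {}
for entity, action, dest in steps:
    if action == "move":
        location[entity] = dest
    elif action == "destroy":
        location.pop(entity, None)
    print(f"after this step, {entity} is at: {location.get(entity, 'nonexistent')}")
```

The paper's label-consistency idea constrains these per-step predictions so that the same process described in different paragraphs yields consistent state sequences.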

Harvesting Paragraph-Level Question-Answer Pairs from Wikipedia

1 code implementation ACL 2018 Xinya Du, Claire Cardie

We study the task of generating, from Wikipedia articles, question-answer pairs that cover content beyond a single sentence.

Question Generation
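The harvesting loop can be sketched as: pick candidate answer spans from a paragraph, then generate a question for each span. The generator below is a trivial template stand-in for the paper's trained neural question generator:

```python
def generate_question(answer_span: str, paragraph: str) -> str:
    # Stand-in for the paper's neural question generation model.
    return f"What does the paragraph say about {answer_span}?"

paragraph = (
    "Marie Curie conducted pioneering research on radioactivity. "
    "She was the first woman to win a Nobel Prize."
)
# Candidate answer spans; in the paper these are also selected by a learned model.
candidates = ["Marie Curie", "radioactivity", "a Nobel Prize"]

qa_pairs = [(generate_question(a, paragraph), a) for a in candidates]
for question, answer in qa_pairs:
    print(question, "->", answer)
```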

Identifying Where to Focus in Reading Comprehension for Neural Question Generation

no code implementations EMNLP 2017 Xinya Du, Claire Cardie

A first step in the task of automatically generating questions for testing reading comprehension is to identify question-worthy sentences, i.e., sentences in a text passage that humans find worthwhile to ask questions about.

Dependency Parsing Machine Translation +6
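The question-worthy step is a binary decision per sentence. Below is a toy classifier over TF-IDF features with invented training examples; the paper uses a neural model with passage-level context, so this only illustrates the task framing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training sentences: 1 = question-worthy, 0 = not.
train_sents = [
    "Photosynthesis converts light energy into chemical energy.",
    "The mitochondria produce most of the cell's ATP.",
    "See the figure below.",
    "This section is organized as follows.",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(train_sents), labels)

passage = ["Chlorophyll absorbs light energy.", "See the following figure."]
print(clf.predict(vectorizer.transform(passage)))  # one 0/1 prediction per sentence
```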
