Search Results for author: Qiang Ning

Found 36 papers, 12 papers with code

ESTER: A Machine Reading Comprehension Dataset for Reasoning about Event Semantic Relations

no code implementations EMNLP 2021 Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng

While these tasks partially evaluate machines' ability to understand narratives, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.

Machine Reading Comprehension · Natural Language Queries +1

A Meta-framework for Spatiotemporal Quantity Extraction from Text

no code implementations ACL 2022 Qiang Ning, Ben Zhou, Hao Wu, Haoruo Peng, Chuchu Fan, Matt Gardner

News events are often associated with quantities (e.g., the number of COVID-19 patients or the number of arrests in a protest), and it is often important to extract their type, time, and location from unstructured text in order to analyze these quantity events.

From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification

no code implementations 10 Mar 2024 Fei Wang, Chao Shang, Sarthak Jain, Shuai Wang, Qiang Ning, Bonan Min, Vittorio Castelli, Yassine Benajiba, Dan Roth

We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.

Abstractive Text Summarization · Entity Typing +2
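A minimal, hypothetical sketch of the general idea of automatic constraint verification (not the ACT implementation): a declarative constraint is turned into a programmatic verifier whose pass/fail outcome can act as a supervision signal for a model output. The constraint type and function names below are illustrative assumptions.

```python
# Toy illustration (not the ACT implementation): turning a declarative
# constraint into an automatic supervision signal for a model output.
# The "must mention these entities" constraint is a hypothetical example.

def must_mention(entities):
    """Constraint: every listed entity must appear in the output."""
    def verify(output: str) -> float:
        # Reward 1.0 only if all required entities are mentioned.
        return float(all(e.lower() in output.lower() for e in entities))
    return verify

verifier = must_mention(["COVID-19", "2021"])
print(verifier("Cases of COVID-19 rose sharply in 2021."))  # 1.0
print(verifier("Cases rose sharply last year."))             # 0.0
```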

PInKS: Preconditioned Commonsense Inference with Minimal Supervision

1 code implementation 16 Jun 2022 Ehsan Qasemi, Piyush Khanna, Qiang Ning, Muhao Chen

Reasoning with preconditions such as "glass can be used for drinking water unless the glass is shattered" remains an open problem for language models.

Informativeness

Answer Consolidation: Formulation and Benchmarking

1 code implementation NAACL 2022 Wenxuan Zhou, Qiang Ning, Heba Elfardy, Kevin Small, Muhao Chen

Current question answering (QA) systems primarily consider the single-answer scenario, where each question is assumed to be paired with one correct answer.

Benchmarking · Question Answering

Event-Centric Natural Language Processing

no code implementations ACL 2021 Muhao Chen, Hongming Zhang, Qiang Ning, Manling Li, Heng Ji, Kathleen McKeown, Dan Roth

This tutorial targets researchers and practitioners who are interested in AI technologies that help machines understand natural language text, particularly real-world events described in the text.

SPARTQA: A Textual Question Answering Benchmark for Spatial Reasoning

2 code implementations NAACL 2021 Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi

This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LMs).

Question Answering

Event Time Extraction and Propagation via Graph Attention Networks

1 code implementation NAACL 2021 Haoyang Wen, Yanru Qu, Heng Ji, Qiang Ning, Jiawei Han, Avi Sil, Hanghang Tong, Dan Roth

Grounding events into a precise timeline is important for natural language understanding but has received limited attention in recent work.

Graph Attention · Natural Language Understanding +3
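For background on the building block named in the title, here is a minimal sketch of a single graph-attention layer (in the style of Velickovic et al., 2018) with toy shapes; it is not the paper's event-time extraction and propagation model.

```python
# Generic graph-attention layer sketch, shown only to illustrate the
# mechanism named in the title; not the model from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d_in, d_out = 4, 8, 6
H = rng.normal(size=(n_nodes, d_in))          # toy node features
A = np.array([[1, 1, 0, 0],                   # adjacency with self-loops
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])
W = rng.normal(size=(d_in, d_out))
a = rng.normal(size=(2 * d_out,))

Z = H @ W                                     # linear projection
# Attention logits e_ij = LeakyReLU(a^T [z_i || z_j]) for connected pairs
logits = np.full((n_nodes, n_nodes), -np.inf)
for i in range(n_nodes):
    for j in range(n_nodes):
        if A[i, j]:
            e = a @ np.concatenate([Z[i], Z[j]])
            logits[i, j] = np.where(e > 0, e, 0.2 * e)   # LeakyReLU
alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over neighbors
H_out = alpha @ Z                             # attention-weighted aggregation
print(H_out.shape)                            # (4, 6)
```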

ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning

1 code implementation 16 Apr 2021 Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng

While these tasks partially evaluate machines' ability to understand narratives, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.

Machine Reading Comprehension · Natural Language Queries +2

SpartQA: A Textual Question Answering Benchmark for Spatial Reasoning

1 code implementation 12 Apr 2021 Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi

This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LMs).

Question Answering

Temporal Reasoning on Implicit Events from Distant Supervision

no code implementations NAACL 2021 Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, Dan Roth

We propose TRACIE, a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events -- events that are not mentioned explicitly in natural language text but can be inferred from it.

Natural Language Inference

Evaluating NLP Models via Contrast Sets

no code implementations 1 Oct 2020 Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou

Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.

Reading Comprehension · Sentiment Analysis
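A sketch of how contrast-set evaluation can expose such decision rules: each original test example is grouped with minimally perturbed variants, and a model scores on a group only if it is correct on every member (a consistency-style metric). The data and classifier below are hypothetical; only the grouping idea comes from the paper.

```python
# Consistency-style scoring over contrast sets (hypothetical data).
# `model_predict` stands in for any classifier under evaluation.

def contrast_consistency(contrast_sets, model_predict):
    correct_sets = 0
    for examples in contrast_sets:          # each set: [(text, gold_label), ...]
        if all(model_predict(text) == gold for text, gold in examples):
            correct_sets += 1
    return correct_sets / len(contrast_sets)

sets = [
    [("the movie was great", "pos"), ("the movie was not great", "neg")],
    [("service was slow", "neg"), ("service was anything but slow", "pos")],
]
# A brittle keyword rule does well on the originals but fails the second set.
rule = lambda t: "neg" if ("not" in t or "slow" in t) else "pos"
print(contrast_consistency(sets, rule))     # 0.5
```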

Learnability with Indirect Supervision Signals

no code implementations NeurIPS 2020 Kaifu Wang, Qiang Ning, Dan Roth

Learning from indirect supervision signals is important in real-world AI applications where gold labels are often missing or too costly.

Generalization Bounds · Multi-class Classification

Foreseeing the Benefits of Incidental Supervision

2 code implementations EMNLP 2021 Hangfeng He, Mingyuan Zhang, Qiang Ning, Dan Roth

Real-world applications often call for improving models by leveraging a range of cheap incidental supervision signals.

Informativeness · Learning Theory +4

TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions

no code implementations EMNLP 2020 Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, Dan Roth

A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated.

Machine Reading Comprehension · Question Answering

An Improved Neural Baseline for Temporal Relation Extraction

no code implementations IJCNLP 2019 Qiang Ning, Sanjay Subramanian, Dan Roth

Determining temporal relations (e.g., before or after) between events has been a challenging natural language understanding task, partly due to the difficulty of generating large amounts of high-quality training data.

Common Sense Reasoning · Natural Language Understanding +3

CogCompTime: A Tool for Understanding Time in Natural Language Text

no code implementations 12 Jun 2019 Qiang Ning, Ben Zhou, Zhili Feng, Haoruo Peng, Dan Roth

Automatic extraction of temporal information in text is an important component of natural language understanding.

Natural Language Understanding

Partial Or Complete, That's The Question

no code implementations NAACL 2019 Qiang Ning, Hangfeng He, Chuchu Fan, Dan Roth

For many structured learning tasks, the data annotation process is complex and costly.

Joint Reasoning for Temporal and Causal Relations

no code implementations ACL 2018 Qiang Ning, Zhili Feng, Hao Wu, Dan Roth

Understanding temporal and causal relations between events is a fundamental natural language understanding task.

Natural Language Understanding

Exploiting Partially Annotated Data in Temporal Relation Extraction

no code implementations SEMEVAL 2018 Qiang Ning, Zhongzhi Yu, Chuchu Fan, Dan Roth

As a result, only a small number of documents are typically annotated, limiting the coverage of various lexical/semantic phenomena.

Relation · Temporal Relation Extraction

A Multi-Axis Annotation Scheme for Event Temporal Relations

no code implementations ACL 2018 Qiang Ning, Hao Wu, Dan Roth

Existing temporal relation (TempRel) annotation schemes often have low inter-annotator agreements (IAA) even between experts, suggesting that the current annotation task needs a better definition.
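For background, inter-annotator agreement is commonly quantified with chance-corrected measures such as Cohen's kappa; the minimal sketch below uses hypothetical TempRel labels and is not the paper's exact agreement analysis.

```python
# Cohen's kappa: chance-corrected agreement between two annotators.
# Labels below are hypothetical temporal-relation annotations.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed agreement
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[l] * freq_b[l] for l in freq_a) / n**2     # chance agreement
    return (p_o - p_e) / (1 - p_e)

ann1 = ["before", "before", "after", "vague", "before"]
ann2 = ["before", "after",  "after", "vague", "vague"]
print(round(cohens_kappa(ann1, ann2), 3))   # 0.444
```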

Exploiting Partially Annotated Data for Temporal Relation Extraction

no code implementations 18 Apr 2018 Qiang Ning, Zhongzhi Yu, Chuchu Fan, Dan Roth

As a result, only a small number of documents are typically annotated, limiting the coverage of various lexical/semantic phenomena.

Relation · Temporal Relation Extraction

Improving Temporal Relation Extraction with a Globally Acquired Statistical Resource

no code implementations NAACL 2018 Qiang Ning, Hao Wu, Haoruo Peng, Dan Roth

We argue that this task would gain from the availability of a resource that provides prior knowledge in the form of the temporal order that events usually follow.

Relation · Temporal Relation Extraction
