no code implementations • EMNLP 2021 • Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng
While these tasks partially evaluate machines' ability to understand narratives, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.
no code implementations • ACL 2022 • Qiang Ning, Ben Zhou, Hao Wu, Haoruo Peng, Chuchu Fan, Matt Gardner
News events are often associated with quantities (e.g., the number of COVID-19 patients or the number of arrests in a protest), and it is often important to extract their type, time, and location from unstructured text in order to analyze these quantity events.
no code implementations • 22 Oct 2024 • Yang Zhenyuan, Liu Zhengliang, Zhang Jing, Lu Cen, Tai Jiaxin, Zhong Tianyang, Li Yiwei, Zhao Siyan, Yao Teng, Liu Qing, Yang Jinlin, Liu Qixin, Li Zhaowei, Wang Kexin, Ma Longjun, Zhu Dajiang, Ren Yudan, Ge Bao, Zhang Wei, Qiang Ning, Zhang Tuo, Liu Tianming
This study examines the capabilities of advanced Large Language Models (LLMs), particularly the o1 model, in the context of literary analysis.
no code implementations • 16 Oct 2024 • Siyi Liu, Qiang Ning, Kishaloy Halder, Wei Xiao, Zheng Qi, Phu Mon Htut, Yi Zhang, Neha Anna John, Bonan Min, Yassine Benajiba, Dan Roth
Open domain question answering systems frequently rely on information retrieved from large collections of text (such as the Web) to answer questions.
no code implementations • 10 Mar 2024 • Fei Wang, Chao Shang, Sarthak Jain, Shuai Wang, Qiang Ning, Bonan Min, Vittorio Castelli, Yassine Benajiba, Dan Roth
We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.
1 code implementation • 16 Jun 2022 • Ehsan Qasemi, Piyush Khanna, Qiang Ning, Muhao Chen
Reasoning with preconditions such as "glass can be used for drinking water unless the glass is shattered" remains an open problem for language models.
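As a purely illustrative sketch (the field names and labels below are assumptions, not the paper's actual schema), reasoning with preconditions can be framed as deciding whether a stated condition enables or disables a commonsense assertion:

```python
# Illustrative only: one possible representation of preconditioned
# commonsense instances; field names and labels are assumptions,
# not the dataset's actual schema.
from dataclasses import dataclass

@dataclass
class PreconditionInstance:
    statement: str     # a commonsense assertion
    precondition: str  # a condition that affects the assertion
    label: str         # "enables" or "disables"

example = PreconditionInstance(
    statement="A glass can be used for drinking water.",
    precondition="The glass is shattered.",
    label="disables",
)
print(example.label)
```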
1 code implementation • NAACL 2022 • Wenxuan Zhou, Qiang Ning, Heba Elfardy, Kevin Small, Muhao Chen
Current question answering (QA) systems primarily consider the single-answer scenario, where each question is assumed to be paired with one correct answer.
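For intuition only, and not as the paper's evaluation protocol, a multi-answer setting can be scored by comparing a predicted answer set against a gold answer set rather than against a single gold string; the function below is a minimal set-based F1 sketch:

```python
# Minimal sketch (not the paper's protocol): scoring a QA system when a
# question can have several correct answers, by comparing the predicted
# answer set against the gold answer set.

def multi_answer_f1(predicted: set[str], gold: set[str]) -> float:
    """Set-based F1 between predicted and gold answer strings."""
    if not predicted and not gold:
        return 1.0
    if not predicted or not gold:
        return 0.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: a question with two acceptable answers, one of which is predicted.
print(multi_answer_f1({"Paris"}, {"Paris", "Paris, France"}))  # ~0.67
```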
no code implementations • ACL 2021 • Muhao Chen, Hongming Zhang, Qiang Ning, Manling Li, Heng Ji, Kathleen McKeown, Dan Roth
This tutorial targets researchers and practitioners who are interested in AI technologies that help machines understand natural language text, particularly real-world events described in the text.
1 code implementation • NAACL 2021 • Haoyang Wen, Yanru Qu, Heng Ji, Qiang Ning, Jiawei Han, Avi Sil, Hanghang Tong, Dan Roth
Grounding events into a precise timeline is important for natural language understanding but has received limited attention in recent work.
2 code implementations • NAACL 2021 • Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi
This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LMs).
1 code implementation • Findings (NAACL) 2022 • Shuaicheng Zhang, Lifu Huang, Qiang Ning
Extracting temporal relations (e.g., before, after, and simultaneous) among events is crucial to natural language understanding.
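A minimal sketch of how pairwise temporal relations are often represented (the label set and data structures here are illustrative assumptions, not this paper's schema):

```python
# Illustrative sketch only: events as text spans and pairwise temporal
# relations between them; the label inventory is an assumption.
from dataclasses import dataclass

LABELS = ("BEFORE", "AFTER", "SIMULTANEOUS", "VAGUE")

@dataclass
class Event:
    doc_id: str
    trigger: str   # surface form of the event trigger
    start: int     # character offset in the document
    end: int

@dataclass
class TempRel:
    source: Event
    target: Event
    label: str     # one of LABELS

e1 = Event("doc1", "earthquake", 10, 20)
e2 = Event("doc1", "evacuated", 45, 54)
rel = TempRel(e1, e2, "BEFORE")  # the earthquake happened before the evacuation
print(rel.label)
```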
1 code implementation • 16 Apr 2021 • Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng
While these tasks partially evaluate machines' ability to understand narratives, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.
1 code implementation • 12 Apr 2021 • Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, Parisa Kordjamshidi
This paper proposes a question-answering (QA) benchmark for spatial reasoning on natural language text which contains more realistic spatial phenomena not covered by prior work and is challenging for state-of-the-art language models (LMs).
no code implementations • 1 Jan 2021 • Hangfeng He, Mingyuan Zhang, Qiang Ning, Dan Roth
Real-world applications often require making use of a range of incidental supervision signals.
no code implementations • NAACL 2021 • Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, Dan Roth
We propose TRACIE, a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events: events that are not mentioned explicitly in natural language text but can be inferred from it.
no code implementations • EMNLP 2020 • Qiang Ning, Hao Wu, Pradeep Dasigi, Dheeru Dua, Matt Gardner, Robert L. Logan IV, Ana Marasović, Zhen Nie
High-quality and large-scale data are key to the success of AI systems.
no code implementations • 1 Oct 2020 • Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou
Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.
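The sketch below is a hedged illustration of contrast-set-style evaluation, not this paper's exact metric: each original instance is grouped with its minimal perturbations, and a group counts as correct only if the model is right on every instance in it (`model_predict` is a hypothetical stand-in for a real model):

```python
# Hedged sketch of contrast-set style evaluation: credit a group only if
# the model answers every instance in it correctly.

def contrast_consistency(groups, model_predict):
    """groups: list of groups; each group is a list of (text, gold_label)."""
    if not groups:
        return 0.0
    correct = sum(
        all(model_predict(text) == gold for text, gold in group)
        for group in groups
    )
    return correct / len(groups)

# Example with a toy "model" that always predicts "positive".
toy_groups = [
    [("great movie", "positive"), ("terrible movie", "negative")],
    [("loved it", "positive")],
]
print(contrast_consistency(toy_groups, lambda text: "positive"))  # 0.5
```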
no code implementations • NeurIPS 2020 • Kaifu Wang, Qiang Ning, Dan Roth
Learning from indirect supervision signals is important in real-world AI applications, where gold labels are often missing or too costly to obtain.
2 code implementations • EMNLP 2021 • Hangfeng He, Mingyuan Zhang, Qiang Ning, Dan Roth
Real-world applications often require improving models by leveraging a range of cheap incidental supervision signals.
no code implementations • ACL 2020 • Ben Zhou, Qiang Ning, Daniel Khashabi, Dan Roth
Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language.
no code implementations • EMNLP 2020 • Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, Dan Roth
A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated.
Ranked #2 on Question Answering on Torque
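For illustration only (this is a made-up toy instance, not actual TORQUE data), a temporal reading-comprehension item pairs a passage with a temporal question whose answers are event mentions in the passage:

```python
# Toy illustration: a temporal reading-comprehension instance whose
# answers are event mentions drawn from the passage, not free-form text.
instance = {
    "passage": "Heavy rain flooded the town before rescuers arrived.",
    "question": "What happened before the rescuers arrived?",
    "answers": ["flooded"],
}
print(instance["answers"])
```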
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou
Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.
no code implementations • CoNLL 2019 • Haoruo Peng, Qiang Ning, Dan Roth
Story understanding requires developing expectations of what events come next in text.
no code implementations • IJCNLP 2019 • Ben Zhou, Daniel Khashabi, Qiang Ning, Dan Roth
Understanding time is crucial for understanding events expressed in natural language.
1 code implementation • 6 Sep 2019 • Ben Zhou, Daniel Khashabi, Qiang Ning, Dan Roth
Understanding time is crucial for understanding events expressed in natural language.
no code implementations • IJCNLP 2019 • Rujun Han, Qiang Ning, Nanyun Peng
We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction.
Tasks: Event Extraction, Joint Event and Temporal Relation Extraction, +4 more
no code implementations • IJCNLP 2019 • Qiang Ning, Sanjay Subramanian, Dan Roth
Determining temporal relations (e.g., before or after) between events has been a challenging natural language understanding task, partly due to the difficulty of generating large amounts of high-quality training data.
1 code implementation • ACL 2020 • Hangfeng He, Qiang Ning, Dan Roth
Question-answering (QA) data often encodes essential information in many facets.
no code implementations • NAACL 2019 • Qiang Ning, Hangfeng He, Chuchu Fan, Dan Roth
For many structured learning tasks, the data annotation process is complex and costly.
no code implementations • 12 Jun 2019 • Qiang Ning, Ben Zhou, Zhili Feng, Haoruo Peng, Dan Roth
Automatic extraction of temporal information in text is an important component of natural language understanding.
no code implementations • ACL 2018 • Qiang Ning, Zhili Feng, Hao Wu, Dan Roth
Understanding temporal and causal relations between events is a fundamental natural language understanding task.
no code implementations • EMNLP 2017 • Qiang Ning, Zhili Feng, Dan Roth
Identifying temporal relations between events is an essential step towards natural language understanding.
Ranked #1 on Temporal Information Extraction on TempEval-3
no code implementations • EMNLP 2018 • Qiang Ning, Ben Zhou, Zhili Feng, Haoruo Peng, Dan Roth
Automatic extraction of temporal information is important for natural language understanding.
no code implementations • SEMEVAL 2018 • Qiang Ning, Zhongzhi Yu, Chuchu Fan, Dan Roth
As a result, only a small number of documents are typically annotated, limiting the coverage of various lexical/semantic phenomena.
1 code implementation • LREC 2018 • Daniel Khashabi, Mark Sammons, Ben Zhou, Tom Redman, Christos Christodoulopoulos, Vivek Srikumar, Nicholas Rizzolo, Lev Ratinov, Guanheng Luo, Quang Do, Chen-Tse Tsai, Subhro Roy, Stephen Mayhew, Zhili Feng, John Wieting, Xiaodong Yu, Yangqiu Song, Shashank Gupta, Shyam Upadhyay, Naveen Arivazhagan, Qiang Ning, Shaoshi Ling, Dan Roth
no code implementations • ACL 2018 • Qiang Ning, Hao Wu, Dan Roth
Existing temporal relation (TempRel) annotation schemes often have low inter-annotator agreement (IAA) even among experts, suggesting that the current annotation task needs a better definition.
no code implementations • 18 Apr 2018 • Qiang Ning, Zhongzhi Yu, Chuchu Fan, Dan Roth
As a result, only a small number of documents are typically annotated, limiting the coverage of various lexical/semantic phenomena.
no code implementations • NAACL 2018 • Qiang Ning, Hao Wu, Haoruo Peng, Dan Roth
We argue that temporal relation extraction would gain from the availability of a resource that provides prior knowledge in the form of the temporal order that events usually follow.