no code implementations • EMNLP 2021 • Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng
While these tasks partially evaluate machines' ability to understand narratives, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.
1 code implementation • 12 May 2023 • Sarik Ghazarian, Yijia Shao, Rujun Han, Aram Galstyan, Nanyun Peng
We take the first step by focusing on event commonsense, which considers events and their relations and is crucial in both dialogues and general commonsense reasoning.
2 code implementations • 16 Oct 2022 • Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, Nanyun Peng
This task requires machines to 1) understand long text inputs and 2) produce a globally consistent image sequence that illustrates the contents of the story.
1 code implementation • NAACL 2022 • Rujun Han, Hong Chen, Yufei Tian, Nanyun Peng
Stories or narratives are composed of a sequence of events.
no code implementations • EACL 2021 • Rujun Han, Luca Soldaini, Alessandro Moschitti
In this work, we present an approach to efficiently incorporate contextual information in AS2 models.
Machine Reading Comprehension • Open-Domain Question Answering +1
1 code implementation • NAACL 2021 • Mingyu Derek Ma, Jiao Sun, Mu Yang, Kung-Hsiang Huang, Nuan Wen, Shikhar Singh, Rujun Han, Nanyun Peng
We present EventPlus, a temporal event understanding pipeline that integrates various state-of-the-art event understanding components, including event trigger and type detection, event argument detection, and event duration and temporal relation extraction.
2 code implementations • EMNLP 2021 • Rujun Han, Xiang Ren, Nanyun Peng
While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications.
Ranked #1 on Question Answering on Torque
2 code implementations • 16 Dec 2020 • Yichao Zhou, Yu Yan, Rujun Han, J. Harry Caufield, Kai-Wei Chang, Yizhou Sun, Peipei Ping, Wei Wang
The medical community has a persistent need to precisely extract the temporal relations between clinical events.
1 code implementation • EMNLP 2020 • Rujun Han, Yichao Zhou, Nanyun Peng
Extracting event temporal relations is a critical task for information extraction and plays an important role in natural language understanding.
no code implementations • EMNLP 2020 • Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, Dan Roth
A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated.
Ranked #2 on Question Answering on Torque
1 code implementation • CoNLL 2019 • Rujun Han, I-Hung Hsu, Mu Yang, Aram Galstyan, Ralph Weischedel, Nanyun Peng
We propose a novel deep structured learning framework for event temporal relation extraction.
no code implementations • IJCNLP 2019 • Rujun Han, Qiang Ning, Nanyun Peng
We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction.
Event Extraction • Joint Event and Temporal Relation Extraction +4
no code implementations • 26 Apr 2019 • Rujun Han, Mengyue Liang, Bashar Alhafni, Nanyun Peng
In this work, we establish strong baselines for event temporal relation extraction on two under-explored story narrative datasets: Richer Event Description (RED) and Causal and Temporal Relation Scheme (CaTeRS).
no code implementations • EMNLP 2018 • Rujun Han, Michael Gill, Arthur Spirling, Kyunghyun Cho
Conventional word embedding models do not leverage information from document meta-data, and they do not model uncertainty.