Search Results for author: Rujun Han

Found 20 papers, 11 papers with code

ESTER: A Machine Reading Comprehension Dataset for Reasoning about Event Semantic Relations

no code implementations · EMNLP 2021 · Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng

While these tasks partially evaluate machines' narrative-understanding ability, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.

Machine Reading Comprehension · Natural Language Queries +1

In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents

no code implementations · 11 Mar 2025 · Zhen Tan, Jun Yan, I-Hung Hsu, Rujun Han, Zifeng Wang, Long T. Le, Yiwen Song, Yanfei Chen, Hamid Palangi, George Lee, Anand Iyer, Tianlong Chen, Huan Liu, Chen-Yu Lee, Tomas Pfister

Large Language Models (LLMs) have made significant progress in open-ended dialogue, yet their inability to retain and retrieve relevant information from long-term interactions limits their effectiveness in applications requiring sustained personalization.

Management · Reinforcement Learning (RL) +1

Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling

no code implementations · 15 Oct 2024 · Wenda Xu, Rujun Han, Zifeng Wang, Long T. Le, Dhruv Madeka, Lei Li, William Yang Wang, Rishabh Agarwal, Chen-Yu Lee, Tomas Pfister

To address these limitations, we introduce Speculative Knowledge Distillation (SKD), a novel approach that leverages cooperation between student and teacher models to generate high-quality training data on-the-fly while aligning with the student's inference-time distribution.
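The interleaved sampling idea described above can be illustrated with a toy sketch: the student proposes each next token, the teacher verifies it against a top-k acceptance set, and rejected tokens are replaced by teacher samples, yielding on-the-fly training pairs drawn from the student's own generation distribution. This is a minimal illustration, not the authors' implementation — the fixed toy distributions, `TOP_K` threshold, and helper names here are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 8          # toy vocabulary size (hypothetical)
TOP_K = 3      # size of the teacher's acceptance set (illustrative choice)

# Fixed toy next-token distributions; a real LM would condition on the prefix.
teacher_p = np.array([0.30, 0.25, 0.20, 0.10, 0.06, 0.04, 0.03, 0.02])
student_p = np.array([0.05, 0.05, 0.10, 0.30, 0.25, 0.10, 0.10, 0.05])

def skd_generate(num_tokens):
    """Interleaved sampling: the student proposes each token; any token
    outside the teacher's top-k is replaced by a teacher sample, and the
    teacher's distribution is recorded as the distillation target."""
    topk_idx = np.argsort(teacher_p)[-TOP_K:]               # acceptance set
    topk_p = teacher_p[topk_idx] / teacher_p[topk_idx].sum()
    accept = set(topk_idx.tolist())
    seq, pairs = [], []
    for _ in range(num_tokens):
        tok = int(rng.choice(V, p=student_p))               # student proposal
        if tok not in accept:                               # teacher verification
            tok = int(rng.choice(topk_idx, p=topk_p))       # teacher replacement
        seq.append(tok)
        pairs.append((tuple(seq[:-1]), teacher_p))          # (prefix, target)
    return seq, pairs

seq, pairs = skd_generate(12)
```

Because proposals come from the student itself, the resulting training prefixes match the distribution the student will actually see at inference time, while the teacher's verification keeps token quality high.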

Instruction Following · Knowledge Distillation +2

Dancing in Chains: Reconciling Instruction Following and Faithfulness in Language Models

1 code implementation · 31 Jul 2024 · Zhengxuan Wu, Yuhao Zhang, Peng Qi, Yumo Xu, Rujun Han, Yian Zhang, Jifan Chen, Bonan Min, Zhiheng Huang

Surprisingly, we find that less is more, as training ReSet with high-quality, yet substantially smaller data (three-fold less) yields superior results.

Instruction Following · Multi-Task Learning

RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering

1 code implementation · 19 Jul 2024 · Rujun Han, Yuhao Zhang, Peng Qi, Yumo Xu, Jenyuan Wang, Lan Liu, William Yang Wang, Bonan Min, Vittorio Castelli

Question answering based on retrieval augmented generation (RAG-QA) is an important research topic in NLP and has a wide range of real-world applications.

Domain Generalization · Form +5

ACCENT: An Automatic Event Commonsense Evaluation Metric for Open-Domain Dialogue Systems

1 code implementation · 12 May 2023 · Sarik Ghazarian, Yijia Shao, Rujun Han, Aram Galstyan, Nanyun Peng

We take the first step by focusing on event commonsense, which considers events and their relations and is crucial in both dialogues and general commonsense reasoning.

Character-Centric Story Visualization via Visual Planning and Token Alignment

2 code implementations · 16 Oct 2022 · Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, Nanyun Peng

This task requires machines to 1) understand long text inputs and 2) produce a globally consistent image sequence that illustrates the contents of the story.

Story Visualization · Text-to-Image Generation

ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning

1 code implementation · 16 Apr 2021 · Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng

While these tasks partially evaluate machines' narrative-understanding ability, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.

Machine Reading Comprehension · Natural Language Queries +2

EventPlus: A Temporal Event Understanding Pipeline

1 code implementation · NAACL 2021 · Mingyu Derek Ma, Jiao Sun, Mu Yang, Kung-Hsiang Huang, Nuan Wen, Shikhar Singh, Rujun Han, Nanyun Peng

We present EventPlus, a temporal event understanding pipeline that integrates various state-of-the-art event understanding components including event trigger and type detection, event argument detection, event duration and temporal relation extraction.

Common Sense Reasoning · Event Extraction +1

ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning

2 code implementations · EMNLP 2021 · Rujun Han, Xiang Ren, Nanyun Peng

While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications.

Continual Pretraining · Language Modelling +4

Domain Knowledge Empowered Structured Neural Net for End-to-End Event Temporal Relation Extraction

1 code implementation · EMNLP 2020 · Rujun Han, Yichao Zhou, Nanyun Peng

Extracting event temporal relations is a critical task for information extraction and plays an important role in natural language understanding.

Natural Language Understanding · Relation +1

TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions

no code implementations · EMNLP 2020 · Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, Dan Roth

A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated.

Machine Reading Comprehension · Question Answering

Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding

no code implementations · 26 Apr 2019 · Rujun Han, Mengyue Liang, Bashar Alhafni, Nanyun Peng

In this work, we establish strong baselines for event temporal relation extraction on two under-explored story narrative datasets: Richer Event Description (RED) and Causal and Temporal Relation Scheme (CaTeRS).

Relation · Temporal Relation Extraction +1
