Search Results for author: Yunmo Chen

Found 20 papers, 12 papers with code

Reading the Manual: Event Extraction as Definition Comprehension

no code implementations EMNLP (spnlp) 2020 Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, Benjamin Van Durme

We ask whether text understanding has progressed to where we may extract event information through incremental refinement of bleached statements derived from annotation manuals.

Event Extraction

Hierarchical Entity Typing via Multi-level Learning to Rank

1 code implementation ACL 2020 Tongfei Chen, Yunmo Chen, Benjamin Van Durme

We propose a novel method for hierarchical entity classification that embraces ontological structure during both training and prediction.

Entity Typing Learning-To-Rank

Joint Modeling of Arguments for Event Understanding

1 code implementation 20 Nov 2020 Yunmo Chen, Tongfei Chen, Benjamin Van Durme

We recognize the task of event argument linking in documents as similar to that of intent slot resolution in dialogue, providing a Transformer-based model that extends a recently proposed solution for resolving references to slots.

Sentence

Pattern-aware Data Augmentation for Query Rewriting in Voice Assistant Systems

no code implementations 21 Dec 2020 Yunmo Chen, Sixing Lu, Fan Yang, Xiaojiang Huang, Xing Fan, Chenlei Guo

Query rewriting (QR) systems are widely used to reduce the friction caused by errors in a spoken language understanding pipeline.

Data Augmentation Friction +1

Asking the Right Questions in Low Resource Template Extraction

no code implementations 25 May 2022 Nils Holzenberger, Yunmo Chen, Benjamin Van Durme

Information Extraction (IE) researchers are mapping tasks to Question Answering (QA) in order to leverage existing large QA resources, and thereby improve data efficiency.

Question Answering

Iterative Document-level Information Extraction via Imitation Learning

2 code implementations 12 Oct 2022 Yunmo Chen, William Gantt, Weiwei Gu, Tongfei Chen, Aaron Steven White, Benjamin Van Durme

We present a novel iterative extraction model, IterX, for extracting complex relations, or templates (i.e., N-tuples representing a mapping from named slots to spans of text) within a document.

4-ary Relation Extraction Imitation Learning

An Empirical Study on Finding Spans

no code implementations 13 Oct 2022 Weiwei Gu, Boyuan Zheng, Yunmo Chen, Tongfei Chen, Benjamin Van Durme

We present an empirical study on methods for span finding, the selection of consecutive tokens in text for some downstream tasks.

On Event Individuation for Document-Level Information Extraction

1 code implementation 19 Dec 2022 William Gantt, Reno Kriz, Yunmo Chen, Siddharth Vashishtha, Aaron Steven White

As information extraction (IE) systems have grown more adept at processing whole documents, the classic task of template filling has seen renewed interest as a benchmark for document-level IE.

Position

When Do Decompositions Help for Machine Reading?

no code implementations 20 Dec 2022 Kangda Wei, Dawn Lawrie, Benjamin Van Durme, Yunmo Chen, Orion Weller

Answering complex questions often requires multi-step reasoning in order to obtain the final answer.

Reading Comprehension Retrieval

Differentiable Tree Operations Promote Compositional Generalization

1 code implementation 1 Jun 2023 Paul Soulos, Edward Hu, Kate McCurdy, Yunmo Chen, Roland Fernandez, Paul Smolensky, Jianfeng Gao

To facilitate the learning of these symbolic sequences, we introduce a differentiable tree interpreter that compiles high-level symbolic tree operations into subsymbolic matrix operations on tensors.

Semantic Parsing Text Generation

A Unified View of Evaluation Metrics for Structured Prediction

1 code implementation 20 Oct 2023 Yunmo Chen, William Gantt, Tongfei Chen, Aaron Steven White, Benjamin Van Durme

We present a conceptual framework that unifies a variety of evaluation metrics for different structured prediction tasks (e.g., event and relation extraction, syntactic and semantic parsing).

Relation Extraction Semantic Parsing +1
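The unifying idea such frameworks build on can be illustrated with a minimal sketch (not the paper's actual framework, and the `prf1` function name and tuple format are assumptions for illustration): many structured prediction metrics reduce to matching predicted structures against gold structures and scoring the match with precision, recall, and F1.

```python
def prf1(predicted, gold):
    """Exact-match precision, recall, and F1 over sets of structures."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)  # structures found in both prediction and gold
    p = tp / len(pred) if pred else 0.0
    r = tp / len(ref) if ref else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# e.g. relation triples (head, relation, tail): one of two matches exactly
p, r, f1 = prf1(
    [("Alice", "works_at", "Acme"), ("Bob", "born_in", "Oslo")],
    [("Alice", "works_at", "Acme"), ("Bob", "lives_in", "Oslo")],
)
```

Swapping exact-match for a softer structure-matching criterion (e.g. partial span overlap) is where task-specific metrics diverge, which is the kind of variation a unified view makes explicit.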

FAITHSCORE: Evaluating Hallucinations in Large Vision-Language Models

1 code implementation 2 Nov 2023 Liqiang Jing, Ruosen Li, Yunmo Chen, Mengzhao Jia, Xinya Du

We introduce FAITHSCORE (Faithfulness to Atomic Image Facts Score), a reference-free and fine-grained evaluation metric that measures the faithfulness of the generated free-form answers from large vision-language models (LVLMs).

Descriptive Instruction Following

Narrowing the Gap between Zero- and Few-shot Machine Translation by Matching Styles

no code implementations 4 Nov 2023 Weiting Tan, Haoran Xu, Lingfeng Shen, Shuyue Stella Li, Kenton Murray, Philipp Koehn, Benjamin Van Durme, Yunmo Chen

Large language models trained primarily in a monolingual setting have demonstrated their ability to generalize to machine translation using zero- and few-shot examples with in-context learning.

In-Context Learning Machine Translation +1

Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation

1 code implementation 16 Jan 2024 Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, Young Jin Kim

However, even the top-performing 13B LLM-based translation models, like ALMA, do not match the performance of state-of-the-art conventional encoder-decoder translation models or larger-scale LLMs such as GPT-4.

Machine Translation Translation

The Language Barrier: Dissecting Safety Challenges of LLMs in Multilingual Contexts

no code implementations 23 Jan 2024 Lingfeng Shen, Weiting Tan, Sihao Chen, Yunmo Chen, Jingyu Zhang, Haoran Xu, Boyuan Zheng, Philipp Koehn, Daniel Khashabi

As the influence of large language models (LLMs) spans across global communities, their safety challenges in multilingual settings become paramount for alignment research.

MultiMUC: Multilingual Template Filling on MUC-4

1 code implementation 29 Jan 2024 William Gantt, Shabnam Behzad, Hannah Youngeun An, Yunmo Chen, Aaron Steven White, Benjamin Van Durme, Mahsa Yarmohammadi

We introduce MultiMUC, the first multilingual parallel corpus for template filling, comprising translations of the classic MUC-4 template filling benchmark into five languages: Arabic, Chinese, Farsi, Korean, and Russian.

Machine Translation Translation

Streaming Sequence Transduction through Dynamic Compression

1 code implementation 2 Feb 2024 Weiting Tan, Yunmo Chen, Tongfei Chen, Guanghui Qin, Haoran Xu, Heidi C. Zhang, Benjamin Van Durme, Philipp Koehn

We introduce STAR (Stream Transduction with Anchor Representations), a novel Transformer-based model designed for efficient sequence-to-sequence transduction over streams.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1
