Search Results for author: Zae Myung Kim

Found 16 papers, 8 papers with code

Visualizing Cross-Lingual Discourse Relations in Multilingual TED Corpora

1 code implementation CODI 2021 Zae Myung Kim, Vassilina Nikoulina, Dongyeop Kang, Didier Schwab, Laurent Besacier

This paper presents an interactive data dashboard that provides users with an overview of the preservation of discourse relations among 28 language pairs.

Tasks: Relation
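The pairwise preservation statistic that such a dashboard visualizes could be approximated, very roughly, as below; the function name, data layout, and relation labels are illustrative assumptions, not the authors' implementation.

```python
from itertools import permutations

def preservation_rate(relations):
    """For each ordered language pair, compute the fraction of aligned
    positions whose discourse-relation label is identical in both
    languages. Toy stand-in for the dashboard's pairwise statistics."""
    rates = {}
    for src, tgt in permutations(relations, 2):
        pairs = list(zip(relations[src], relations[tgt]))
        rates[(src, tgt)] = sum(a == b for a, b in pairs) / len(pairs)
    return rates

# Tiny made-up example: two languages, three aligned spans.
labels = {
    "en": ["Contrast", "Cause", "Condition"],
    "fr": ["Contrast", "Cause", "Expansion"],
}
rates = preservation_rate(labels)
```

In a real setting the per-language label sequences would come from discourse parses of aligned TED talk segments; here they are hard-coded purely to show the computation.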

Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations

no code implementations 2 Oct 2024 Minoh Jeong, Min Namgung, Zae Myung Kim, Dongyeop Kang, Yao-Yi Chiang, Alfred Hero

We theoretically demonstrate that our method captures three crucial properties of multimodal learning: intra-modal learning, inter-modal learning, and multimodal alignment, while also constructing a robust unified representation across all modalities.

Human-AI Collaborative Taxonomy Construction: A Case Study in Profession-Specific Writing Assistants

1 code implementation 26 Jun 2024 Minhwa Lee, Zae Myung Kim, Vivek Khetan, Dongyeop Kang

Large Language Models (LLMs) have assisted humans in several writing tasks, including text revision and story generation.

Tasks: Story Generation

Threads of Subtlety: Detecting Machine-Generated Texts Through Discourse Motifs

1 code implementation 16 Feb 2024 Zae Myung Kim, Kwang Hee Lee, Preston Zhu, Vipul Raheja, Dongyeop Kang

With the advent of large language models (LLMs), the line between human-crafted and machine-generated texts has become increasingly blurred.

Benchmarking Cognitive Biases in Large Language Models as Evaluators

1 code implementation 29 Sep 2023 Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, Dongyeop Kang

We then evaluate the quality of ranking outputs by introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLEr), a benchmark that measures six different cognitive biases in LLM evaluation outputs, such as egocentric bias, where a model prefers to rank its own outputs highly.

Tasks: Benchmarking, In-Context Learning
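As a rough illustration of the egocentric bias described above (a model preferring its own outputs), one could count how often an evaluator awards wins to itself in pairwise comparisons. The data layout and function below are hypothetical sketches, not CoBBLEr's actual code.

```python
from collections import defaultdict

def egocentric_preference_rate(judgments):
    """judgments: dicts with 'evaluator', 'candidate_a', 'candidate_b',
    and 'winner' (all model names). Returns, per evaluator, the share of
    self-involving comparisons that the evaluator awarded to itself.
    Hypothetical sketch, not the benchmark's implementation."""
    self_wins, self_total = defaultdict(int), defaultdict(int)
    for j in judgments:
        ev = j["evaluator"]
        if ev in (j["candidate_a"], j["candidate_b"]):
            self_total[ev] += 1
            if j["winner"] == ev:
                self_wins[ev] += 1
    return {ev: self_wins[ev] / n for ev, n in self_total.items()}

# Made-up judgments: evaluator "model_a" ranks its own output first
# in two of the three comparisons that involve it.
judgments = [
    {"evaluator": "model_a", "candidate_a": "model_a",
     "candidate_b": "model_b", "winner": "model_a"},
    {"evaluator": "model_a", "candidate_a": "model_b",
     "candidate_b": "model_a", "winner": "model_a"},
    {"evaluator": "model_a", "candidate_a": "model_a",
     "candidate_b": "model_b", "winner": "model_b"},
]
```

A rate well above an unbiased baseline (the evaluator's true win rate as judged by third parties) would indicate egocentric preference.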

An Analysis of Reader Engagement in Literary Fiction through Eye Tracking and Linguistic Features

no code implementations 6 Jun 2023 Rose Neis, Karin de Langis, Zae Myung Kim, Dongyeop Kang

Capturing readers' engagement in fiction is a challenging but important aspect of narrative understanding.

Tasks: Sentence

A Survey of Diffusion Models in Natural Language Processing

no code implementations 24 May 2023 Hao Zou, Zae Myung Kim, Dongyeop Kang

In NLP, diffusion models have been used in a variety of applications, such as natural language generation, sentiment analysis, topic modeling, and machine translation.

Tasks: Few-Shot Learning, Machine Translation, +3 more

"Is the Pope Catholic?" Applying Chain-of-Thought Reasoning to Understanding Conversational Implicatures

no code implementations 23 May 2023 Zae Myung Kim, David E. Taylor, Dongyeop Kang

Conversational implicatures are pragmatic inferences that require listeners to deduce the intended meaning conveyed by a speaker from their explicit utterances.

Tasks: Implicatures
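A chain-of-thought prompt for such implicature questions might be assembled as below; the template wording is an assumption for illustration and is not the paper's actual prompt.

```python
def build_cot_prompt(utterance: str, question: str) -> str:
    """Wrap a speaker utterance and a yes/no question about its implied
    meaning in a chain-of-thought style prompt. The template wording is
    illustrative, not the paper's."""
    return (
        f'Speaker: "{utterance}"\n'
        f"Question: {question}\n"
        "Let's reason step by step about what the speaker implies, "
        "then answer yes or no."
    )

prompt = build_cot_prompt(
    "Is the Pope Catholic?",
    "Is the speaker answering the previous question affirmatively?",
)
```

The resulting string would be sent to an LLM; eliciting intermediate reasoning before the final yes/no answer is the chain-of-thought step the title refers to.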

Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks

1 code implementation 2 Dec 2022 Zae Myung Kim, Wanyu Du, Vipul Raheja, Dhruv Kumar, Dongyeop Kang

Leveraging datasets from other related text-editing NLP tasks, combined with the specification of editable spans, enables our system to model the process of iterative text refinement more accurately, as evidenced by empirical results and human evaluations.

Tasks: Grammatical Error Correction, Sentence, +3 more

Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision

1 code implementation In2Writing (ACL) 2022 Wanyu Du, Zae Myung Kim, Vipul Raheja, Dhruv Kumar, Dongyeop Kang

Examining and evaluating the capability of large language models for making continuous revisions and collaborating with human writers is a critical step towards building effective writing assistants.

Do Multilingual Neural Machine Translation Models Contain Language Pair Specific Attention Heads?

no code implementations Findings (ACL) 2021 Zae Myung Kim, Laurent Besacier, Vassilina Nikoulina, Didier Schwab

Recent studies analyzing multilingual representations focus on identifying whether language-independent representations emerge, or whether a multilingual model partitions its weights among different languages.

Tasks: Decoder, Machine Translation, +2 more
