1 code implementation • CODI 2021 • Zae Myung Kim, Vassilina Nikoulina, Dongyeop Kang, Didier Schwab, Laurent Besacier
This paper presents an interactive data dashboard that provides users with an overview of the preservation of discourse relations among 28 language pairs.
no code implementations • WMT (EMNLP) 2020 • Yujin Baek, Zae Myung Kim, Jihyung Moon, Hyunjoong Kim, Eunjeong Park
This paper describes the system submitted by Papago team for the quality estimation task at WMT 2020.
no code implementations • 2 Oct 2024 • Minoh Jeong, Min Namgung, Zae Myung Kim, Dongyeop Kang, Yao-Yi Chiang, Alfred Hero
We theoretically demonstrate that our method captures three crucial properties of multimodal learning: intra-modal learning, inter-modal learning, and multimodal alignment, while also constructing a robust unified representation across all modalities.
1 code implementation • 26 Jun 2024 • Minhwa Lee, Zae Myung Kim, Vivek Khetan, Dongyeop Kang
Large Language Models (LLMs) have assisted humans in several writing tasks, including text revision and story generation.
1 code implementation • 16 Feb 2024 • Zae Myung Kim, Kwang Hee Lee, Preston Zhu, Vipul Raheja, Dongyeop Kang
With the advent of large language models (LLMs), the line between human-crafted and machine-generated texts has become increasingly blurred.
no code implementations • 26 Jan 2024 • Debarati Das, Karin de Langis, Anna Martin-Boyle, Jaehyung Kim, Minhwa Lee, Zae Myung Kim, Shirley Anugrah Hayati, Risako Owan, Bin Hu, Ritik Parkar, Ryan Koo, Jonginn Park, Aahan Tyagi, Libby Ferland, Sanjali Roy, Vincent Liu, Dongyeop Kang
This work delves into the expanding role of large language models (LLMs) in generating artificial data.
1 code implementation • 29 Sep 2023 • Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, Dongyeop Kang
We then evaluate the quality of ranking outputs by introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLEr), a benchmark measuring six different cognitive biases in LLM evaluation outputs, such as the egocentric bias, where a model prefers to rank its own outputs highly in evaluation.
no code implementations • 6 Jun 2023 • Rose Neis, Karin de Langis, Zae Myung Kim, Dongyeop Kang
Capturing readers' engagement in fiction is a challenging but important aspect of narrative understanding.
no code implementations • 24 May 2023 • Hao Zou, Zae Myung Kim, Dongyeop Kang
In NLP, diffusion models have been used in a variety of applications, such as natural language generation, sentiment analysis, topic modeling, and machine translation.
no code implementations • 23 May 2023 • Zae Myung Kim, David E. Taylor, Dongyeop Kang
Conversational implicatures are pragmatic inferences that require listeners to deduce the intended meaning conveyed by a speaker from their explicit utterances.
1 code implementation • 2 Dec 2022 • Zae Myung Kim, Wanyu Du, Vipul Raheja, Dhruv Kumar, Dongyeop Kang
Leveraging datasets from other related text editing NLP tasks, combined with the specification of editable spans, leads our system to more accurately model the process of iterative text refinement, as evidenced by empirical results and human evaluations.
1 code implementation • In2Writing (ACL) 2022 • Wanyu Du, Zae Myung Kim, Vipul Raheja, Dhruv Kumar, Dongyeop Kang
Examining and evaluating the capability of large language models for making continuous revisions and collaborating with human writers is a critical step towards building effective writing assistants.
1 code implementation • ACL 2022 • Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Writing is, by nature, a strategic, adaptive, and, more importantly, iterative process.
no code implementations • Findings (ACL) 2021 • Zae Myung Kim, Laurent Besacier, Vassilina Nikoulina, Didier Schwab
Recent studies on the analysis of multilingual representations focus on identifying whether language-independent representations emerge, or whether a multilingual model partitions its weights among different languages.
1 code implementation • EMNLP (NLP-COVID19) 2020 • Alexandre Bérard, Zae Myung Kim, Vassilina Nikoulina, Eunjeong L. Park, Matthias Gallé
We release a multilingual neural machine translation model, which can be used to translate text in the biomedical domain.