Search Results for author: Dongyeop Kang

Found 64 papers, 40 papers with code

Visualizing Cross-Lingual Discourse Relations in Multilingual TED Corpora

1 code implementation CODI 2021 Zae Myung Kim, Vassilina Nikoulina, Dongyeop Kang, Didier Schwab, Laurent Besacier

This paper presents an interactive data dashboard that provides users with an overview of the preservation of discourse relations among 28 language pairs.

Relation

LearnerVoice: A Dataset of Non-Native English Learners' Spontaneous Speech

no code implementations 5 Jul 2024 Haechan Kim, Junho Myung, Seoyoung Kim, Sungpah Lee, Dongyeop Kang, Juho Kim

Our linguistic analysis reveals that transcriptions in our dataset contain L2S (L2 learner's Spontaneous speech) features, consisting of ungrammatical expressions and disfluencies (e.g., filler words, word repetitions, self-repairs, false starts), significantly more than native speech datasets.

Automatic Speech Recognition (ASR) +1

Human-AI Collaborative Taxonomy Construction: A Case Study in Profession-Specific Writing Assistants

1 code implementation 26 Jun 2024 Minhwa Lee, Zae Myung Kim, Vivek Khetan, Dongyeop Kang

Large Language Models (LLMs) have assisted humans in several writing tasks, including text revision and story generation.

Story Generation

On the Sequence Evaluation based on Stochastic Processes

no code implementations 28 May 2024 Tianhao Zhang, Zhexiao Lin, Zhecheng Sheng, Chen Jiang, Dongyeop Kang

We introduce a likelihood-based training objective for the text encoder and design a more thorough measurement (score) for long text evaluation compared to the previous approach.

Coherence Evaluation Machine Translation +1

Confidence Calibration and Rationalization for LLMs via Multi-Agent Deliberation

1 code implementation 14 Apr 2024 Ruixin Yang, Dheeraj Rajagopal, Shirley Anugrah Hayati, Bin Hu, Dongyeop Kang

Uncertainty estimation is a significant issue for current large language models (LLMs) that are generally poorly calibrated and over-confident, especially with reinforcement learning from human feedback (RLHF).

Reinforcement Learning with Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation

no code implementations 21 Feb 2024 Karin de Langis, Ryan Koo, Dongyeop Kang

Style is an integral component of text that expresses a diverse set of information, including interpersonal dynamics (e.g., formality) and the author's emotions or attitudes (e.g., disgust).

Reinforcement Learning (RL)

Shallow Synthesis of Knowledge in GPT-Generated Texts: A Case Study in Automatic Related Work Composition

no code implementations 19 Feb 2024 Anna Martin-Boyle, Aahan Tyagi, Marti A. Hearst, Dongyeop Kang

Numerous AI-assisted scholarly applications have been developed to aid different stages of the research process.

Chain-of-Instructions: Compositional Instruction Tuning on Large Language Models

no code implementations 18 Feb 2024 Shirley Anugrah Hayati, Taehee Jung, Tristan Bodding-Long, Sudipta Kar, Abhinav Sethy, Joo-Kyung Kim, Dongyeop Kang

Fine-tuning large language models (LLMs) with a collection of large and diverse instructions has improved the model's generalization to different tasks, even for unseen tasks.

Threads of Subtlety: Detecting Machine-Generated Texts Through Discourse Motifs

1 code implementation 16 Feb 2024 Zae Myung Kim, Kwang Hee Lee, Preston Zhu, Vipul Raheja, Dongyeop Kang

With the advent of large language models (LLM), the line between human-crafted and machine-generated texts has become increasingly blurred.

II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in Visual Question Answering

1 code implementation 16 Feb 2024 Jihyung Kil, Farideh Tavazoee, Dongyeop Kang, Joo-Kyung Kim

II-MMR then analyzes this path to identify different reasoning cases in current VQA benchmarks by estimating how many hops and what types (i.e., visual or beyond-visual) of reasoning are required to answer the question.

Question Answering Visual Question Answering

SelectLLM: Can LLMs Select Important Instructions to Annotate?

1 code implementation 29 Jan 2024 Ritik Sachin Parkar, Jaehyung Kim, Jong Inn Park, Dongyeop Kang

However, how to select unlabelled instructions is not well-explored, especially in the context of LLMs.

Active Learning Instruction Following

BBScore: A Brownian Bridge Based Metric for Assessing Text Coherence

no code implementations 28 Dec 2023 Zhecheng Sheng, Tianhao Zhang, Chen Jiang, Dongyeop Kang

In summary, we present a novel Brownian bridge coherence metric capable of measuring both local and global text coherence, while circumventing the need for end-to-end model training.

Coherence Evaluation
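The Brownian bridge idea behind this metric can be illustrated with a minimal sketch: under a bridge pinned at the first and last sentence latents, the latent at relative time s should center on the linear interpolation of the endpoints, with variance proportional to s(1-s). The function below is an illustrative simplification (function name, unit-scale variance, and the averaging are assumptions, not the paper's actual BBScore):

```python
import numpy as np

def brownian_bridge_nll(latents):
    """Score how well a sequence of sentence latents follows a Brownian
    bridge between its first and last points (lower = more bridge-like)."""
    T = len(latents) - 1
    z0, zT = latents[0], latents[-1]
    nll = 0.0
    for t in range(1, T):
        s = t / T
        mean = (1 - s) * z0 + s * zT   # bridge mean: interpolation of endpoints
        var = s * (1 - s)              # bridge variance (up to a scale factor)
        nll += np.sum((latents[t] - mean) ** 2) / (2 * var)
    return nll / max(T - 1, 1)
```

Latents that drift linearly between the endpoints score near zero, while latents that jump away from the interpolation path score higher, matching the intuition that coherent text moves smoothly through latent space.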

How Far Can We Extract Diverse Perspectives from Large Language Models?

1 code implementation 16 Nov 2023 Shirley Anugrah Hayati, Minhwa Lee, Dheeraj Rajagopal, Dongyeop Kang

In this study, we investigate LLMs' capacity for generating diverse perspectives and rationales on subjective topics, such as social norms and argumentative texts.

Diversity Sentence +2

Which Modality should I use -- Text, Motif, or Image?: Understanding Graphs with Large Language Models

no code implementations 16 Nov 2023 Debarati Das, Ishaan Gupta, Jaideep Srivastava, Dongyeop Kang

Our research integrates graph data with Large Language Models (LLMs), which, despite their advancements in various fields using large text corpora, face limitations in encoding entire graphs due to context size constraints.

Benchmarking Cognitive Biases in Large Language Models as Evaluators

1 code implementation 29 Sep 2023 Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, Dongyeop Kang

According to our findings, LLMs may still be unable to be utilized for automatic annotation aligned with human preferences.

Benchmarking In-Context Learning

Story Visualization by Online Text Augmentation with Context Memory

1 code implementation ICCV 2023 Daechul Ahn, Daneul Kim, Gwangmo Song, Seung Hwan Kim, Honglak Lee, Dongyeop Kang, Jonghyun Choi

Story visualization (SV) is a challenging text-to-image generation task for the difficulty of not only rendering visual details from the text descriptions but also encoding a long-term context across multiple sentences.

Sentence Story Visualization +2

Prefer to Classify: Improving Text Classifiers via Auxiliary Preference Learning

1 code implementation 8 Jun 2023 Jaehyung Kim, Jinwoo Shin, Dongyeop Kang

In this paper, we investigate task-specific preferences between pairs of input texts as a new alternative way for such auxiliary data annotation.

Multi-Task Learning

An Analysis of Reader Engagement in Literary Fiction through Eye Tracking and Linguistic Features

no code implementations 6 Jun 2023 Rose Neis, Karin de Langis, Zae Myung Kim, Dongyeop Kang

Capturing readers' engagement in fiction is a challenging but important aspect of narrative understanding.

Sentence

Complex Mathematical Symbol Definition Structures: A Dataset and Model for Coordination Resolution in Definition Extraction

1 code implementation 24 May 2023 Anna Martin-Boyle, Andrew Head, Kyle Lo, Risham Sidhu, Marti A. Hearst, Dongyeop Kang

We also introduce a new definition extraction method that masks mathematical symbols, creates a copy of each sentence for each symbol, specifies a target symbol, and predicts its corresponding definition spans using slot filling.

Definition Extraction Math +3
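The preprocessing step this entry describes (mask every symbol, emit one copy of the sentence per symbol, and mark the target whose definition a slot-filling model should predict) can be sketched in a few lines. The function name, mask token, and target markers below are illustrative assumptions, and this naive whitespace tokenization stands in for whatever tokenizer the actual system uses:

```python
def make_slot_filling_inputs(sentence, symbols, mask="SYMBOL",
                             target_mark=("[T]", "[/T]")):
    """For each symbol, emit one copy of the sentence with all symbols
    masked and the target symbol specially marked for slot filling."""
    copies = []
    for target in symbols:
        tokens = []
        for tok in sentence.split():
            if tok == target:
                tokens.append(f"{target_mark[0]}{mask}{target_mark[1]}")
            elif tok in symbols:
                tokens.append(mask)
            else:
                tokens.append(tok)
        copies.append(" ".join(tokens))
    return copies
```

For a sentence containing two symbols, this yields two masked copies, each pointing the downstream model at a different target whose definition span it should fill.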

A Survey of Diffusion Models in Natural Language Processing

no code implementations 24 May 2023 Hao Zou, Zae Myung Kim, Dongyeop Kang

In NLP, diffusion models have been used in a variety of applications, such as natural language generation, sentiment analysis, topic modeling, and machine translation.

Few-Shot Learning Machine Translation +2

Annotation Imputation to Individualize Predictions: Initial Studies on Distribution Dynamics and Model Predictions

1 code implementation 24 May 2023 London Lowmanstone, Ruyuan Wan, Risako Owan, Jaehyung Kim, Dongyeop Kang

In our analysis of the results, we found that the choice of imputation method significantly impacts soft label changes and distribution.

Imputation

Balancing Effect of Training Dataset Distribution of Multiple Styles for Multi-Style Text Transfer

no code implementations 24 May 2023 Debarati Das, David Ma, Dongyeop Kang

This paper explores the impact of training data input diversity on the quality of the generated text from the multi-style transfer model.

Attribute Diversity +2

"Is the Pope Catholic?" Applying Chain-of-Thought Reasoning to Understanding Conversational Implicatures

no code implementations 23 May 2023 Zae Myung Kim, David E. Taylor, Dongyeop Kang

Conversational implicatures are pragmatic inferences that require listeners to deduce the intended meaning conveyed by a speaker from their explicit utterances.

Implicatures

CoEdIT: Text Editing by Task-Specific Instruction Tuning

1 code implementation 17 May 2023 Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang

We present a large language model fine-tuned on a diverse collection of task-specific instructions for text editing (a total of 82K instructions).

Formality Style Transfer Grammatical Error Correction +5

Decoding the End-to-end Writing Trajectory in Scholarly Manuscripts

1 code implementation 31 Mar 2023 Ryan Koo, Anna Martin, Linghe Wang, Dongyeop Kang

We also provide ManuScript, an original dataset annotated with a simplified version of our taxonomy to show writer actions and the intentions behind them.

Text Generation

Cluster-Guided Label Generation in Extreme Multi-Label Classification

1 code implementation 17 Feb 2023 Taehee Jung, Joo-Kyung Kim, Sungjin Lee, Dongyeop Kang

For extreme multi-label classification (XMC), existing classification-based models poorly perform for tail labels and often ignore the semantic relations among labels, like treating "Wikipedia" and "Wiki" as independent and separate labels.

Classification Extreme Multi-Label Classification

Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information

no code implementations 12 Jan 2023 Ruyuan Wan, Jaehyung Kim, Dongyeop Kang

Particularly, we extract disagreement labels from the annotators' voting histories in the five subjective datasets, and then fine-tune language models to predict annotators' disagreement.

Quirk or Palmer: A Comparative Study of Modal Verb Frameworks with Annotated Datasets

no code implementations 20 Dec 2022 Risako Owan, Maria Gini, Dongyeop Kang

We observe that both frameworks have similar inter-annotator agreements, despite having different numbers of sense types (8 for Quirk and 3 for Palmer).

Natural Language Understanding Sentence

A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models

1 code implementation 19 Dec 2022 Karin de Langis, Dongyeop Kang

We develop a variety of methods to derive style saliency scores over text using the collected eye dataset.

Few-Shot Learning

Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks

1 code implementation 2 Dec 2022 Zae Myung Kim, Wanyu Du, Vipul Raheja, Dhruv Kumar, Dongyeop Kang

Leveraging datasets from other related text editing NLP tasks, combined with the specification of editable spans, leads our system to more accurately model the process of iterative text refinement, as evidenced by empirical results and human evaluations.

Grammatical Error Correction Sentence +3

RedPen: Region- and Reason-Annotated Dataset of Unnatural Speech

no code implementations 26 Oct 2022 Kyumin Park, Keon Lee, Daeyoung Kim, Dongyeop Kang

We present a novel speech dataset, RedPen, with human annotations on unnatural speech regions and their corresponding reasons.

Speech Synthesis

StyLEx: Explaining Style Using Human Lexical Annotations

1 code implementation 14 Oct 2022 Shirley Anugrah Hayati, Kyumin Park, Dheeraj Rajagopal, Lyle Ungar, Dongyeop Kang

Large pre-trained language models have achieved impressive results on various style classification tasks, but they often learn spurious domain-specific words to make predictions (Hayati et al., 2021).

Sentence

Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision

1 code implementation In2Writing (ACL) 2022 Wanyu Du, Zae Myung Kim, Vipul Raheja, Dhruv Kumar, Dongyeop Kang

Examining and evaluating the capability of large language models for making continuous revisions and collaborating with human writers is a critical step towards building effective writing assistants.

Understanding Out-of-distribution: A Perspective of Data Dynamics

no code implementations NeurIPS Workshop ICBINB 2021 Dyah Adila, Dongyeop Kang

Despite machine learning models' success in Natural Language Processing (NLP) tasks, predictions from these models frequently fail on out-of-distribution (OOD) samples.

BIG-bench Machine Learning

What Makes Better Augmentation Strategies? Augment Difficult but Not too Different

no code implementations ICLR 2022 Jaehyung Kim, Dongyeop Kang, Sungsoo Ahn, Jinwoo Shin

Remarkably, our method is more effective on the challenging low-data and class-imbalanced regimes, and the learned augmentation policy is well-transferable to the different tasks and models.

Data Augmentation Semantic Similarity +3

Zero-shot Natural Language Video Localization

1 code implementation ICCV 2021 Jinwoo Nam, Daechul Ahn, Dongyeop Kang, Seong Jong Ha, Jonghyun Choi

Understanding videos to localize moments with natural language often requires large expensive annotated video regions paired with language queries.

Image Captioning

Style is NOT a single variable: Case Studies for Cross-Stylistic Language Understanding

1 code implementation ACL 2021 Dongyeop Kang, Eduard Hovy

This paper provides the benchmark corpus (XSLUE) that combines existing datasets and collects a new one for sentence-level cross-style language understanding and evaluation.

Sentence

Document-Level Definition Detection in Scholarly Documents: Existing Models, Error Analyses, and Future Directions

1 code implementation EMNLP (sdp) 2020 Dongyeop Kang, Andrew Head, Risham Sidhu, Kyle Lo, Daniel S. Weld, Marti A. Hearst

Based on this analysis, we develop a new definition detection system, HEDDEx, that utilizes syntactic features, transformer encoders, and heuristic filters, and evaluate it on a standard sentence-level benchmark.

Sentence

Plan ahead: Self-Supervised Text Planning for Paragraph Completion Task

no code implementations EMNLP 2020 Dongyeop Kang, Eduard Hovy

To address that, we propose a self-supervised text planner SSPlanner that predicts what to say first (content prediction), then guides the pretrained language model (surface realization) using the predicted content.

Language Modelling Sentence

INSPIRED: Toward Sociable Recommendation Dialog Systems

1 code implementation EMNLP 2020 Shirley Anugrah Hayati, Dongyeop Kang, Qingxiaoyang Zhu, Weiyan Shi, Zhou Yu

To better understand how humans make recommendations in communication, we design an annotation scheme related to recommendation strategies based on social science theories and annotate these dialogs.

Movie Recommendation

Augmenting Scientific Papers with Just-in-Time, Position-Sensitive Definitions of Terms and Symbols

1 code implementation 29 Sep 2020 Andrew Head, Kyle Lo, Dongyeop Kang, Raymond Fok, Sam Skjonsberg, Daniel S. Weld, Marti A. Hearst

We introduce ScholarPhi, an augmented reading interface with four novel features: (1) tooltips that surface position-sensitive definitions from elsewhere in a paper, (2) a filter over the paper that "declutters" it to reveal how the term or symbol is used across the paper, (3) automatic equation diagrams that expose multiple definitions in parallel, and (4) an automatically generated glossary of important terms and symbols.

Position

Posterior Calibrated Training on Sentence Classification Tasks

1 code implementation ACL 2020 Taehee Jung, Dongyeop Kang, Hua Cheng, Lucas Mentch, Thomas Schaaf

Here we propose an end-to-end training procedure called posterior calibrated (PosCal) training that directly optimizes the objective while minimizing the difference between the predicted and empirical posterior probabilities. We show that PosCal not only helps reduce the calibration error but also improves task performance by penalizing drops in performance of both objectives.

Classification General Classification +2
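The calibration-penalized objective described above can be sketched as a task loss plus a term measuring how far mean predicted probabilities stray from empirical label frequencies. This is a simplified interpretation, not the paper's actual PosCal implementation (which estimates empirical posteriors more carefully); the function name, squared-error penalty, and class-frequency stand-in are all assumptions:

```python
import numpy as np

def poscal_style_loss(probs, labels, num_classes, lam=1.0):
    """Cross-entropy task loss plus a calibration penalty comparing
    mean predicted probabilities against empirical class frequencies."""
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    n = len(labels)
    # standard cross-entropy on the gold labels
    ce = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    # empirical class frequencies vs. mean predicted probabilities
    emp = np.bincount(labels, minlength=num_classes) / n
    pred = probs.mean(axis=0)
    return ce + lam * np.sum((pred - emp) ** 2)
```

With lam = 0 this reduces to ordinary cross-entropy training; increasing lam trades a little task fit for predictions whose aggregate confidence matches the observed label distribution.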

Style is NOT a single variable: Case Studies for Cross-Style Language Understanding

2 code implementations 9 Nov 2019 Dongyeop Kang, Eduard Hovy

This paper provides the benchmark corpus (xSLUE) that combines existing datasets and collects a new one for sentence-level cross-style language understanding and evaluation.

Sentence

Recommendation as a Communication Game: Self-Supervised Bot-Play for Goal-oriented Dialogue

1 code implementation IJCNLP 2019 Dongyeop Kang, Anusha Balakrishnan, Pararth Shah, Paul Crook, Y-Lan Boureau, Jason Weston

These issues can be alleviated by treating recommendation as an interactive dialogue task instead, where an expert recommender can sequentially ask about someone's preferences, react to their requests, and recommend more appropriate items.

Recommendation Systems

Earlier Isn't Always Better: Sub-aspect Analysis on Corpus and System Biases in Summarization

1 code implementation IJCNLP 2019 Taehee Jung, Dongyeop Kang, Lucas Mentch, Eduard Hovy

We find that while position exhibits substantial bias in news articles, this is not the case, for example, with academic papers and meeting minutes.

Diversity News Summarization +1

Linguistic Versus Latent Relations for Modeling Coherent Flow in Paragraphs

1 code implementation IJCNLP 2019 Dongyeop Kang, Hiroaki Hayashi, Alan W. Black, Eduard Hovy

In order to produce a coherent flow of text, we explore two forms of intersentential relations in a paragraph: one is a human-created linguistic relation that forms a structure (e.g., a discourse tree) and the other is a relation from latent representations learned from the sentences themselves.

Language Modelling Relation

Bridging Knowledge Gaps in Neural Entailment via Symbolic Models

no code implementations EMNLP 2018 Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Peter Clark

We focus on filling these knowledge gaps in the Science Entailment task, by leveraging an external structured knowledge base (KB) of science facts.

Natural Language Inference

AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples

1 code implementation ACL 2018 Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Eduard Hovy

We consider the problem of learning textual entailment models with limited supervision (5K-10K training examples), and present two complementary approaches for it.

Natural Language Inference Negation

A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications

1 code implementation NAACL 2018 Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, Roy Schwartz

In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline.

Detecting and Explaining Causes From Text For a Time Series Event

1 code implementation EMNLP 2017 Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, Eduard Hovy

Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.

Time Series Time Series Analysis
