Search Results for author: Kyomin Jung

Found 71 papers, 22 papers with code

Contrastive Learning for Context-aware Neural Machine Translation Using Coreference Information

no code implementations WMT (EMNLP) 2021 Yongkeun Hwang, Hyeongu Yun, Kyomin Jung

Context-aware neural machine translation (NMT) incorporates contextual information from surrounding texts, which can improve the translation quality of document-level machine translation.

Contrastive Learning coreference-resolution +6

Modality Alignment between Deep Representations for Effective Video-and-Language Learning

no code implementations LREC 2022 Hyeongu Yun, Yongil Kim, Kyomin Jung

Our method directly optimizes CKA to make an alignment between video and text embedding representations, hence it aids the cross-modality attention module to combine information over different modalities.

Question Answering Video Captioning +1

Learning to Select Question-Relevant Relations for Visual Question Answering

no code implementations NAACL (maiworkshop) 2021 Jaewoong Lee, Heejoon Lee, Hwanhee Lee, Kyomin Jung

Existing visual question answering (VQA) systems commonly use graph neural networks (GNNs) to extract visual relationships such as semantic relations or spatial relations.

Graph Attention Question Answering +2

Are LLM-Judges Robust to Expressions of Uncertainty? Investigating the effect of Epistemic Markers on LLM-based Evaluation

no code implementations 28 Oct 2024 Dongryeol Lee, Yerin Hwang, Yongil Kim, Joonsuk Park, Kyomin Jung

In line with the principle of honesty, there has been a growing effort to train large language models (LLMs) to generate outputs containing epistemic markers.

SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models

no code implementations 25 Oct 2024 Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung

To mitigate these challenges, we propose SWITCH (Studying WIth TeaCHer for Knowledge Distillation), a novel approach that strategically incorporates the teacher model during the student's sequence generation.

Instruction Following Knowledge Distillation +1

Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding

no code implementations 17 Oct 2024 Kyungmin Min, Minbeom Kim, Kang-il Lee, Dongryeol Lee, Kyomin Jung

Lastly, we observe that although existing methods struggle to balance the reduction of object hallucinations with maintaining text quality, SGD demonstrates robustness in handling this challenge.

Hallucination Object Hallucination +1

Guaranteed Generation from Large Language Models

no code implementations 9 Oct 2024 Minbeom Kim, Thibaut Thonet, Jos Rozen, Hwaran Lee, Kyomin Jung, Marc Dymetman

These experiments show that GUARD achieves perfect constraint satisfaction while almost preserving the ideal distribution with highly improved inference efficiency.

Text Generation

A Character-Centric Creative Story Generation via Imagination

no code implementations 25 Sep 2024 Kyeongman Park, Minbeom Kim, Kyomin Jung

To address this, we introduce a novel story generation framework called CCI (Character-centric Creative story generation via Imagination).

Diversity Story Generation

Persona is a Double-edged Sword: Mitigating the Negative Impact of Role-playing Prompts in Zero-shot Reasoning Tasks

no code implementations 16 Aug 2024 Junseok Kim, Nakyeong Yang, Kyomin Jung

Then, Jekyll & Hyde collects two potential solutions from role-playing and neutral prompts and selects a better solution using the LLM evaluator.

Position

Fine-grained Gender Control in Machine Translation with Large Language Models

no code implementations 21 Jul 2024 Minwoo Lee, Hyukhun Koh, Minsung Kim, Kyomin Jung

In this paper, we tackle controlled translation in a more realistic setting of inputs with multiple entities and propose Gender-of-Entity (GoE) prompting method for LLMs.

Machine Translation Sentence +1

VLind-Bench: Measuring Language Priors in Large Vision-Language Models

1 code implementation 13 Jun 2024 Kang-il Lee, Minbeom Kim, Seunghyun Yoon, Minsung Kim, Dongryeol Lee, Hyukhun Koh, Kyomin Jung

To this end, we propose a new benchmark called VLind-Bench, which is the first benchmark specifically designed to measure the language priors, or blindness, of LVLMs.

counterfactual

Return of EM: Entity-driven Answer Set Expansion for QA Evaluation

no code implementations 24 Apr 2024 Dongryeol Lee, Minwoo Lee, Kyungmin Min, Joonsuk Park, Kyomin Jung

Recently, directly using large language models (LLMs) has been shown to be the most reliable method to evaluate QA models.

Skeleton: A New Framework for Accelerating Language Models via Task Neuron Localized Prompt Tuning

no code implementations 18 Apr 2024 Nakyeong Yang, Jiwon Moon, Junseok Kim, Yunah Jang, Kyomin Jung

Prompt tuning methods have shown comparable performance to general training methods as parameter-efficient fine-tuning (PEFT) methods in various natural language understanding tasks.

Language Modelling Natural Language Understanding +1

AdvisorQA: Towards Helpful and Harmless Advice-seeking Question Answering with Collective Intelligence

no code implementations 18 Apr 2024 Minbeom Kim, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung

Therefore, we have compiled a benchmark encompassing daily life questions, diverse corresponding responses, and majority vote ranking to train our helpfulness metric.

Question Answering

Can LLMs Recognize Toxicity? A Structured Investigation Framework and Toxicity Metric

no code implementations 10 Feb 2024 Hyukhun Koh, Dohyung Kim, Minwoo Lee, Kyomin Jung

In the pursuit of developing Large Language Models (LLMs) that adhere to societal standards, it is imperative to detect the toxicity in the generated text.

LifeTox: Unveiling Implicit Toxicity in Life Advice

no code implementations 16 Nov 2023 Minbeom Kim, Jahyun Koo, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung

As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial.

Mitigating Biases for Instruction-following Language Models via Bias Neurons Elimination

no code implementations 16 Nov 2023 Nakyeong Yang, Taegwan Kang, JungKyu Choi, Honglak Lee, Kyomin Jung

Furthermore, we propose a novel and practical bias mitigation method, CRISPR, to eliminate bias neurons of language models in instruction-following settings.

Instruction Following Language Modelling

IterCQR: Iterative Conversational Query Reformulation with Retrieval Guidance

1 code implementation 16 Nov 2023 Yunah Jang, Kang-il Lee, Hyunkyung Bae, Hwanhee Lee, Kyomin Jung

To address these challenges, we propose Iterative Conversational Query Reformulation (IterCQR), a methodology that conducts query reformulation without relying on human rewrites.

Conversational Search Information Retrieval +1

Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources

no code implementations 9 Nov 2023 Yerin Hwang, Yongil Kim, Hyunkyung Bae, Jeesoo Bang, Hwanhee Lee, Kyomin Jung

To address the data scarcity issue in Conversational question answering (ConvQA), a dialog inpainting method, which utilizes documents to generate ConvQA datasets, has been proposed.

Conversational Question Answering Re-Ranking

Weakly Supervised Semantic Parsing with Execution-based Spurious Program Filtering

1 code implementation 2 Nov 2023 Kang-il Lee, Segwang Kim, Kyomin Jung

The problem of spurious programs is a longstanding challenge when training a semantic parser from weak supervision.

Semantic Parsing Visual Reasoning

DPP-TTS: Diversifying prosodic features of speech via determinantal point processes

no code implementations 23 Oct 2023 Seongho Joo, Hyukhun Koh, Kyomin Jung

Second, the diversity among samples is neglected since the sampling procedure often focuses on a single speech sample rather than multiple ones.

Diversity Point Processes +1
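The DPP-TTS entry above uses determinantal point processes to encourage diversity among sampled speech prosodies. As background only, a common way to extract a diverse subset under a DPP is greedy log-determinant maximization over a similarity kernel; the sketch below uses made-up "prosody feature" vectors and is not the paper's actual sampling procedure.

```python
import numpy as np

def greedy_dpp_select(kernel, k):
    """Greedy MAP selection for a DPP: at each step add the item that
    most increases the log-determinant of the selected submatrix,
    which penalizes choosing highly similar pairs."""
    selected, remaining = [], list(range(kernel.shape[0]))
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in remaining:
            idx = np.ix_(selected + [i], selected + [i])
            sign, logdet = np.linalg.slogdet(kernel[idx])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical prosody features: items 0 and 1 are near-duplicates.
feats = np.array([[0.0, 0.0], [0.05, 0.0], [5.0, 5.0]])
dists = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
L = np.exp(-dists ** 2)            # RBF similarity kernel
picked = greedy_dpp_select(L, 2)   # avoids picking both near-duplicates
```

Because the determinant shrinks when two selected rows are similar, the greedy pick skips the duplicate and keeps one item from each distinct cluster.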

MVMR: A New Framework for Evaluating Faithfulness of Video Moment Retrieval against Multiple Distractors

1 code implementation 15 Aug 2023 Nakyeong Yang, Minsung Kim, Seunghyun Yoon, Joongbo Shin, Kyomin Jung

However, the existing VMR framework evaluates video moment retrieval performance, assuming that a video is given, which may not reveal whether the models exhibit overconfidence in the falsely given video.

Contrastive Learning Misinformation +4

Asking Clarification Questions to Handle Ambiguity in Open-Domain QA

1 code implementation 23 May 2023 Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung

We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question.

Open-Domain Question Answering

Target-Agnostic Gender-Aware Contrastive Learning for Mitigating Bias in Multilingual Machine Translation

no code implementations 23 May 2023 Minwoo Lee, Hyukhun Koh, Kang-il Lee, Dongdong Zhang, Minsung Kim, Kyomin Jung

In this paper, we specifically target the gender bias issue of multilingual machine translation models for unambiguous cases where there is a single correct translation, and propose a bias mitigation method based on a novel approach.

Contrastive Learning Machine Translation +1

Multi-View Zero-Shot Open Intent Induction from Dialogues: Multi Domain Batch and Proxy Gradient Transfer

no code implementations 23 Mar 2023 Hyukhun Koh, Haesung Pyun, Nakyeong Yang, Kyomin Jung

In Task Oriented Dialogue (TOD) system, detecting and inducing new intents are two main challenges to apply the system in the real world.

PR-MCS: Perturbation Robust Metric for MultiLingual Image Captioning

no code implementations 15 Mar 2023 Yongil Kim, Yerin Hwang, Hyeongu Yun, Seunghyun Yoon, Trung Bui, Kyomin Jung

Vulnerability to lexical perturbation is a critical weakness of automatic evaluation metrics for image captioning.

Image Captioning

Varianceflow: High-Quality and Controllable Text-to-Speech using Variance Information via Normalizing Flow

no code implementations 27 Feb 2023 Yoonhyung Lee, Jinhyeok Yang, Kyomin Jung

Also, the objective function of NF makes the model use the variance information and the text in a disentangled manner, resulting in more precise variance control.

Text to Speech

Critic-Guided Decoding for Controlled Text Generation

no code implementations 21 Dec 2022 Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung

In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding.

Language Modelling reinforcement-learning +3

Multimodal Speech Emotion Recognition using Cross Attention with Aligned Audio and Text

no code implementations 26 Jul 2022 Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung

Then, the attention weights of each modality are applied directly to the other modality in a crossed way, so that the CAN gathers the audio and text information from the same time steps based on each modality.

Speech Emotion Recognition

Task-specific Compression for Multi-task Language Models using Attribution-based Pruning

no code implementations 9 May 2022 Nakyeong Yang, Yunah Jang, Hwanhee Lee, Seohyeong Jung, Kyomin Jung

However, these language models utilize an unnecessarily large number of model parameters, even when used only for a specific task.

Natural Language Understanding

Self-Adapter at SemEval-2021 Task 10: Entropy-based Pseudo-Labeler for Source-free Domain Adaptation

no code implementations SEMEVAL 2021 Sangwon Yoon, Yanghoon Kim, Kyomin Jung

Source-free domain adaptation is an emerging line of work in deep learning research since it is closely related to the real-world environment.

Sentence Source-Free Domain Adaptation

UMIC: An Unreferenced Metric for Image Captioning via Contrastive Learning

1 code implementation ACL 2021 Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Kyomin Jung

Also, we observe critical problems with the previous benchmark dataset (i.e., human annotations) for image captioning metrics, and introduce a new collection of human annotations on the generated captions.

Contrastive Learning Diversity +2

Neural Sequence-to-grid Module for Learning Symbolic Rules

1 code implementation 13 Jan 2021 Segwang Kim, Hyoungwook Nam, Joonyoung Kim, Kyomin Jung

Logical reasoning tasks over symbols, such as learning arithmetic operations and computer program evaluations, have become challenges to deep learning.

Logical Reasoning

Bidirectional Variational Inference for Non-Autoregressive Text-to-Speech

1 code implementation ICLR 2021 Yoonhyung Lee, Joongbo Shin, Kyomin Jung

Although early text-to-speech (TTS) models such as Tacotron 2 have succeeded in generating human-like speech, their autoregressive (AR) architectures have the limitation that they require considerable time to generate a mel-spectrogram consisting of hundreds of steps.

Text to Speech Variational Inference

Collaborative Training of GANs in Continuous and Discrete Spaces for Text Generation

no code implementations 16 Oct 2020 Yanghoon Kim, Seungpil Won, Seunghyun Yoon, Kyomin Jung

Applying generative adversarial networks (GANs) to text-related tasks is challenging due to the discrete nature of language.

Diversity Reinforcement Learning (RL) +1

Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning

1 code implementation ACL 2020 Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung

Even though BERT achieves successful performance improvements in various supervised learning tasks, applying BERT to unsupervised tasks still has the limitation that it requires repetitive inference for computing contextual language representations.

Language Modelling Semantic Similarity +1

DSTC8-AVSD: Multimodal Semantic Transformer Network with Retrieval Style Word Generator

no code implementations 1 Apr 2020 Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung

Audio Visual Scene-aware Dialog (AVSD) is the task of generating a response for a question with a given scene, video, audio, and the history of previous turns in the dialog.

Decoder Retrieval +1

BaitWatcher: A lightweight web interface for the detection of incongruent news headlines

no code implementations 23 Mar 2020 Kunwoo Park, Taegyun Kim, Seunghyun Yoon, Meeyoung Cha, Kyomin Jung

In digital environments where substantial amounts of information are shared online, news headlines play essential roles in the selection and diffusion of news articles.

AI Agent Misinformation

Attentive Modality Hopping Mechanism for Speech Emotion Recognition

1 code implementation 29 Nov 2019 Seunghyun Yoon, Subhadeep Dey, Hwanhee Lee, Kyomin Jung

In this work, we explore the impact of visual modality in addition to speech and text for improving the accuracy of the emotion detection system.

Emotion Classification Multimodal Emotion Recognition +1

Propagate-Selector: Detecting Supporting Sentences for Question Answering via Graph Neural Networks

1 code implementation LREC 2020 Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung

In this study, we propose a novel graph neural network called propagate-selector (PS), which propagates information over sentences to understand information that cannot be inferred when considering sentences in isolation.

Answer Selection Graph Neural Network +1

MILAB at SemEval-2019 Task 3: Multi-View Turn-by-Turn Model for Context-Aware Sentiment Analysis

no code implementations SEMEVAL 2019 Yoonhyung Lee, Yanghoon Kim, Kyomin Jung

This paper describes our system for SemEval-2019 Task 3: EmoContext, which aims to predict the emotion of the third utterance considering two preceding utterances in a dialogue.

Sentiment Analysis

A Compare-Aggregate Model with Latent Clustering for Answer Selection

no code implementations 30 May 2019 Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung

In this paper, we propose a novel method for a sentence-level answer-selection task that is a fundamental problem in natural language processing.

Answer Selection Clustering +3

Effective Sentence Scoring Method using Bidirectional Language Model for Speech Recognition

no code implementations 16 May 2019 Joongbo Shin, Yoonhyung Lee, Kyomin Jung

Recent studies have tried to use bidirectional LMs (biLMs) instead of conventional unidirectional LMs (uniLMs) for rescoring the $N$-best list decoded from the acoustic model.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3
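The entry above concerns rescoring an ASR N-best list with a bidirectional LM. For illustration only, a generic N-best rescoring step combines each hypothesis's acoustic-model score with an LM sentence score; the toy `toy_lm_score` function and the example hypotheses below are invented for the sketch, not from the paper.

```python
def rescore_nbest(hypotheses, lm_score, weight=0.5):
    """Rerank an N-best list: combine the acoustic-model score with a
    weighted LM sentence score and return the best hypothesis."""
    return max(hypotheses,
               key=lambda h: h["am_score"] + weight * lm_score(h["text"]))

def toy_lm_score(text):
    """Hypothetical toy LM: sum of per-word log-probabilities."""
    logp = {"the": -1.0, "cat": -2.0, "sat": -2.0, "mat": -5.0, "hat": -6.0}
    return sum(logp.get(w, -10.0) for w in text.split())

nbest = [
    {"text": "the cat sat", "am_score": -3.0},  # fluent, slightly worse AM score
    {"text": "the cat hat", "am_score": -2.8},  # better AM score, unlikely words
]
best = rescore_nbest(nbest, toy_lm_score, weight=0.5)  # LM overturns the AM ranking
```

A bidirectional LM replaces `toy_lm_score` with a sentence score computed from both left and right context, which is the substitution the paper investigates.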

Detecting Incongruity Between News Headline and Body Text via a Deep Hierarchical Encoder

2 code implementations 17 Nov 2018 Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung

Some news headlines mislead readers with overrated or false information, and identifying them in advance will better assist readers in choosing proper news stories to consume.

Data Augmentation Fake News Detection +2

Multimodal Speech Emotion Recognition Using Audio and Text

4 code implementations 10 Oct 2018 Seunghyun Yoon, Seokhyun Byun, Kyomin Jung

Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers.

Emotion Classification Multimodal Emotion Recognition +2

Improving Neural Question Generation using Answer Separation

1 code implementation 7 Sep 2018 Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung

Previous NQG models suffer from a problem that a significant proportion of the generated questions include words in the question target, resulting in the generation of unintended questions.

Question Generation Question-Generation

Number Sequence Prediction Problems for Evaluating Computational Powers of Neural Networks

no code implementations 19 May 2018 Hyoungwook Nam, Segwang Kim, Kyomin Jung

We define the complexity and difficulty of a number sequence prediction task with the structure of the smallest automaton that can generate the sequence.

Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering

3 code implementations NAACL 2018 Seunghyun Yoon, Joongbo Shin, Kyomin Jung

In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers, that adapts a hierarchical recurrent neural network and a latent topic clustering module.

Answer Selection Clustering +1

Efficient Transfer Learning Schemes for Personalized Language Modeling using Recurrent Neural Network

no code implementations 13 Jan 2017 Seunghyun Yoon, Hyeongu Yun, Yuna Kim, Gyu-tae Park, Kyomin Jung

In this paper, we propose efficient transfer learning methods for training a personalized language model using a recurrent neural network with a long short-term memory architecture.

Language Modelling Transfer Learning

Partition-Merge: Distributed Inference and Modularity Optimization

no code implementations 24 Sep 2013 Vincent Blondel, Kyomin Jung, Pushmeet Kohli, Devavrat Shah

This paper presents a novel meta algorithm, Partition-Merge (PM), which takes existing centralized algorithms for graph computation and makes them distributed and faster.

Community Detection

Efficient Energy Minimization for Enforcing Statistics

no code implementations 30 Jul 2013 Yongsub Lim, Kyomin Jung, Pushmeet Kohli

However, for many computer vision problems, the MAP solution under the model is not the ground truth solution.

Image Segmentation Segmentation +1

Multi-dimensional Parametric Mincuts for Constrained MAP Inference

no code implementations 30 Jul 2013 Yongsub Lim, Kyomin Jung, Pushmeet Kohli

We show how this constrained discrete optimization problem can be formulated as a multi-dimensional parametric mincut problem via its Lagrangian dual, and prove that our algorithm isolates all constraint instances for which the problem can be solved exactly.

Image Segmentation Semantic Segmentation

Local Rules for Global MAP: When Do They Work ?

no code implementations NeurIPS 2009 Kyomin Jung, Pushmeet Kohli, Devavrat Shah

We consider the question of computing Maximum A Posteriori (MAP) assignment in an arbitrary pair-wise Markov Random Field (MRF).

Local Algorithms for Approximate Inference in Minor-Excluded Graphs

no code implementations NeurIPS 2007 Kyomin Jung, Devavrat Shah

We present a new local approximation algorithm for computing MAP and log-partition function for arbitrary exponential family distribution represented by a finite-valued pair-wise Markov random field (MRF), say G. Our algorithm is based on decomposing G into appropriately chosen small components; computing estimates locally in each of these components and then producing a good global solution.
