no code implementations • WMT (EMNLP) 2021 • Yongkeun Hwang, Hyeongu Yun, Kyomin Jung
Context-aware neural machine translation (NMT) incorporates contextual information from surrounding texts, which can improve the translation quality of document-level machine translation.
no code implementations • LREC 2022 • Hyeongu Yun, Yongil Kim, Kyomin Jung
Our method directly optimizes CKA to align video and text embedding representations, thereby helping the cross-modality attention module combine information across modalities.
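The excerpt describes directly optimizing CKA between video and text embeddings; as a concrete illustration, here is a minimal PyTorch sketch of a linear-CKA alignment loss over paired embeddings. The linear-kernel variant and all names below are our assumptions, not necessarily the paper's exact formulation.

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear Centered Kernel Alignment between paired embedding matrices.

    x: (n, d1) video embeddings; y: (n, d2) text embeddings; rows are paired.
    Returns a scalar in [0, 1]; higher means better-aligned representations.
    """
    # Center each feature dimension over the batch.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = (y.t() @ x).norm(p="fro") ** 2
    return cross / ((x.t() @ x).norm(p="fro") * (y.t() @ y).norm(p="fro") + 1e-8)

def cka_alignment_loss(video_emb, text_emb):
    # Maximizing CKA <=> minimizing (1 - CKA); add as a regularizer to the task loss.
    return 1.0 - linear_cka(video_emb, text_emb)
```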
no code implementations • NAACL (maiworkshop) 2021 • Jaewoong Lee, Heejoon Lee, Hwanhee Lee, Kyomin Jung
Existing visual question answering (VQA) systems commonly use graph neural networks (GNNs) to extract visual relationships such as semantic or spatial relations.
1 code implementation • EMNLP (Eval4NLP) 2020 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
In this paper, we propose an evaluation metric for image captioning systems using both image and text information.
no code implementations • dialdoc (ACL) 2022 • Yunah Jang, Dongryeol Lee, Hyung Joo Park, Taegwan Kang, Hwanhee Lee, Hyunkyung Bae, Kyomin Jung
In this paper, we mainly discuss our submission to the MultiDoc2Dial task, which aims to model goal-oriented dialogues grounded in multiple documents.
no code implementations • 10 Nov 2024 • Hyukhun Koh, Minha Jhang, Dohyung Kim, Sangmook Lee, Kyomin Jung
Recently, discrete diffusion language models have demonstrated promising results in NLP.
no code implementations • 28 Oct 2024 • Dongryeol Lee, Yerin Hwang, Yongil Kim, Joonsuk Park, Kyomin Jung
In line with the principle of honesty, there has been a growing effort to train large language models (LLMs) to generate outputs containing epistemic markers.
no code implementations • 25 Oct 2024 • Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung
To mitigate these challenges, we propose SWITCH (Studying WIth TeaCHer for Knowledge Distillation), a novel approach that strategically incorporates the teacher model during the student's sequence generation.
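The excerpt does not specify how the teacher intervenes during generation, so the sketch below is only one plausible reading: sample from the teacher's next-token distribution whenever it disagrees strongly with the student's. The total-variation threshold `tau` and the Hugging Face-style causal-LM interface are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_with_teacher(student, teacher, input_ids, max_new_tokens=64, tau=0.5):
    """Token-level generation where the teacher steps in on high disagreement.

    Assumes batch size 1 and models exposing `model(ids).logits` (HF-style).
    The switching rule here is hypothetical; SWITCH's actual rule may differ.
    """
    ids = input_ids
    for _ in range(max_new_tokens):
        p_student = F.softmax(student(ids).logits[:, -1, :], dim=-1)
        p_teacher = F.softmax(teacher(ids).logits[:, -1, :], dim=-1)
        # Total-variation distance between the two next-token distributions.
        tv = 0.5 * (p_student - p_teacher).abs().sum(dim=-1)
        # Sample from the teacher when disagreement is high, else the student.
        dist = p_teacher if tv.item() > tau else p_student
        next_id = torch.multinomial(dist, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```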
no code implementations • 17 Oct 2024 • Kyungmin Min, Minbeom Kim, Kang-il Lee, Dongryeol Lee, Kyomin Jung
Lastly, we observe that although existing methods struggle to balance the reduction of object hallucinations with maintaining text quality, SGD demonstrates robustness in handling this challenge.
no code implementations • 9 Oct 2024 • Minbeom Kim, Thibaut Thonet, Jos Rozen, Hwaran Lee, Kyomin Jung, Marc Dymetman
These experiments show that GUARD achieves perfect constraint satisfaction while almost preserving the ideal distribution with highly improved inference efficiency.
no code implementations • 25 Sep 2024 • Kyeongman Park, Minbeom Kim, Kyomin Jung
To address this, we introduce a novel story generation framework called CCI (Character-centric Creative story generation via Imagination).
no code implementations • 16 Aug 2024 • Junseok Kim, Nakyeong Yang, Kyomin Jung
Then, Jekyll & Hyde collects two potential solutions from role-playing and neutral prompts and selects a better solution using the LLM evaluator.
no code implementations • 21 Jul 2024 • Minwoo Lee, Hyukhun Koh, Minsung Kim, Kyomin Jung
In this paper, we tackle controlled translation in the more realistic setting of inputs with multiple entities, and propose the Gender-of-Entity (GoE) prompting method for LLMs.
1 code implementation • 13 Jun 2024 • Kang-il Lee, Minbeom Kim, Seunghyun Yoon, Minsung Kim, Dongryeol Lee, Hyukhun Koh, Kyomin Jung
To this end, we propose a new benchmark called VLind-Bench, which is the first benchmark specifically designed to measure the language priors, or blindness, of LVLMs.
no code implementations • 24 Apr 2024 • Dongryeol Lee, Minwoo Lee, Kyungmin Min, Joonsuk Park, Kyomin Jung
Recently, directly using large language models (LLMs) has been shown to be the most reliable method to evaluate QA models.
no code implementations • 18 Apr 2024 • Nakyeong Yang, Jiwon Moon, Junseok Kim, Yunah Jang, Kyomin Jung
Prompt tuning methods have shown comparable performance to general training methods as parameter-efficient fine-tuning (PEFT) methods in various natural language understanding tasks.
no code implementations • 18 Apr 2024 • Minbeom Kim, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung
Therefore, we construct a benchmark encompassing daily-life questions, diverse corresponding responses, and majority-vote rankings to train our helpfulness metric.
no code implementations • 9 Mar 2024 • Yerin Hwang, Yongil Kim, Yunah Jang, Jeesoo Bang, Hyunkyung Bae, Kyomin Jung
Through quantitative and qualitative experiments, we demonstrate MP2D's efficacy in generating dialogue with natural topic shifts.
no code implementations • 10 Feb 2024 • Hyukhun Koh, Dohyung Kim, Minwoo Lee, Kyomin Jung
In the pursuit of developing Large Language Models (LLMs) that adhere to societal standards, it is imperative to detect the toxicity in the generated text.
no code implementations • 26 Nov 2023 • Kyeongman Park, Nakyeong Yang, Kyomin Jung
A human author can write any length of story without losing coherence.
no code implementations • 16 Nov 2023 • Minbeom Kim, Jahyun Koo, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung
As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial.
no code implementations • 16 Nov 2023 • Nakyeong Yang, Taegwan Kang, JungKyu Choi, Honglak Lee, Kyomin Jung
Furthermore, we propose a novel and practical bias mitigation method, CRISPR, to eliminate bias neurons of language models in instruction-following settings.
1 code implementation • 16 Nov 2023 • Yunah Jang, Kang-il Lee, Hyunkyung Bae, Hwanhee Lee, Kyomin Jung
To address these challenges, we propose Iterative Conversational Query Reformulation (IterCQR), a methodology that conducts query reformulation without relying on human rewrites.
no code implementations • 9 Nov 2023 • Yerin Hwang, Yongil Kim, Hyunkyung Bae, Jeesoo Bang, Hwanhee Lee, Kyomin Jung
To address the data scarcity issue in Conversational question answering (ConvQA), a dialog inpainting method, which utilizes documents to generate ConvQA datasets, has been proposed.
1 code implementation • 2 Nov 2023 • Kang-il Lee, Segwang Kim, Kyomin Jung
The problem of spurious programs is a longstanding challenge when training a semantic parser from weak supervision.
no code implementations • 23 Oct 2023 • Seongho Joo, Hyukhun Koh, Kyomin Jung
Second, the diversity among samples is neglected since the sampling procedure often focuses on a single speech sample rather than multiple ones.
1 code implementation • 15 Aug 2023 • Nakyeong Yang, Minsung Kim, Seunghyun Yoon, Joongbo Shin, Kyomin Jung
However, the existing VMR framework evaluates video moment retrieval performance under the assumption that the relevant video is given, and thus cannot reveal whether models exhibit overconfidence when given the wrong video.
1 code implementation • 23 May 2023 • Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung
We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question.
no code implementations • 23 May 2023 • Minwoo Lee, Hyukhun Koh, Kang-il Lee, Dongdong Zhang, Minsung Kim, Kyomin Jung
In this paper, we specifically target the gender bias issue of multilingual machine translation models for unambiguous cases where there is a single correct translation, and propose a bias mitigation method based on a novel approach.
no code implementations • 23 Mar 2023 • Hyukhun Koh, Haesung Pyun, Nakyeong Yang, Kyomin Jung
In a Task-Oriented Dialogue (TOD) system, detecting and inducing new intents are two main challenges for applying the system in the real world.
no code implementations • 15 Mar 2023 • Yongil Kim, Yerin Hwang, Hyeongu Yun, Seunghyun Yoon, Trung Bui, Kyomin Jung
Vulnerability to lexical perturbation is a critical weakness of automatic evaluation metrics for image captioning.
no code implementations • 27 Feb 2023 • Yoonhyung Lee, Jinhyeok Yang, Kyomin Jung
Also, the objective function of NF makes the model use the variance information and the text in a disentangled manner, resulting in more precise variance control.
no code implementations • 21 Dec 2022 • Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding.
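As a rough illustration of marrying a learned critic with weighted decoding (not necessarily CriticControl's exact formulation), the sketch below reweights the language model's top-k next-token probabilities by a critic score; the critic interface and the exponent `beta` are hypothetical.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def critic_weighted_step(lm, critic, input_ids, beta=2.0, top_k=50):
    """One decoding step: p'(w) ~ p_LM(w) * critic(w)^beta over the top-k set.

    `critic` is assumed to map a candidate continuation (token ids) to a score
    in (0, 1] per batch element, e.g., a value head trained with RL.
    """
    probs = F.softmax(lm(input_ids).logits[:, -1, :], dim=-1)
    top_probs, top_ids = probs.topk(top_k, dim=-1)
    # Score each top-k candidate continuation with the critic.
    scores = torch.stack(
        [critic(torch.cat([input_ids, top_ids[:, i:i + 1]], dim=-1))
         for i in range(top_k)], dim=-1)
    reweighted = top_probs * scores.clamp_min(1e-8) ** beta
    reweighted = reweighted / reweighted.sum(dim=-1, keepdim=True)
    choice = torch.multinomial(reweighted, num_samples=1)
    return top_ids.gather(-1, choice)  # next token id, shape (batch, 1)
```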
no code implementations • 26 Jul 2022 • Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung
Then, the attention weights of each modality are applied directly to the other modality in a crossed way, so that the CAN gathers audio and text information from the same time steps, guided by each modality.
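A minimal sketch of that crossed application, assuming the audio and text sequences are already aligned to the same time steps and that scaled dot-product attention is used (details the excerpt does not specify):

```python
import torch.nn.functional as F

def crossed_attention(audio, text):
    """Apply each modality's self-attention weights to the other modality.

    audio, text: (batch, T, dim) tensors aligned to the same T time steps.
    """
    d = audio.size(-1)
    # Self-attention weights computed within each modality.
    w_audio = F.softmax(audio @ audio.transpose(1, 2) / d ** 0.5, dim=-1)
    w_text = F.softmax(text @ text.transpose(1, 2) / d ** 0.5, dim=-1)
    # Crossed application: audio weights aggregate text features, and vice versa.
    return w_text @ audio, w_audio @ text
```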
no code implementations • 9 May 2022 • Nakyeong Yang, Yunah Jang, Hwanhee Lee, Seohyeong Jung, Kyomin Jung
However, these language models utilize an unnecessarily large number of model parameters, even when used only for a specific task.
1 code implementation • Findings (NAACL) 2022 • Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries.
no code implementations • 18 Apr 2022 • Hwanhee Lee, Cheoneum Park, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Juae Kim, Kyomin Jung
In this paper, we propose RFEC, an efficient factual error correction system based on an entity-retrieval post-editing process.
1 code implementation • 30 Sep 2021 • Minwoo Lee, Seungpil Won, Juae Kim, Hwanhee Lee, Cheoneum Park, Kyomin Jung
Specifically, we employ a two-stage augmentation pipeline to generate new claims and evidence from existing samples.
no code implementations • 13 Sep 2021 • Yongkeun Hwang, Hyeongu Yun, Kyomin Jung
We experimented with our method on common context-aware NMT models and two document-level translation tasks.
1 code implementation • Findings (EMNLP) 2021 • Hwanhee Lee, Thomas Scialom, Seunghyun Yoon, Franck Dernoncourt, Kyomin Jung
A Visual-QA system is necessary for QACE-Img.
no code implementations • SEMEVAL 2021 • Sangwon Yoon, Yanghoon Kim, Kyomin Jung
Source-free domain adaptation is an emerging line of work in deep learning research since it is closely related to the real-world environment.
1 code implementation • ACL 2021 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Kyomin Jung
Also, we observe critical problems in the previous benchmark dataset (i.e., human annotations) for image captioning metrics, and introduce a new collection of human annotations on the generated captions.
1 code implementation • 13 Jan 2021 • Segwang Kim, Hyoungwook Nam, Joonyoung Kim, Kyomin Jung
Logical reasoning tasks over symbols, such as learning arithmetic operations and computer program evaluations, have become challenges to deep learning.
1 code implementation • ICLR 2021 • Yoonhyung Lee, Joongbo Shin, Kyomin Jung
Although early text-to-speech (TTS) models such as Tacotron 2 have succeeded in generating human-like speech, their autoregressive (AR) architectures are limited in that generating a mel-spectrogram consisting of hundreds of steps takes considerable time.
no code implementations • 16 Oct 2020 • Yanghoon Kim, Seungpil Won, Seunghyun Yoon, Kyomin Jung
Applying generative adversarial networks (GANs) to text-related tasks is challenging due to the discrete nature of language.
1 code implementation • NAACL 2021 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Joongbo Shin, Kyomin Jung
To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets.
1 code implementation • ACL 2020 • Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung
Even though BERT achieves successful performance improvements in various supervised learning tasks, applying BERT to unsupervised tasks is still limited by the repetitive inference required to compute contextual language representations.
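To see the limitation concretely: extracting a contextual representation of every token with a masked LM costs one forward pass per position. A small illustration using the Hugging Face transformers API (the checkpoint name is just an example):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

ids = tokenizer("the quick brown fox jumps", return_tensors="pt")["input_ids"]

with torch.no_grad():
    per_token_logits = []
    for pos in range(1, ids.size(1) - 1):      # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[0, pos] = tokenizer.mask_token_id
        out = model(input_ids=masked)          # one full forward pass per token
        per_token_logits.append(out.logits[0, pos])
# n tokens -> n forward passes; single-pass models avoid exactly this loop.
```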
no code implementations • 1 Apr 2020 • Heeyoung Kwak, Minwoo Lee, Seunghyun Yoon, Jooyoung Chang, Sangmin Park, Kyomin Jung
In this study, we develop a novel graph-based framework for ADR signal detection using healthcare claims data.
no code implementations • 1 Apr 2020 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
Audio Visual Scene-aware Dialog (AVSD) is the task of generating a response for a question with a given scene, video, audio, and the history of previous turns in the dialog.
no code implementations • 23 Mar 2020 • Kunwoo Park, Taegyun Kim, Seunghyun Yoon, Meeyoung Cha, Kyomin Jung
In digital environments where substantial amounts of information are shared online, news headlines play essential roles in the selection and diffusion of news articles.
1 code implementation • 29 Nov 2019 • Seunghyun Yoon, Subhadeep Dey, Hwanhee Lee, Kyomin Jung
In this work, we explore the impact of visual modality in addition to speech and text for improving the accuracy of the emotion detection system.
1 code implementation • LREC 2020 • Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
In this study, we propose a novel graph neural network called propagate-selector (PS), which propagates information over sentences to understand information that cannot be inferred when considering sentences in isolation.
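A generic message-passing sketch of one propagation round over sentence nodes; the mean aggregation and ReLU update below are simplifications of our own, not the exact PS update rule.

```python
import torch
import torch.nn.functional as F

def propagate(sent_emb: torch.Tensor, adj: torch.Tensor, w: torch.nn.Linear):
    """One round of information propagation over sentence nodes.

    sent_emb: (num_sentences, dim) node embeddings; adj: (N, N) 0/1 adjacency
    (e.g., a question node connected to every sentence node).
    `w` is a Linear(2 * dim, dim) combining a node with its neighborhood.
    """
    # Row-normalized adjacency = mean pooling over neighbors.
    deg = adj.sum(dim=-1, keepdim=True).clamp_min(1.0)
    neighbor_mean = (adj @ sent_emb) / deg
    # Update each node from itself plus its aggregated neighborhood.
    return F.relu(w(torch.cat([sent_emb, neighbor_mean], dim=-1)))
```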
no code implementations • WS 2019 • Jiin Nam, Seunghyun Yoon, Kyomin Jung
While deep learning techniques have shown promising results in many natural language processing (NLP) tasks, they have not been widely applied to the clinical domain.
no code implementations • SEMEVAL 2019 • Yoonhyung Lee, Yanghoon Kim, Kyomin Jung
This paper describes our system for SemEval-2019 Task 3: EmoContext, which aims to predict the emotion of the third utterance considering two preceding utterances in a dialogue.
no code implementations • 30 May 2019 • Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
In this paper, we propose a novel method for a sentence-level answer-selection task that is a fundamental problem in natural language processing.
Ranked #8 on Question Answering on TrecQA
no code implementations • 16 May 2019 • Joongbo Shin, Yoonhyung Lee, Kyomin Jung
Recent studies have tried to use bidirectional LMs (biLMs) instead of conventional unidirectional LMs (uniLMs) for rescoring the N-best list decoded from the acoustic model.
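The rescoring step itself is simple; a minimal sketch, assuming a hypothetical `lm_score_fn` that returns a (length-normalized) log probability for a hypothesis, with the interpolation weight tuned on a dev set:

```python
def rescore_nbest(nbest, lm_score_fn, lam=0.5):
    """Rerank an N-best list from the acoustic model using LM scores.

    nbest: list of (hypothesis_text, acoustic_log_prob) pairs.
    """
    rescored = [
        (text, (1.0 - lam) * am_logp + lam * lm_score_fn(text))
        for text, am_logp in nbest
    ]
    # Return the hypothesis with the highest interpolated score.
    return max(rescored, key=lambda pair: pair[1])[0]
```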
2 code implementations • 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019 • Seunghyun Yoon, Seokhyun Byun, Subhadeep Dey, Kyomin Jung
As opposed to using knowledge from both the modalities separately, we propose a framework to exploit acoustic information in tandem with lexical data.
2 code implementations • 17 Nov 2018 • Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung
Some news headlines mislead readers with overrated or false information, and identifying them in advance will better assist readers in choosing proper news stories to consume.
4 code implementations • 10 Oct 2018 • Seunghyun Yoon, Seokhyun Byun, Kyomin Jung
Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers.
1 code implementation • 7 Sep 2018 • Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung
Previous NQG models suffer from the problem that a significant proportion of the generated questions include words from the question target, resulting in unintended questions.
4 code implementations • WS 2018 • Younghun Lee, Seunghyun Yoon, Kyomin Jung
However, this dataset has not been comprehensively studied to its full potential.
no code implementations • 19 May 2018 • Hyoungwook Nam, Segwang Kim, Kyomin Jung
We define the complexity and difficulty of a number sequence prediction task with the structure of the smallest automaton that can generate the sequence.
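A tiny worked example of this definition: the alternating sequence 0, 1, 0, 1, ... is generated by a two-state machine, whereas sequences that need counters (e.g., 1, 2, 3, ...) require richer machines and are therefore harder targets under this measure. The automaton encoding below is our own sketch.

```python
def run_automaton(transitions, outputs, start, steps):
    """Generate a number sequence from a finite-state machine with outputs."""
    state, seq = start, []
    for _ in range(steps):
        seq.append(outputs[state])   # emit the current state's output
        state = transitions[state]   # follow the (input-free) transition
    return seq

# A 2-state machine suffices for the alternating sequence:
print(run_automaton({0: 1, 1: 0}, {0: 0, 1: 1}, start=0, steps=8))
# -> [0, 1, 0, 1, 0, 1, 0, 1]
```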
no code implementations • SEMEVAL 2018 • Yanghoon Kim, Hwanhee Lee, Kyomin Jung
In this paper, we propose an attention-based classifier that predicts multiple emotions of a given sentence.
3 code implementations • NAACL 2018 • Seunghyun Yoon, Joongbo Shin, Kyomin Jung
In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers, that adapts a hierarchical recurrent neural network and a latent topic clustering module.
Ranked #1 on Answer Selection on Ubuntu Dialogue (v1, Ranking)
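One common reading of a latent topic clustering module is a soft assignment of the encoded text to K learned topic memory vectors, with the resulting topic summary appended to the representation; the sketch below follows that reading, with the dimensions and dot-product similarity as assumptions.

```python
import torch
import torch.nn.functional as F

class LatentTopicClustering(torch.nn.Module):
    """Soft-assign an encoding to learned topic vectors and append the summary."""

    def __init__(self, dim: int, num_topics: int):
        super().__init__()
        self.topics = torch.nn.Parameter(torch.randn(num_topics, dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, dim) text encoding, e.g., from a hierarchical RNN.
        attn = F.softmax(h @ self.topics.t(), dim=-1)  # (batch, K) soft assignment
        topic_summary = attn @ self.topics             # (batch, dim)
        return torch.cat([h, topic_summary], dim=-1)   # enriched encoding
```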
no code implementations • 29 Sep 2017 • Seunghyun Yoon, Pablo Estrada, Kyomin Jung
We test our model on the Chinese and Sino-Korean vocabularies.
no code implementations • 13 Jan 2017 • Seunghyun Yoon, Hyeongu Yun, Yuna Kim, Gyu-tae Park, Kyomin Jung
In this paper, we propose an efficient transfer learning method for training a personalized language model using a recurrent neural network with a long short-term memory architecture.
no code implementations • 24 Sep 2013 • Vincent Blondel, Kyomin Jung, Pushmeet Kohli, Devavrat Shah
This paper presents a novel meta algorithm, Partition-Merge (PM), which takes existing centralized algorithms for graph computation and makes them distributed and faster.
no code implementations • 30 Jul 2013 • Yongsub Lim, Kyomin Jung, Pushmeet Kohli
However, for many computer vision problems, the MAP solution under the model is not the ground truth solution.
no code implementations • 30 Jul 2013 • Yongsub Lim, Kyomin Jung, Pushmeet Kohli
We show how this constrained discrete optimization problem can be formulated as a multi-dimensional parametric mincut problem via its Lagrangian dual, and prove that our algorithm isolates all constraint instances for which the problem can be solved exactly.
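For a single counting constraint, the dualization step reads as follows (a sketch only; the paper's multi-dimensional case is analogous):

```latex
\begin{align*}
\text{primal:}\quad & \min_{x \in \{0,1\}^n} E(x)
    \quad \text{s.t.} \quad \sum_{i} x_i = k, \\
\text{dual:}\quad & \max_{\lambda}\; g(\lambda), \qquad
    g(\lambda) = \min_{x \in \{0,1\}^n} E(x) + \lambda\Big(\sum_{i} x_i - k\Big).
\end{align*}
% For submodular E, each inner minimization is a graph mincut whose unary
% terms are linear in \lambda, so sweeping \lambda is a parametric mincut
% that can be solved for all \lambda simultaneously.
```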
no code implementations • NeurIPS 2009 • Kyomin Jung, Pushmeet Kohli, Devavrat Shah
We consider the question of computing Maximum A Posteriori (MAP) assignment in an arbitrary pair-wise Markov Random Field (MRF).
no code implementations • NeurIPS 2007 • Kyomin Jung, Devavrat Shah
We present a new local approximation algorithm for computing the MAP assignment and the log-partition function for an arbitrary exponential-family distribution represented by a finite-valued pair-wise Markov random field (MRF), say G. Our algorithm decomposes G into appropriately chosen small components, computes estimates locally in each component, and then produces a good global solution.
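A brute-force sketch of the decompose-solve-stitch pattern described above; how the components are chosen, which is the crux of the actual algorithm, is elided here.

```python
import itertools

def local_map(edges, unary, pairwise, components):
    """Approximate MAP by exact inference inside each small component.

    unary: {node: (score_0, score_1)}; pairwise: {(u, v): 2x2 score table};
    components: iterable of node groups. Edges crossing components are dropped,
    which is what makes this an approximation.
    """
    assignment = {}
    for comp in components:
        comp = list(comp)
        inner = [(u, v) for (u, v) in edges if u in comp and v in comp]
        best, best_score = None, float("-inf")
        # Brute force over binary labelings; feasible only for small components.
        for labels in itertools.product([0, 1], repeat=len(comp)):
            lab = dict(zip(comp, labels))
            score = sum(unary[v][lab[v]] for v in comp)
            score += sum(pairwise[(u, v)][lab[u]][lab[v]] for (u, v) in inner)
            if score > best_score:
                best, best_score = lab, score
        assignment.update(best)
    return assignment
```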