1 code implementation • EMNLP (Eval4NLP) 2020 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
In this paper, we propose an evaluation metric for image captioning systems using both image and text information.
no code implementations • LREC 2022 • Hyeongu Yun, Yongil Kim, Kyomin Jung
Our method directly optimizes CKA to align video and text embedding representations, which helps the cross-modal attention module combine information across modalities.
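The entry above describes directly optimizing CKA to align video and text embeddings. The paper's own implementation is not reproduced here; the following is a minimal PyTorch sketch of what a linear-CKA alignment loss could look like, where the function names and the 1 - CKA loss form are illustrative assumptions.

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Linear CKA between paired embeddings: x is (n, d_video), y is (n, d_text),
    and row i of each comes from the same sample."""
    # Center each feature dimension across the batch.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # Linear CKA: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    cross = (x.t() @ y).norm(p="fro") ** 2
    return cross / ((x.t() @ x).norm(p="fro") * (y.t() @ y).norm(p="fro") + eps)

def cka_alignment_loss(video_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    # Maximizing CKA aligns the two representation spaces, so the auxiliary
    # loss is 1 - CKA (assumed to be added to the task loss with some weight).
    return 1.0 - linear_cka(video_emb, text_emb)
```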
no code implementations • dialdoc (ACL) 2022 • Yunah Jang, Dongryeol Lee, Hyung Joo Park, Taegwan Kang, Hwanhee Lee, Hyunkyung Bae, Kyomin Jung
In this paper, we mainly discuss our submission to the MultiDoc2Dial task, which aims to model goal-oriented dialogues grounded in multiple documents.
no code implementations • NAACL (maiworkshop) 2021 • Jaewoong Lee, Heejoon Lee, Hwanhee Lee, Kyomin Jung
Existing visual question answering (VQA) systems commonly use graph neural networks (GNNs) to extract visual relationships such as semantic or spatial relations.
no code implementations • WMT (EMNLP) 2021 • Yongkeun Hwang, Hyeongu Yun, Kyomin Jung
Context-aware neural machine translation (NMT) incorporates contextual information from surrounding texts, which can improve the translation quality of document-level machine translation.
no code implementations • 23 May 2023 • Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung
We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question.
no code implementations • 23 May 2023 • Minwoo Lee, Hyukhun Koh, Kang-il Lee, Dongdong Zhang, Minsung Kim, Kyomin Jung
Our method is target-language-agnostic and applicable to already trained multilingual machine translation models through post-fine-tuning.
no code implementations • 23 Mar 2023 • Hyukhun Koh, Haesung Pyun, Nakyeong Yang, Kyomin Jung
In task-oriented dialogue (TOD) systems, detecting and inducing new intents are two main challenges in applying the system to the real world.
no code implementations • 15 Mar 2023 • Yongil Kim, Yerin Hwang, Hyeongu Yun, Seunghyun Yoon, Trung Bui, Kyomin Jung
Vulnerability to lexical perturbation is a critical weakness of automatic evaluation metrics for image captioning.
no code implementations • 27 Feb 2023 • Yoonhyung Lee, Jinhyeok Yang, Kyomin Jung
Also, the objective function of the NF makes the model use the variance information and the text in a disentangled manner, resulting in more precise variance control.
no code implementations • 21 Dec 2022 • Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding.
no code implementations • 26 Jul 2022 • Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung
Then, the attention weights of each modality are applied directly to the other modality in a crossed manner, so that the CAN gathers audio and text information from the same time steps based on each modality.
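As an illustration of the crossed attention described above, here is a minimal PyTorch sketch, not the authors' CAN implementation: each modality scores its own time steps, and the resulting weights pool the other modality. The class name, the linear scoring layers, and the assumption of T pre-aligned steps are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossedAttentionPooling(nn.Module):
    """Sketch: attention weights computed from one modality are applied to the
    other modality, assuming both sequences share T aligned time steps."""

    def __init__(self, audio_dim: int, text_dim: int):
        super().__init__()
        self.audio_score = nn.Linear(audio_dim, 1)  # one score per audio frame
        self.text_score = nn.Linear(text_dim, 1)    # one score per text token

    def forward(self, audio: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # audio: (B, T, audio_dim), text: (B, T, text_dim)
        a_weights = F.softmax(self.audio_score(audio), dim=1)  # (B, T, 1)
        t_weights = F.softmax(self.text_score(text), dim=1)    # (B, T, 1)
        # Crossed application: audio-derived weights pool the text sequence,
        # and text-derived weights pool the audio sequence.
        text_pooled = (a_weights * text).sum(dim=1)    # (B, text_dim)
        audio_pooled = (t_weights * audio).sum(dim=1)  # (B, audio_dim)
        return torch.cat([audio_pooled, text_pooled], dim=-1)
```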
no code implementations • 9 May 2022 • Nakyeong Yang, Yunah Jang, Hwanhee Lee, Seohyeong Jung, Kyomin Jung
However, these language models utilize an unnecessarily large number of model parameters, even when used only for a specific task.
1 code implementation • Findings (NAACL) 2022 • Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries.
no code implementations • 18 Apr 2022 • Hwanhee Lee, Cheoneum Park, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Juae Kim, Kyomin Jung
In this paper, we propose RFEC, an efficient factual error correction system based on an entity-retrieval post-editing process.
1 code implementation • 30 Sep 2021 • Minwoo Lee, Seungpil Won, Juae Kim, Hwanhee Lee, Cheoneum Park, Kyomin Jung
Specifically, we employ a two-stage augmentation pipeline to generate new claims and evidence from existing samples.
no code implementations • 29 Sep 2021 • Seongho Joo, Kyomin Jung
With the rapid advancement in deep generative models, recent neural text-to-speech models have succeeded in synthesizing human-like speech, even in an end-to-end manner.
no code implementations • 13 Sep 2021 • Yongkeun Hwang, Hyungu Yun, Kyomin Jung
We experimented with our method on common context-aware NMT models and two document-level translation tasks.
1 code implementation • Findings (EMNLP) 2021 • Hwanhee Lee, Thomas Scialom, Seunghyun Yoon, Franck Dernoncourt, Kyomin Jung
A Visual-QA system is necessary for QACE-Img.
no code implementations • SEMEVAL 2021 • Sangwon Yoon, Yanghoon Kim, Kyomin Jung
Source-free domain adaptation is an emerging line of work in deep learning research since it is closely related to the real-world environment.
1 code implementation • ACL 2021 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Kyomin Jung
Also, we observe critical problems in the previous benchmark dataset (i.e., human annotations) for image captioning metrics, and introduce a new collection of human annotations on the generated captions.
1 code implementation • 13 Jan 2021 • Segwang Kim, Hyoungwook Nam, Joonyoung Kim, Kyomin Jung
Logical reasoning tasks over symbols, such as learning arithmetic operations and evaluating computer programs, remain challenging for deep learning.
1 code implementation • ICLR 2021 • Yoonhyung Lee, Joongbo Shin, Kyomin Jung
Although early text-to-speech (TTS) models such as Tacotron 2 have succeeded in generating human-like speech, their autoregressive (AR) architectures are limited in that they require a long time to generate a mel-spectrogram consisting of hundreds of steps.
no code implementations • 16 Oct 2020 • Yanghoon Kim, Seungpil Won, Seunghyun Yoon, Kyomin Jung
Applying generative adversarial networks (GANs) to text-related tasks is challenging due to the discrete nature of language.
1 code implementation • NAACL 2021 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Joongbo Shin, Kyomin Jung
To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets.
1 code implementation • ACL 2020 • Joongbo Shin, Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung
Even though BERT achieves successful performance improvements in various supervised learning tasks, applying BERT to unsupervised tasks is still limited by the need for repetitive inference to compute contextual language representations.
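The limitation noted above, repetitive inference, appears whenever a masked LM is used without fine-tuning to score or represent a sentence: one masked forward pass is needed per token. The sketch below uses generic Hugging Face calls to illustrate that cost; it is the motivating baseline behavior, not the model proposed in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Score a sentence by masking each token in turn: O(length) forward passes."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, len(ids) - 1):             # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id      # mask one position
            logits = model(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total
```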
no code implementations • 1 Apr 2020 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
Audio Visual Scene-aware Dialog (AVSD) is the task of generating a response for a question with a given scene, video, audio, and the history of previous turns in the dialog.
no code implementations • 1 Apr 2020 • Heeyoung Kwak, Minwoo Lee, Seunghyun Yoon, Jooyoung Chang, Sangmin Park, Kyomin Jung
In this study, we develop a novel graph-based framework for ADR signal detection using healthcare claims data.
no code implementations • 23 Mar 2020 • Kunwoo Park, Taegyun Kim, Seunghyun Yoon, Meeyoung Cha, Kyomin Jung
In digital environments where substantial amounts of information are shared online, news headlines play essential roles in the selection and diffusion of news articles.
1 code implementation • 29 Nov 2019 • Seunghyun Yoon, Subhadeep Dey, Hwanhee Lee, Kyomin Jung
In this work, we explore the impact of visual modality in addition to speech and text for improving the accuracy of the emotion detection system.
1 code implementation • LREC 2020 • Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
In this study, we propose a novel graph neural network called propagate-selector (PS), which propagates information over sentences to understand information that cannot be inferred when considering sentences in isolation.
no code implementations • WS 2019 • Jiin Nam, Seunghyun Yoon, Kyomin Jung
While deep learning techniques have shown promising results in many natural language processing (NLP) tasks, they have not been widely applied to the clinical domain.
no code implementations • SEMEVAL 2019 • Yoonhyung Lee, Yanghoon Kim, Kyomin Jung
This paper describes our system for SemEval-2019 Task 3: EmoContext, which aims to predict the emotion of the third utterance considering two preceding utterances in a dialogue.
no code implementations • 30 May 2019 • Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
In this paper, we propose a novel method for a sentence-level answer-selection task that is a fundamental problem in natural language processing.
Ranked #7 on Question Answering on TrecQA
no code implementations • 16 May 2019 • Joongbo Shin, Yoonhyung Lee, Kyomin Jung
Recent studies have tried to use bidirectional LMs (biLMs) instead of conventional unidirectional LMs (uniLMs) for rescoring the $N$-best list decoded from the acoustic model (a minimal rescoring sketch follows this entry).
Automatic Speech Recognition (ASR), +2
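A minimal sketch of N-best rescoring with an interpolated LM score, in the spirit of the entry above; the function signature and the interpolation weight alpha are assumptions for illustration, and lm_score_fn stands in for any (bi)LM log-probability scorer.

```python
def rescore_nbest(nbest, lm_score_fn, alpha=0.3):
    """Re-rank (hypothesis, acoustic_score) pairs by interpolating an LM score."""
    rescored = []
    for text, am_score in nbest:
        combined = (1.0 - alpha) * am_score + alpha * lm_score_fn(text)
        rescored.append((combined, text))
    # Highest combined score first; the top entry becomes the new 1-best.
    rescored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in rescored]
```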
2 code implementations • 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019 • Seunghyun Yoon, Seokhyun Byun, Subhadeep Dey, Kyomin Jung
As opposed to using knowledge from both modalities separately, we propose a framework to exploit acoustic information in tandem with lexical data.
2 code implementations • 17 Nov 2018 • Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung
Some news headlines mislead readers with overrated or false information, and identifying them in advance will better assist readers in choosing proper news stories to consume.
4 code implementations • 10 Oct 2018 • Seunghyun Yoon, Seokhyun Byun, Kyomin Jung
Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers.
no code implementations • 7 Sep 2018 • Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung
Previous NQG models suffer from the problem that a significant proportion of the generated questions include words from the question target, resulting in unintended questions.
4 code implementations • WS 2018 • Younghun Lee, Seunghyun Yoon, Kyomin Jung
However, this dataset has not yet been studied to its full potential.
no code implementations • 19 May 2018 • Hyoungwook Nam, Segwang Kim, Kyomin Jung
We define the complexity and difficulty of a number sequence prediction task with the structure of the smallest automaton that can generate the sequence.
no code implementations • SEMEVAL 2018 • Yanghoon Kim, Hwanhee Lee, Kyomin Jung
In this paper, we propose an attention-based classifier that predicts multiple emotions of a given sentence.
3 code implementations • NAACL 2018 • Seunghyun Yoon, Joongbo Shin, Kyomin Jung
In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers, which adapts a hierarchical recurrent neural network and a latent topic clustering module (a sketch of the latent topic clustering step follows this entry).
Ranked #1 on Answer Selection on Ubuntu Dialogue (v1, Ranking)
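The latent topic clustering step mentioned in this entry can be pictured, roughly, as a sentence vector attending over a small set of learnable topic memory vectors and being augmented with the resulting topic summary. The sketch below is an illustrative reading in PyTorch, not the released implementation; the names and the choice of 8 topics are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentTopicClustering(nn.Module):
    """Sketch: soft-assign a sentence vector to K learnable topic vectors and
    append the weighted topic summary to the representation used for ranking."""

    def __init__(self, hidden_dim: int, num_topics: int = 8):
        super().__init__()
        self.topics = nn.Parameter(torch.randn(num_topics, hidden_dim))

    def forward(self, sent_vec: torch.Tensor) -> torch.Tensor:
        # sent_vec: (B, hidden_dim), e.g. the final state of the hierarchical RNN.
        weights = F.softmax(sent_vec @ self.topics.t(), dim=-1)  # (B, K)
        topic_summary = weights @ self.topics                    # (B, hidden_dim)
        return torch.cat([sent_vec, topic_summary], dim=-1)      # enriched vector
```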
no code implementations • 29 Sep 2017 • Seunghyun Yoon, Pablo Estrada, Kyomin Jung
We test our model on Chinese and Sino-Korean vocabularies.
no code implementations • 13 Jan 2017 • Seunghyun Yoon, Hyeongu Yun, Yuna Kim, Gyu-tae Park, Kyomin Jung
In this paper, we propose an efficient transfer learning method for training a personalized language model using a recurrent neural network with a long short-term memory architecture.
no code implementations • 24 Sep 2013 • Vincent Blondel, Kyomin Jung, Pushmeet Kohli, Devavrat Shah
This paper presents a novel meta algorithm, Partition-Merge (PM), which takes existing centralized algorithms for graph computation and makes them distributed and faster.
no code implementations • 30 Jul 2013 • Yongsub Lim, Kyomin Jung, Pushmeet Kohli
However, for many computer vision problems, the MAP solution under the model is not the ground truth solution.
no code implementations • 30 Jul 2013 • Yongsub Lim, Kyomin Jung, Pushmeet Kohli
We show how this constrained discrete optimization problem can be formulated as a multi-dimensional parametric mincut problem via its Lagrangian dual, and prove that our algorithm isolates all constraint instances for which the problem can be solved exactly.
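A schematic of the Lagrangian-dual construction described above, with notation (E, x, k, lambda) chosen here for illustration rather than copied from the paper: the label-count constraint is moved into the objective, and for each fixed lambda the inner minimization only shifts unary costs, so sweeping lambda yields the parametric mincut formulation.

```latex
% Schematic only: E is the energy, k the target label count, \lambda the multiplier.
\begin{aligned}
&\text{Constrained problem:} &&\min_{x \in \{0,1\}^n} E(x)
  \quad \text{s.t.} \quad \sum_{i=1}^{n} x_i = k,\\
&\text{Lagrangian dual:} &&\max_{\lambda \in \mathbb{R}} g(\lambda),
  \qquad g(\lambda) = \min_{x \in \{0,1\}^n}
  \Big[ E(x) + \lambda\Big(\sum_{i=1}^{n} x_i - k\Big) \Big].
\end{aligned}
```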
no code implementations • NeurIPS 2009 • Kyomin Jung, Pushmeet Kohli, Devavrat Shah
We consider the question of computing Maximum A Posteriori (MAP) assignment in an arbitrary pair-wise Markov Random Field (MRF).
no code implementations • NeurIPS 2007 • Kyomin Jung, Devavrat Shah
We present a new local approximation algorithm for computing the MAP assignment and the log-partition function of an arbitrary exponential family distribution represented by a finite-valued pair-wise Markov random field (MRF), say G. Our algorithm decomposes G into appropriately chosen small components, computes estimates locally in each of these components, and then produces a good global solution.
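The decomposition described above can be written schematically as follows; the component notation (S_j) and potentials (psi) are chosen here for illustration and are not taken from the paper.

```latex
% Schematic only: decompose G into small components S_1, ..., S_m, solve each
% component exactly, and concatenate the local assignments into a global estimate.
\hat{x} = \big(\hat{x}_{S_1}, \dots, \hat{x}_{S_m}\big),
\qquad
\hat{x}_{S_j} = \arg\max_{x_{S_j}}
  \prod_{u \in S_j} \psi_u(x_u)
  \prod_{(u,v) \in E(S_j)} \psi_{uv}(x_u, x_v).
```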