no code implementations • NAACL (maiworkshop) 2021 • Jaewoong Lee, Heejoon Lee, Hwanhee Lee, Kyomin Jung
Existing visual question answering (VQA) systems commonly use graph neural networks (GNNs) to extract visual relationships such as semantic or spatial relations.
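As a rough illustration of the relational reasoning such systems perform, the sketch below runs one round of message passing over detected object nodes linked by spatial-relation edges. The toy graph, features, and mean-aggregation update are all hypothetical, not the paper's model.

```python
# Minimal sketch: one message-passing step over a toy scene graph.
# Nodes are detected objects; edges carry spatial relations.

# Toy object features (e.g., pooled region features, reduced to 3 dims).
nodes = {
    "dog":   [0.9, 0.1, 0.0],
    "ball":  [0.1, 0.8, 0.1],
    "grass": [0.0, 0.2, 0.7],
}

# Directed spatial-relation edges: (source, relation, target).
edges = [("dog", "next_to", "ball"), ("dog", "on", "grass")]

def message_pass(nodes, edges):
    """Aggregate neighbor features into each node (mean aggregation)."""
    incoming = {name: [] for name in nodes}
    for src, _rel, tgt in edges:
        incoming[tgt].append(nodes[src])   # message flows src -> tgt
    updated = {}
    for name, feat in nodes.items():
        msgs = incoming[name]
        if not msgs:
            updated[name] = feat
            continue
        # Average incoming messages, then blend with the node's own feature.
        mean_msg = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        updated[name] = [0.5 * f + 0.5 * m for f, m in zip(feat, mean_msg)]
    return updated

print(message_pass(nodes, edges))
```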
1 code implementation • EMNLP (Eval4NLP) 2020 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
In this paper, we propose an evaluation metric for image captioning systems using both image and text information.
no code implementations • dialdoc (ACL) 2022 • Yunah Jang, Dongryeol Lee, Hyung Joo Park, Taegwan Kang, Hwanhee Lee, Hyunkyung Bae, Kyomin Jung
In this paper, we discuss our submission to the MultiDoc2Dial task, which aims to model goal-oriented dialogues grounded in multiple documents.
no code implementations • 17 Oct 2024 • Ingeol Baek, Hwan Chang, Byeongjeong Kim, JiMin Lee, Hwanhee Lee
Retrieval-Augmented Generation (RAG) enhances language models by retrieving and incorporating relevant external knowledge.
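A minimal sketch of the retrieve-then-generate pattern described here: score passages against the query, take the top-k, and prepend them to the prompt. The lexical scorer and prompt template are illustrative stand-ins for a real retriever and LLM, not the paper's method.

```python
# Minimal RAG sketch: retrieve top-k passages by a toy lexical score,
# then prepend them to the generation prompt.

corpus = [
    "RAG systems retrieve documents before generating an answer.",
    "The capital of France is Paris.",
    "Query reformulation rewrites questions for better retrieval.",
]

def score(query: str, passage: str) -> int:
    """Toy relevance score: count of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is the capital of France?"))
```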
no code implementations • 17 Jul 2024 • Jeonghyun Park, Hwanhee Lee
Conversational search seeks to retrieve passages relevant to the given question in conversational question answering.
no code implementations • 17 Jul 2024 • Ingeol Baek, JiMin Lee, Joonho Yang, Hwanhee Lee
We demonstrate that our method is less dependent on the internal parameter knowledge of the model and generates queries with fewer factual inaccuracies.
no code implementations • 17 Jun 2024 • Yonghyun Jun, Hwanhee Lee
Aspect-based sentiment analysis (ABSA) assesses sentiments towards specific aspects within texts, resulting in detailed sentiment tuples.
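To make the notion of "sentiment tuples" concrete, here is a hypothetical input/output pair in the common (aspect, opinion, polarity) format; the exact tuple schema varies by ABSA subtask, and the extraction itself would be done by a trained model.

```python
# Illustrative ABSA output: (aspect term, opinion term, polarity) tuples
# for one example sentence. Only the target structure is shown here.

sentence = "The pasta was delicious but the service was painfully slow."

sentiment_tuples = [
    ("pasta", "delicious", "positive"),
    ("service", "painfully slow", "negative"),
]

for aspect, opinion, polarity in sentiment_tuples:
    print(f"{aspect!r} -> {opinion!r} ({polarity})")
```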
no code implementations • 7 Jun 2024 • Gyutae Park, Seojin Hwang, Hwanhee Lee
As future work, we suggest exploring more effective few-shot learning strategies and investigating the transfer learning capabilities of LLMs for cross-lingual summarization.
no code implementations • 18 Apr 2024 • Minbeom Kim, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung
Therefore, we build a benchmark encompassing daily-life questions, diverse corresponding responses, and majority-vote rankings to train our helpfulness metric.
1 code implementation • 17 Apr 2024 • Joonho Yang, Seunghyun Yoon, Byeongjeong Kim, Hwanhee Lee
These atomic facts represent a more fine-grained unit of information, facilitating detailed understanding and interpretability of the summary's factual inconsistency.
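The idea can be pictured as decomposing a summary into minimal factual units and verifying each one against the source. In the sketch below, the `is_supported` check is a naive token-overlap stand-in for a real NLI/entailment model, and the pre-split facts are hand-written examples.

```python
# Sketch of atomic-fact checking: split a summary into minimal factual
# units and verify each against the source document.

source = "Alice founded Acme in 2010. The company is based in Berlin."
atomic_facts = [
    "Alice founded Acme.",
    "Acme was founded in 2010.",
    "Acme is based in Munich.",   # inconsistent with the source
]

def is_supported(fact: str, source_text: str) -> bool:
    """Crude stand-in for entailment: every long content token of the
    fact (here: tokens over three characters) must appear in the source."""
    tokens = [w.strip(".,").lower() for w in fact.split()]
    return all(t in source_text.lower() for t in tokens if len(t) > 3)

for fact in atomic_facts:
    label = "supported" if is_supported(fact, source) else "inconsistent"
    print(f"{label}: {fact}")
```

Because each unit is checked independently, the verdicts localize exactly which part of the summary is unsupported, which is the interpretability benefit the snippet describes.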
1 code implementation • 22 Feb 2024 • Yumin Kim, Heejae Suh, Mingi Kim, Dongyeon Won, Hwanhee Lee
In this paper, we introduce a new dataset for the Korean dialogue sarcasm detection task, KoCoSa (Korean Context-aware Sarcasm Detection Dataset), which consists of 12.8K daily Korean dialogues and the labels for this task on the last response.
no code implementations • 16 Nov 2023 • Minbeom Kim, Jahyun Koo, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung
As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial.
1 code implementation • 16 Nov 2023 • Yunah Jang, Kang-il Lee, Hyunkyung Bae, Hwanhee Lee, Kyomin Jung
To address these challenges, we propose Iterative Conversational Query Reformulation (IterCQR), a methodology that conducts query reformulation without relying on human rewrites.
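A schematic of such an iterative reformulation loop (not IterCQR's actual training procedure): each round rewrites the query using the conversation history and keeps the candidate that scores best under a retrieval-based reward. The rewriter and reward below are toy stand-ins.

```python
# Schematic iterative query reformulation: rewrite, score, keep the
# best candidate, repeat.

history = ["Who wrote Dune?", "Frank Herbert.", "When was it published?"]

def rewrite(query: str, history: list[str], round_idx: int) -> str:
    # Toy rewriter: resolve the pronoun using an entity from the history.
    return query.replace("it", "Dune") if round_idx == 0 else query

def retrieval_score(query: str) -> int:
    # Toy reward: self-contained queries (no dangling pronouns) score higher.
    return 0 if "it" in query.split() else 1

query = history[-1]
for i in range(3):
    candidate = rewrite(query, history, i)
    if retrieval_score(candidate) >= retrieval_score(query):
        query = candidate

print(query)  # "When was Dune published?"
```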
no code implementations • 9 Nov 2023 • Yerin Hwang, Yongil Kim, Hyunkyung Bae, Jeesoo Bang, Hwanhee Lee, Kyomin Jung
To address the data scarcity issue in conversational question answering (ConvQA), a dialog inpainting method, which utilizes documents to generate ConvQA datasets, has been proposed.
1 code implementation • 23 May 2023 • Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung
We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question.
no code implementations • 21 Dec 2022 • Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding.
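The weighted-decoding half of this combination can be sketched as rescaling the language model's next-token probabilities by a critic's value estimate for each continuation. The toy LM distribution, critic, and temperature below are illustrative assumptions, not CriticControl itself.

```python
# Sketch of critic-weighted decoding: rescale next-token probabilities
# by exp(beta * critic_value), then renormalize.

import math

def lm_next_token_probs(prefix: str) -> dict[str, float]:
    # Toy LM: fixed distribution regardless of prefix.
    return {"great": 0.5, "terrible": 0.4, "okay": 0.1}

def critic_value(prefix: str, token: str) -> float:
    # Toy critic rewarding positive-sentiment continuations.
    return {"great": 1.0, "okay": 0.5, "terrible": 0.0}[token]

def reweight(prefix: str, beta: float = 2.0) -> dict[str, float]:
    probs = lm_next_token_probs(prefix)
    weighted = {t: p * math.exp(beta * critic_value(prefix, t))
                for t, p in probs.items()}
    z = sum(weighted.values())
    return {t: w / z for t, w in weighted.items()}

print(reweight("The movie was"))  # mass shifts toward "great"
```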
no code implementations • 9 May 2022 • Nakyeong Yang, Yunah Jang, Hwanhee Lee, Seohyeong Jung, Kyomin Jung
However, these language models utilize an unnecessarily large number of model parameters, even when used only for a specific task.
1 code implementation • Findings (NAACL) 2022 • Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries.
no code implementations • 18 Apr 2022 • Hwanhee Lee, Cheoneum Park, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Juae Kim, Kyomin Jung
In this paper, we propose RFEC, an efficient factual error correction system based on an entity retrieval post-editing process.
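Entity-based post-editing can be pictured as finding an entity in the claim that is unsupported by the evidence and swapping in the best-matching entity retrieved from the evidence. In this sketch the entity lists are hand-specified and the type matching is crude; a real system would use NER and a learned selector.

```python
# Sketch of entity retrieval post-editing: replace an unsupported
# entity in the claim with a same-type entity from the evidence.

evidence = "The report was published in 2019 by the Berlin office."
claim = "The report was published in 2021 by the Berlin office."

evidence_entities = ["2019", "Berlin"]
claim_entities = ["2021", "Berlin"]

def post_edit(claim: str) -> str:
    for ent in claim_entities:
        if ent not in evidence:
            # Pick a same-type candidate from the evidence (toy: digit vs. word).
            candidates = [e for e in evidence_entities
                          if e.isdigit() == ent.isdigit()]
            if candidates:
                claim = claim.replace(ent, candidates[0])
    return claim

print(post_edit(claim))  # swaps 2021 -> 2019
```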
1 code implementation • 30 Sep 2021 • Minwoo Lee, Seungpil Won, Juae Kim, Hwanhee Lee, Cheoneum Park, Kyomin Jung
Specifically, we employ a two-stage augmentation pipeline to generate new claims and evidence from existing samples.
1 code implementation • Findings (EMNLP) 2021 • Hwanhee Lee, Thomas Scialom, Seunghyun Yoon, Franck Dernoncourt, Kyomin Jung
A Visual-QA system is necessary for QACE-Img.
1 code implementation • ACL 2021 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Kyomin Jung
Also, we observe critical problems with the previous benchmark dataset (i.e., human annotations) for image captioning metrics, and introduce a new collection of human annotations on the generated captions.
1 code implementation • NAACL 2021 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Joongbo Shin, Kyomin Jung
To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets.
no code implementations • 1 Apr 2020 • Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
Audio Visual Scene-aware Dialog (AVSD) is the task of generating a response for a question with a given scene, video, audio, and the history of previous turns in the dialog.
1 code implementation • 29 Nov 2019 • Seunghyun Yoon, Subhadeep Dey, Hwanhee Lee, Kyomin Jung
In this work, we explore the impact of visual modality in addition to speech and text for improving the accuracy of the emotion detection system.
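One simple way to add a modality is late fusion: encode each modality separately, concatenate the embeddings, and classify the fused vector. The toy encoders and linear scorer below are hypothetical; real systems learn these components end to end.

```python
# Sketch of late fusion for multimodal emotion detection: concatenate
# per-modality embeddings, then score emotion classes linearly.

import random
random.seed(0)

def encode(modality_input: str, dim: int = 4) -> list[float]:
    # Toy encoder: deterministic pseudo-embedding per input string.
    rng = random.Random(modality_input)
    return [rng.uniform(-1, 1) for _ in range(dim)]

speech = encode("audio clip")
text = encode("transcript: I can't believe this!")
visual = encode("video frames")

fused = speech + text + visual  # simple concatenation (late fusion)

emotions = ["happy", "sad", "angry", "neutral"]
weights = [[random.uniform(-1, 1) for _ in fused] for _ in emotions]
scores = [sum(w * f for w, f in zip(row, fused)) for row in weights]
print(max(zip(scores, emotions))[1])
```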
1 code implementation • 7 Sep 2018 • Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung
Previous NQG models suffer from the problem that a significant proportion of the generated questions include words from the question target, resulting in unintended questions.
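The remedy can be pictured as masking the answer span out of the passage before encoding, so the generator cannot copy answer words into the question. A toy preprocessing step under that assumption, not the paper's full model:

```python
# Toy answer-separation preprocessing for question generation: replace
# the answer span in the passage with a placeholder token.

passage = "Marie Curie won the Nobel Prize in Physics in 1903."
answer = "Marie Curie"

masked_passage = passage.replace(answer, "<A>")
print(masked_passage)
# -> "<A> won the Nobel Prize in Physics in 1903."
# A generated question like "Who won the Nobel Prize in Physics in 1903?"
# can no longer leak the answer words.
```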
no code implementations • SEMEVAL 2018 • Yanghoon Kim, Hwanhee Lee, Kyomin Jung
In this paper, we propose an attention-based classifier that predicts multiple emotions of a given sentence.
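A minimal sketch of this setup, with hand-picked toy weights: attention-pool token vectors into a sentence vector, then apply an independent sigmoid per emotion so several labels can fire at once.

```python
# Sketch of an attention-based multi-label emotion classifier.

import math

tokens = [[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]]  # toy token embeddings
attn_w = [1.0, -0.5]                            # toy attention vector

# Attention weights via softmax over per-token scores.
scores = [sum(w * x for w, x in zip(attn_w, t)) for t in tokens]
exps = [math.exp(s) for s in scores]
alphas = [e / sum(exps) for e in exps]

# Weighted sum of token embeddings -> sentence embedding.
sent = [sum(a * t[d] for a, t in zip(alphas, tokens)) for d in range(2)]

# One sigmoid per emotion allows multiple labels simultaneously,
# unlike a single softmax over mutually exclusive classes.
emotion_w = {"joy": [1.2, -0.3], "anger": [-0.8, 1.0]}
for emo, w in emotion_w.items():
    logit = sum(wi * si for wi, si in zip(w, sent))
    prob = 1 / (1 + math.exp(-logit))
    print(emo, round(prob, 2), "->", prob > 0.5)
```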