Search Results for author: Hwanhee Lee

Found 16 papers, 7 papers with code

Learning to Select Question-Relevant Relations for Visual Question Answering

no code implementations · NAACL (maiworkshop) 2021 · Jaewoong Lee, Heejoon Lee, Hwanhee Lee, Kyomin Jung

Existing visual question answering (VQA) systems commonly use graph neural networks (GNNs) to extract visual relationships such as semantic or spatial relations.
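The title suggests selecting relations based on their relevance to the question. A minimal sketch of one way this could work, scoring relation-type embeddings against a question embedding with softmax attention (all embeddings, names, and values here are made up for illustration, not taken from the paper):

```python
import math

def dot(u, v):
    # Dot product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def select_relation(question_emb, relation_embs):
    # Score each relation type against the question, then softmax
    # the scores into attention weights over relation types.
    scores = {r: dot(question_emb, e) for r, e in relation_embs.items()}
    m = max(scores.values())
    exps = {r: math.exp(s - m) for r, s in scores.items()}
    z = sum(exps.values())
    return {r: x / z for r, x in exps.items()}

q = [0.9, 0.1]                   # toy embedding of a "where"-type question
rels = {"spatial": [1.0, 0.0],   # spatial relations suit location questions
        "semantic": [0.0, 1.0]}
weights = select_relation(q, rels)
print(max(weights, key=weights.get))  # -> "spatial"
```

Downstream, such weights could gate which relation graphs the GNN attends to for a given question.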

Tasks: Graph Attention, Question Answering +1

Asking Clarification Questions to Handle Ambiguity in Open-Domain QA

no code implementations · 23 May 2023 · Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung

We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question.

Tasks: Open-Domain Question Answering

Critic-Guided Decoding for Controlled Text Generation

no code implementations · 21 Dec 2022 · Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung

In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding.
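The weighted-decoding half of this idea can be sketched in a few lines: a critic re-scores each candidate token's logit toward a control attribute before the next token is chosen. This is a hypothetical toy illustration of weighted decoding in general; the logits, critic scores, and weight are invented and do not reflect CriticControl's actual training or architecture:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a {token: logit} dict.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def critic_guided_step(lm_logits, critic_scores, weight=2.0):
    # Shift each LM logit by a weighted critic score, then pick
    # the highest-probability token (greedy decoding).
    combined = {t: lm_logits[t] + weight * critic_scores.get(t, 0.0)
                for t in lm_logits}
    probs = softmax(combined)
    return max(probs, key=probs.get)

lm_logits = {"good": 1.0, "bad": 1.2, "okay": 0.8}  # base LM prefers "bad"
critic = {"good": 1.0, "bad": -1.0}                 # critic favors positivity
print(critic_guided_step(lm_logits, critic))        # -> "good"
```

In the paper's framing, the critic would itself be trained with reinforcement learning rather than hand-specified as here.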

Tasks: Language Modelling, Reinforcement Learning +2

Task-specific Compression for Multi-task Language Models using Attribution-based Pruning

no code implementations · 9 May 2022 · Nakyeong Yang, Yunah Jang, Hwanhee Lee, Seohyeong Jung, Kyomin Jung

However, these language models utilize an unnecessarily large number of model parameters, even when used only for a specific task.

Tasks: Natural Language Understanding

UMIC: An Unreferenced Metric for Image Captioning via Contrastive Learning

1 code implementation · ACL 2021 · Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Trung Bui, Kyomin Jung

Also, we observe critical problems in the previous benchmark dataset (i.e., human annotations) for image captioning metrics, and introduce a new collection of human annotations on generated captions.

Tasks: Contrastive Learning, Image Captioning +1

DSTC8-AVSD: Multimodal Semantic Transformer Network with Retrieval Style Word Generator

no code implementations · 1 Apr 2020 · Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung

Audio Visual Scene-aware Dialog (AVSD) is the task of generating a response for a question with a given scene, video, audio, and the history of previous turns in the dialog.

Tasks: Retrieval, Word Embeddings

Attentive Modality Hopping Mechanism for Speech Emotion Recognition

1 code implementation · 29 Nov 2019 · Seunghyun Yoon, Subhadeep Dey, Hwanhee Lee, Kyomin Jung

In this work, we explore the impact of visual modality in addition to speech and text for improving the accuracy of the emotion detection system.

Tasks: Emotion Classification, Multimodal Emotion Recognition +1

Improving Neural Question Generation using Answer Separation

no code implementations · 7 Sep 2018 · Yanghoon Kim, Hwanhee Lee, Joongbo Shin, Kyomin Jung

Previous NQG models suffer from the problem that a significant proportion of the generated questions include words from the question target, resulting in unintended questions.
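One simple way to realize answer separation is to mask the answer span in the passage before it is fed to the question generator, so answer words cannot leak into the question. A minimal sketch under that assumption (the mask token, function name, and example sentence are illustrative, not the paper's exact method):

```python
def separate_answer(passage, answer, mask="<a>"):
    # Replace the target answer span with a placeholder token so the
    # generator conditions on the context but cannot copy answer words.
    return passage.replace(answer, mask)

passage = "John Francis O'Hara was elected president of Notre Dame in 1934."
print(separate_answer(passage, "John Francis O'Hara"))
# -> "<a> was elected president of Notre Dame in 1934."
```

The masked passage and the answer would then be encoded separately, keeping the answer available to the model without letting its surface words dominate the generated question.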

Tasks: Question Generation
