Search Results for author: Hyeongu Yun

Found 6 papers, 2 papers with code

Contrastive Learning for Context-aware Neural Machine Translation Using Coreference Information

no code implementations • WMT (EMNLP) 2021 • Yongkeun Hwang, Hyeongu Yun, Kyomin Jung

Context-aware neural machine translation (NMT) incorporates contextual information from surrounding texts, which can improve the translation quality of document-level machine translation.

Contrastive Learning • Coreference Resolution • +6

Modality Alignment between Deep Representations for Effective Video-and-Language Learning

no code implementations • LREC 2022 • Hyeongu Yun, Yongil Kim, Kyomin Jung

Our method directly optimizes Centered Kernel Alignment (CKA) to align video and text embedding representations, which helps the cross-modality attention module combine information across modalities.
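As a rough illustration of the idea (a sketch, not the authors' code), the snippet below computes linear CKA between paired video and text embeddings and uses 1 − CKA as a differentiable alignment loss; the tensor shapes, batch size, and use as an auxiliary loss are assumptions.

    import torch

    def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_video) video embeddings; y: (batch, d_text) text embeddings, row-paired.
        # Returns a scalar in [0, 1]; higher means the two representations are more aligned.
        x = x - x.mean(dim=0, keepdim=True)  # center each feature over the batch
        y = y - y.mean(dim=0, keepdim=True)
        cross = torch.linalg.norm(y.T @ x) ** 2                      # ||Y^T X||_F^2
        return cross / (torch.linalg.norm(x.T @ x) * torch.linalg.norm(y.T @ y))

    # Hypothetical usage: maximize CKA (i.e. minimize 1 - CKA) as an auxiliary alignment loss.
    video_emb = torch.randn(32, 768, requires_grad=True)
    text_emb = torch.randn(32, 768, requires_grad=True)
    loss = 1.0 - linear_cka(video_emb, text_emb)
    loss.backward()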

Question Answering • Video Captioning • +1

ListT5: Listwise Reranking with Fusion-in-Decoder Improves Zero-shot Retrieval

1 code implementation • 24 Feb 2024 • Soyoung Yoon, Eunbi Choi, Jiyeon Kim, Yireun Kim, Hyeongu Yun, Seung-won Hwang

We propose ListT5, a novel reranking approach based on Fusion-in-Decoder (FiD) that handles multiple candidate passages at both training and inference time.
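A minimal sketch of the Fusion-in-Decoder mechanism the abstract refers to, using an off-the-shelf t5-base checkpoint only as a stand-in (ListT5 fine-tunes its own model to emit passage indices in relevance order); the prompt format, passage count, and generation length here are assumptions.

    import torch
    from transformers import AutoTokenizer, T5ForConditionalGeneration
    from transformers.modeling_outputs import BaseModelOutput

    tok = AutoTokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    query = "what causes ocean tides?"
    passages = ["Tides are caused by ...", "The moon orbits the earth ...", "Ocean currents move ..."]

    # FiD: encode each (query, passage) pair independently.
    texts = [f"Query: {query} Index: {i + 1} Context: {p}" for i, p in enumerate(passages)]
    enc = tok(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = model.encoder(**enc).last_hidden_state  # (n_passages, seq_len, dim)

    # Fuse all encoder states into one long sequence and decode a single output;
    # in ListT5 that output is the list of passage indices sorted by relevance.
    fused = BaseModelOutput(last_hidden_state=hidden.reshape(1, -1, hidden.size(-1)))
    out = model.generate(encoder_outputs=fused,
                         attention_mask=enc["attention_mask"].reshape(1, -1),
                         max_new_tokens=10)
    print(tok.decode(out[0], skip_special_tokens=True))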

Retrieval

PR-MCS: Perturbation Robust Metric for MultiLingual Image Captioning

no code implementations • 15 Mar 2023 • Yongil Kim, Yerin Hwang, Hyeongu Yun, Seunghyun Yoon, Trung Bui, Kyomin Jung

Vulnerability to lexical perturbation is a critical weakness of automatic evaluation metrics for image captioning.

Image Captioning

Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following

2 code implementations • 28 Feb 2023 • Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo

In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference.
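The mechanism is simple string prepending at inference time. The sketch below uses a made-up prefix text; the actual TAPP wording and demonstrations are in the paper and are not reproduced here.

    # Hypothetical stand-in for the fixed, task-agnostic prefix; the real TAPP is a longer
    # prompt that is reused verbatim for every task and every model.
    TASK_AGNOSTIC_PREFIX = (
        "Definition: Read the instruction below carefully and produce the best "
        "possible response.\n\n"
    )

    def with_tapp(instruction: str) -> str:
        # Prepend the same prefix to every input at inference time; no weights are updated.
        return TASK_AGNOSTIC_PREFIX + instruction

    prompt = with_tapp("Translate to French: I like apples.")
    # `prompt` is then passed to any LLM's generation / completion API as usual.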

Instruction Following • Zero-shot Generalization

Efficient Transfer Learning Schemes for Personalized Language Modeling using Recurrent Neural Network

no code implementations • 13 Jan 2017 • Seunghyun Yoon, Hyeongu Yun, Yuna Kim, Gyu-tae Park, Kyomin Jung

In this paper, we propose efficient transfer learning methods for training a personalized language model using a recurrent neural network with a long short-term memory (LSTM) architecture.
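One plausible reading of such a scheme, sketched below under stated assumptions (the layer sizes, the frozen/updated split, and the checkpoint name are illustrative, not the paper's exact setup), is to pre-train a general LSTM language model and then adapt only part of it to a single user's text.

    import torch
    import torch.nn as nn

    class LSTMLanguageModel(nn.Module):
        # Word-level LSTM language model; vocabulary and layer sizes are illustrative.
        def __init__(self, vocab_size=10000, emb_dim=256, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens):
            hidden, _ = self.lstm(self.embed(tokens))
            return self.out(hidden)  # next-token logits

    model = LSTMLanguageModel()
    # model.load_state_dict(torch.load("general_lm.pt"))  # hypothetical pre-trained checkpoint

    # Transfer scheme: freeze the general layers, fine-tune only the output layer
    # on the target user's (small) personal corpus.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.out.parameters():
        p.requires_grad = True
    optimizer = torch.optim.Adam(model.out.parameters(), lr=1e-4)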

Language Modelling • Transfer Learning
