Search Results for author: Hee Suk Yoon

Found 10 papers, 4 papers with code

C-TPT: Calibrated Test-Time Prompt Tuning for Vision-Language Models via Text Feature Dispersion

no code implementations21 Mar 2024 Hee Suk Yoon, Eunseop Yoon, Joshua Tian Jin Tee, Mark Hasegawa-Johnson, Yingzhen Li, Chang D. Yoo

Through a series of observations, we find that prompt choice significantly affects calibration in CLIP: prompts that lead to higher text feature dispersion yield better-calibrated predictions.

Test-time Adaptation
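
The dispersion observation in the C-TPT excerpt can be made concrete with a short script. The sketch below is a minimal illustration, assuming the standard OpenAI CLIP package and a single hand-written prompt template; it scores a prompt by the average distance of its per-class text features from their centroid. The template, model choice, and exact dispersion measure are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the official C-TPT code): measuring text feature
# dispersion for a set of CLIP class prompts. The prompt template, model
# name, and the mean-distance-from-centroid measure are illustrative.
import torch
import clip  # assumes the openai/CLIP package is installed

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

classnames = ["cat", "dog", "car"]    # toy label set
template = "a photo of a {}."         # hypothetical prompt template

with torch.no_grad():
    tokens = clip.tokenize([template.format(c) for c in classnames]).to(device)
    feats = model.encode_text(tokens)
    feats = feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize

# Dispersion: average L2 distance of each class text feature from the centroid.
centroid = feats.mean(dim=0, keepdim=True)
dispersion = (feats - centroid).norm(dim=-1).mean()
print(f"text feature dispersion: {dispersion.item():.4f}")
```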

HEAR: Hearing Enhanced Audio Response for Video-grounded Dialogue

1 code implementation15 Dec 2023 Sunjae Yoon, Dahyun Kim, Eunseop Yoon, Hee Suk Yoon, Junyeong Kim, Chang D. Yoo

Video-grounded Dialogue (VGD) aims to answer questions regarding a given multi-modal input comprising video, audio, and dialogue history.

SimPSI: A Simple Strategy to Preserve Spectral Information in Time Series Data Augmentation

1 code implementation10 Dec 2023 Hyun Ryu, Sunjae Yoon, Hee Suk Yoon, Eunseop Yoon, Chang D. Yoo

Our experimental results support that SimPSI considerably enhances the performance of time series data augmentations by preserving core spectral information.

Data Augmentation Time Series
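
To illustrate what "preserving spectral information" means for a time series augmentation, the sketch below applies a toy jittering augmentation to a synthetic signal and scores how similar the magnitude spectra are before and after. The augmentation and the cosine-similarity preservation score are illustrative assumptions, not SimPSI's learned preservation map.

```python
# Minimal sketch (not the official SimPSI code): checking how much of the
# magnitude spectrum a time-series augmentation preserves.
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma=0.1):
    """Toy augmentation: additive Gaussian noise."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def spectral_preservation(x, x_aug):
    """Cosine similarity between magnitude spectra of original and augmented signals."""
    mag, mag_aug = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(x_aug))
    return float(mag @ mag_aug / (np.linalg.norm(mag) * np.linalg.norm(mag_aug)))

t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
print("preservation score:", spectral_preservation(signal, jitter(signal)))
```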

INTapt: Information-Theoretic Adversarial Prompt Tuning for Enhanced Non-Native Speech Recognition

no code implementations25 May 2023 Eunseop Yoon, Hee Suk Yoon, John Harvill, Mark Hasegawa-Johnson, Chang D. Yoo

INTapt is trained simultaneously in two ways: (1) adversarial training to reduce the accent feature dependence between the original input and the prompt-concatenated input, and (2) training to minimize the CTC loss to improve ASR performance on the prompt-concatenated input.

Automatic Speech Recognition (ASR) +2
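
The two-part objective in the INTapt excerpt can be sketched as a weighted sum of a CTC term on the prompt-concatenated input and a dependence penalty between accent features of the original and prompt-concatenated inputs. In the minimal PyTorch sketch below, a cosine-similarity penalty stands in for the paper's information-theoretic adversarial term; all tensors, shapes, and the trade-off weight are placeholders.

```python
# Minimal sketch (not the official INTapt code): CTC loss on the
# prompt-concatenated input plus a simplified accent-dependence penalty.
import torch
import torch.nn.functional as F

batch, time_steps, vocab = 2, 50, 32
logits = torch.randn(time_steps, batch, vocab, requires_grad=True)  # ASR output on prompt-concatenated input
log_probs = logits.log_softmax(-1)
targets = torch.randint(1, vocab, (batch, 10))
input_lengths = torch.full((batch,), time_steps, dtype=torch.long)
target_lengths = torch.full((batch,), 10, dtype=torch.long)

ctc = torch.nn.CTCLoss(blank=0)
ctc_loss = ctc(log_probs, targets, input_lengths, target_lengths)

# Placeholder accent features from the original and prompt-concatenated inputs;
# penalizing their similarity plays the role of the dependence-reduction term.
accent_orig = torch.randn(batch, 128)
accent_prompted = torch.randn(batch, 128)
dependence = F.cosine_similarity(accent_orig, accent_prompted, dim=-1).mean()

lam = 0.1  # hypothetical trade-off weight
total_loss = ctc_loss + lam * dependence
total_loss.backward()
```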

ESD: Expected Squared Difference as a Tuning-Free Trainable Calibration Measure

1 code implementation4 Mar 2023 Hee Suk Yoon, Joshua Tian Jin Tee, Eunseop Yoon, Sunjae Yoon, Gwangsu Kim, Yingzhen Li, Chang D. Yoo

Studies have shown that modern neural networks tend to be poorly calibrated due to over-confident predictions.

SMSMix: Sense-Maintained Sentence Mixup for Word Sense Disambiguation

no code implementations14 Dec 2022 Hee Suk Yoon, Eunseop Yoon, John Harvill, Sunjae Yoon, Mark Hasegawa-Johnson, Chang D. Yoo

To the best of our knowledge, this is the first attempt to apply mixup in NLP while preserving the meaning of a specific word.

Data Augmentation Sentence +1
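
As a rough illustration of mixup that preserves one word's sense, the toy sketch below mixes tokens from a second sentence into the first while never touching the target word. The random-replacement rule and mix ratio are hypothetical and not the paper's procedure.

```python
# Toy sketch (not the SMSMix algorithm): token-level mixing of two sentences
# that leaves the target word untouched.
import random

random.seed(0)

def sense_preserving_mix(tokens_a, tokens_b, target_idx, mix_ratio=0.3):
    """Replace a fraction of tokens in sentence A with tokens from sentence B,
    never touching the target word at target_idx."""
    mixed = list(tokens_a)
    candidates = [i for i in range(len(tokens_a)) if i != target_idx]
    n_replace = int(len(candidates) * mix_ratio)
    for i in random.sample(candidates, n_replace):
        mixed[i] = random.choice(tokens_b)
    return mixed

sent_a = "the bank raised its interest rates yesterday".split()
sent_b = "children played near the river all afternoon".split()
print(sense_preserving_mix(sent_a, sent_b, target_idx=1))  # keep "bank"
```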

Information-Theoretic Text Hallucination Reduction for Video-grounded Dialogue

no code implementations12 Dec 2022 Sunjae Yoon, Eunseop Yoon, Hee Suk Yoon, Junyeong Kim, Chang D. Yoo

Despite the recent success of multi-modal reasoning to generate answer sentences, existing dialogue systems still suffer from a text hallucination problem, which denotes indiscriminate text-copying from input texts without an understanding of the question.

Hallucination Sentence

Selective Query-guided Debiasing for Video Corpus Moment Retrieval

1 code implementation17 Oct 2022 Sunjae Yoon, Ji Woo Hong, Eunseop Yoon, Dahyun Kim, Junyeong Kim, Hee Suk Yoon, Chang D. Yoo

Video moment retrieval (VMR) aims to localize target moments in untrimmed videos pertinent to a given textual query.

Moment Retrieval Retrieval +1
