no code implementations • 15 Mar 2024 • Dongmin Park, Zhaofang Qian, Guangxing Han, Ser-Nam Lim
To measure this precisely, we first present an evaluation benchmark that extends popular multi-modal benchmark datasets with prepended hallucinatory dialogues generated by our novel Adversarial Question Generator, which automatically produces image-related yet adversarial dialogues by applying adversarial attacks to LMMs.
no code implementations • 14 Dec 2023 • Doyoung Kim, Dongmin Park, Yooju Shin, Jihwan Bang, Hwanjun Song, Jae-Gil Lee
We propose a novel framework, DropTop, that suppresses shortcut bias in online continual learning (OCL) while adapting to the varying degree of shortcut bias incurred by a continuously changing environment.
no code implementations • 18 Nov 2023 • Doyoung Kim, Susik Yoon, Dongmin Park, YoungJun Lee, Hwanjun Song, Jihwan Bang, Jae-Gil Lee
We identify the inadequacy of universal and specific prompting in handling these dynamic shifts.
1 code implementation • 13 Oct 2022 • Dongmin Park, Yooju Shin, Jihwan Bang, YoungJun Lee, Hwanjun Song, Jae-Gil Lee
Unlabeled data examples awaiting annotation inevitably contain open-set noise.
1 code implementation • 19 Mar 2022 • Minseok Kim, Hwanjun Song, Yooju Shin, Dongmin Park, Kijung Shin, Jae-Gil Lee
It features an adaptive learning rate for each parameter-interaction pair, inducing a recommender to quickly learn users' up-to-date interests.
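The general idea of a per-parameter adaptive learning rate can be sketched with an Adagrad-style update for a simple matrix-factorization recommender. This is a minimal illustration of the mechanism, not the paper's algorithm; the function name and factorization setup are assumptions for the example.

```python
import numpy as np

def adaptive_mf_step(P, Q, u, i, rating, accum_P, accum_Q, base_lr=0.1, eps=1e-8):
    """One Adagrad-style SGD update on a single (user, item, rating) interaction.

    Each parameter keeps its own accumulated squared gradient, so its
    effective step size adapts individually -- parameters tied to fresh
    interactions can move quickly toward users' up-to-date interests.
    Returns the prediction error before the update.
    """
    err = rating - P[u] @ Q[i]          # prediction error for this interaction
    grad_P = -err * Q[i]                # gradient w.r.t. the user factors
    grad_Q = -err * P[u]                # gradient w.r.t. the item factors
    accum_P[u] += grad_P ** 2           # per-parameter gradient accumulators
    accum_Q[i] += grad_Q ** 2
    P[u] -= base_lr / np.sqrt(accum_P[u] + eps) * grad_P
    Q[i] -= base_lr / np.sqrt(accum_Q[i] + eps) * grad_Q
    return err
```

Repeated calls on the same interaction shrink the prediction error, while the accumulators keep step sizes per-parameter rather than global.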
1 code implementation • NeurIPS 2021 • Dongmin Park, Hwanjun Song, Minseok Kim, Jae-Gil Lee
A deep neural network (DNN) has achieved great success in many machine learning tasks by virtue of its high expressive power.
no code implementations • 8 Dec 2020 • Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
In the seeding phase, the network is updated using all the samples to collect a seed of clean samples.
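A common way to collect such a seed after warm-up training on all samples is the small-loss criterion: examples the network fits with the smallest loss are most likely to be correctly labeled. The sketch below illustrates only that selection step; the paper's seeding phase may differ in detail, and the function name and `clean_ratio` parameter are assumptions.

```python
import numpy as np

def collect_clean_seed(losses, clean_ratio=0.5):
    """Select the indices of the `clean_ratio` fraction of samples with the
    smallest training loss, treating them as a seed of likely-clean samples.

    `losses` is a 1-D array of per-sample losses measured after the network
    has been updated on all samples (the seeding phase).
    """
    losses = np.asarray(losses)
    k = int(len(losses) * clean_ratio)
    return np.argsort(losses)[:k]       # indices of the k smallest losses
```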
1 code implementation • 16 Jul 2020 • Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
Deep learning has achieved remarkable success in numerous domains with the help of large amounts of data.
no code implementations • 19 Nov 2019 • Hwanjun Song, Minseok Kim, Dongmin Park, Jae-Gil Lee
In this paper, we claim that such overfitting can be avoided by "early stopping" the training of a deep neural network before the noisy labels are severely memorized.
no code implementations • 23 Oct 2019 • Dongmin Park, Susik Yoon, Hwanjun Song, Jae-Gil Lee
Learning a good distance measure for distance-based classification in time series leads to significant performance improvement in many tasks.
no code implementations • 25 Sep 2019 • Hwanjun Song, Minseok Kim, Dongmin Park, Jae-Gil Lee
In this paper, we claim that such overfitting can be avoided by "early stopping" the training of a deep neural network before the noisy labels are severely memorized.
no code implementations • ICCV 2019 • Dongmin Park, Seokil Hong, Bohyung Han, Kyoung Mu Lee
Catastrophic forgetting is a critical challenge in training deep neural networks.