no code implementations • 14 Dec 2023 • Doyoung Kim, Dongmin Park, Yooju Shin, Jihwan Bang, Hwanjun Song, Jae-Gil Lee
We propose a novel framework, DropTop, that suppresses the shortcut bias in online continual learning (OCL) while adapting to the varying degree of shortcut bias incurred by a continuously changing environment.
no code implementations • 18 Nov 2023 • Jihwan Bang, Sumyeong Ahn, Jae-Gil Lee
In response to this inquiry, we observe that (1) simply applying a conventional active learning framework to pre-trained VLMs may even degrade performance compared to random selection, because of the class imbalance in labeling candidates, and (2) the knowledge of VLMs can provide hints for achieving balance before labeling.
no code implementations • 18 Nov 2023 • Doyoung Kim, Susik Yoon, Dongmin Park, YoungJun Lee, Hwanjun Song, Jihwan Bang, Jae-Gil Lee
We identify the inadequacy of universal and specific prompting in handling these dynamic shifts.
no code implementations • 25 Mar 2023 • Hwanjun Song, Jihwan Bang
Prompt-OVD is an efficient and effective framework for open-vocabulary object detection that utilizes class embeddings from CLIP as prompts, guiding the Transformer decoder to detect objects in both base and novel classes.
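The open-vocabulary classification step described above can be illustrated with a minimal sketch: detected region features are matched against class text embeddings by cosine similarity, so that any class with a text embedding (base or novel) can be recognized. All names and shapes here are illustrative assumptions, not the paper's actual implementation; in Prompt-OVD the embeddings come from CLIP and the region features from the Transformer decoder.

```python
import numpy as np

def classify_open_vocab(region_feats, class_embeds):
    """Toy open-vocabulary classification head (illustrative sketch).

    region_feats: (num_regions, d) features, stand-ins for decoder outputs.
    class_embeds: (num_classes, d) class text embeddings (CLIP-like).
    Returns the best-matching class index per region and the full
    cosine-similarity matrix.
    """
    # L2-normalize both sides so the dot product is cosine similarity.
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    c = class_embeds / np.linalg.norm(class_embeds, axis=1, keepdims=True)
    sims = r @ c.T  # (num_regions, num_classes)
    return sims.argmax(axis=1), sims

# Toy usage: three orthogonal "class" embeddings, two regions.
class_embeds = np.eye(3)
region_feats = np.array([[0.9, 0.1, 0.0],
                         [0.0, 1.0, 0.2]])
labels, _ = classify_open_vocab(region_feats, class_embeds)
```

Because classification is similarity against text embeddings rather than a fixed classifier layer, adding a novel class only requires adding its embedding row.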
no code implementations • ICCV 2023 • Dahuin Jung, Dongyoon Han, Jihwan Bang, Hwanjun Song
However, we observe that the use of a prompt pool creates a domain scalability problem between pre-training and continual learning.
1 code implementation • 13 Oct 2022 • Dongmin Park, Yooju Shin, Jihwan Bang, YoungJun Lee, Hwanjun Song, Jae-Gil Lee
Unlabeled data examples awaiting annotation inevitably contain open-set noise.
2 code implementations • CVPR 2022 • Jihwan Bang, Hyunseo Koh, Seulki Park, Hwanjun Song, Jung-Woo Ha, Jonghyun Choi
A large body of continual learning (CL) methods, however, assumes data streams with clean labels, while online learning scenarios under noisy data streams remain underexplored.
no code implementations • 29 Sep 2021 • Jihwan Bang, Hyunseo Koh, Seulki Park, Hwanjun Song, Jung-Woo Ha, Jonghyun Choi
Specifically, we argue the importance of both diversity and purity of examples in the episodic memory of continual learning models.
1 code implementation • CVPR 2021 • Jihwan Bang, Heesu Kim, Youngjoon Yoo, Jung-Woo Ha, Jonghyun Choi
The prevalent scenario of continual learning, however, assumes disjoint sets of classes as tasks, which is less realistic and rather artificial.
no code implementations • 19 Jun 2020 • Jihwan Bang, Heesu Kim, Youngjoon Yoo, Jung-Woo Ha
The cost of annotating transcriptions for large speech corpora becomes a bottleneck to fully exploiting the potential of deep neural network-based automatic speech recognition models.
8 code implementations • 20 Nov 2019 • Hyojin Park, Lars Lowe Sjösund, Youngjoon Yoo, Nicolas Monet, Jihwan Bang, Nojun Kwak
To solve the first problem, we introduce the new extremely lightweight portrait segmentation model SINet, containing an information blocking decoder and spatial squeeze modules.
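The information blocking idea mentioned above can be sketched in a few lines: when fusing low-level features into the decoder, regions where the coarse prediction is already confident are suppressed, so the decoder focuses detail refinement on uncertain regions. This is a simplified, hypothetical sketch under assumed shapes; SINet's actual decoder uses learned layers, and the function and variable names here are not from the paper.

```python
import numpy as np

def information_blocking(low_level_feat, coarse_prob):
    """Toy information-blocking gate (illustrative sketch).

    low_level_feat: (H, W, C) low-level feature map.
    coarse_prob:    (H, W) coarse foreground probability in [0, 1].
    Confident pixels (prob near 0 or 1) are blocked; uncertain pixels
    (prob near 0.5) let low-level detail pass through.
    """
    confidence = np.abs(coarse_prob - 0.5) * 2.0  # 1 = certain, 0 = uncertain
    gate = 1.0 - confidence                       # blocking weight per pixel
    return low_level_feat * gate[..., None]

# Toy usage: a 1x2 map where the left pixel is fully certain (prob 1.0)
# and the right pixel is maximally uncertain (prob 0.5).
feat = np.ones((1, 2, 4))
prob = np.array([[1.0, 0.5]])
gated = information_blocking(feat, prob)
```

The design intent, as described in the abstract, is to prevent confidently segmented regions from being disturbed by noisy low-level detail while still refining ambiguous boundaries.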
Ranked #1 on Portrait Segmentation on EG1800
3 code implementations • 8 Aug 2019 • Hyojin Park, Lars Lowe Sjösund, Youngjoon Yoo, Jihwan Bang, Nojun Kwak
In our qualitative and quantitative analysis on the EG1800 dataset, we show that our method outperforms various existing lightweight segmentation models.