1 code implementation • 1 Jan 2025 • Suho Park, SuBeen Lee, Hyun Seok Seong, Jaejoon Yoo, Jae-Pil Heo
Specifically, we construct support and query prototypes with SAM features and distinguish query prototypes of target regions based on ResNet features.
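A minimal sketch of the prototype step described above, assuming masked average pooling over SAM encoder features and cosine-similarity matching of query prototypes against the support foreground prototype; the tensor names, shapes, and matching rule are illustrative assumptions, not the paper's exact pipeline.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(features, mask):
    """Build a prototype by averaging features inside a binary mask.

    features: (C, H, W) feature map (e.g. from a SAM image encoder)
    mask:     (H, W) binary foreground mask
    """
    mask = mask.unsqueeze(0).float()                    # (1, H, W)
    pooled = (features * mask).sum(dim=(1, 2))          # (C,)
    return pooled / mask.sum().clamp(min=1.0)

def score_query_prototypes(query_protos, support_fg_proto):
    """Cosine similarity between each query prototype and the support
    foreground prototype; higher scores suggest target regions."""
    return F.cosine_similarity(query_protos, support_fg_proto.unsqueeze(0), dim=1)

# Toy example with random tensors standing in for SAM / ResNet features.
sam_feat_support = torch.randn(256, 64, 64)
support_mask = torch.rand(64, 64) > 0.5
support_proto = masked_average_pooling(sam_feat_support, support_mask)

query_protos = torch.randn(10, 256)   # e.g. one prototype per query region
print(score_query_prototypes(query_protos, support_proto))
```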
1 code implementation • 27 Dec 2024 • SuBeen Lee, Jiyeon Han, Soyeon Kim, Jaesik Choi
This study proposes a novel approach for generating diverse rare samples from high-resolution image datasets with pretrained GANs.
no code implementations • 19 Aug 2024 • Yerim Jeon, SuBeen Lee, JiHwan Kim, Jae-Pil Heo
Few-shot object counting has garnered significant attention for its practicality as it aims to count target objects in a query image based on given exemplars without the need for additional training.
1 code implementation • 17 Jul 2024 • Hyun Seok Seong, WonJun Moon, SuBeen Lee, Jae-Pil Heo
Then, considering the distribution of positive samples, we relocate the proxy anchor towards areas with a higher concentration of positives and adjust the positiveness boundary based on the propagation degree of the proxy anchor.
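A rough sketch of the relocation idea above: the proxy anchor is pulled toward the mean of currently positive embeddings, and the positiveness boundary is adjusted by how far the proxy moved. The update rule, step size, and boundary adjustment are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def relocate_proxy(proxy, embeddings, boundary, step=0.5):
    """Pull a proxy anchor toward the mean of embeddings whose cosine
    similarity exceeds the current positiveness boundary, then relax the
    boundary in proportion to how far the proxy was displaced."""
    sims = F.cosine_similarity(embeddings, proxy.unsqueeze(0), dim=1)
    positives = embeddings[sims > boundary]
    if len(positives) == 0:
        return proxy, boundary
    target = F.normalize(positives.mean(dim=0), dim=0)
    new_proxy = F.normalize((1 - step) * proxy + step * target, dim=0)
    # "Propagation degree": displacement of the proxy on the hypersphere.
    displacement = 1 - F.cosine_similarity(proxy, new_proxy, dim=0)
    new_boundary = boundary - displacement.item()
    return new_proxy, new_boundary

proxy = F.normalize(torch.randn(128), dim=0)
embeddings = F.normalize(torch.randn(500, 128), dim=1)
proxy, boundary = relocate_proxy(proxy, embeddings, boundary=0.3)
```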
1 code implementation • 16 Jul 2024 • Gilhan Park, WonJun Moon, SuBeen Lee, Tae-Young Kim, Jae-Pil Heo
Additionally, in the second approach, initializing the new-class classifier with background knowledge triggers a similar background shift, but toward the new classes.
Ranked #1 on Overlapped 5-3 on PASCAL VOC 2012
1 code implementation • 26 Dec 2023 • Suho Park, SuBeen Lee, Sangeek Hyun, Hyun Seok Seong, Jae-Pil Heo
Based on these two scores, we define a query background relevant score that captures the similarity between the backgrounds of the query and the support, and utilize it to scale support background features to adaptively restrict the impact of disruptive support backgrounds.
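A minimal sketch of using such a background-relevance score to scale support background features; the prototype inputs, the cosine-similarity score, and the clamping are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def scale_support_background(support_bg_feats, support_bg_proto, query_bg_proto):
    """Down-weight support background features by a query-background
    relevance score: the cosine similarity between the query and support
    background prototypes."""
    relevance = F.cosine_similarity(query_bg_proto, support_bg_proto, dim=0)
    relevance = relevance.clamp(min=0.0)   # suppress dissimilar (disruptive) backgrounds
    return support_bg_feats * relevance

support_bg_feats = torch.randn(256, 64, 64)
support_bg_proto = torch.randn(256)
query_bg_proto = torch.randn(256)
scaled = scale_support_background(support_bg_feats, support_bg_proto, query_bg_proto)
```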
2 code implementations • 15 Nov 2023 • WonJun Moon, Sangeek Hyun, SuBeen Lee, Jae-Pil Heo
Dummy tokens conditioned on the text query take a portion of the attention weights, preventing irrelevant video clips from being represented by the text query.
Ranked #3 on Highlight Detection on TvSum
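A minimal sketch of dummy tokens in cross-attention: extra tokens are appended to the text-side keys and values so that query-irrelevant video clips can place their attention mass on them instead of on the text tokens. For brevity the dummies here are plain learnable parameters rather than being conditioned on the text query, and all module names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DummyTokenCrossAttention(nn.Module):
    """Cross-attention from video clip tokens to text tokens, with extra
    learnable dummy tokens appended to the keys/values."""

    def __init__(self, dim=256, num_heads=8, num_dummies=3):
        super().__init__()
        self.dummies = nn.Parameter(torch.randn(num_dummies, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, video_tokens, text_tokens):
        # video_tokens: (B, Nv, D), text_tokens: (B, Nt, D)
        b = video_tokens.size(0)
        dummies = self.dummies.unsqueeze(0).expand(b, -1, -1)   # (B, Nd, D)
        kv = torch.cat([text_tokens, dummies], dim=1)           # (B, Nt + Nd, D)
        attended, _ = self.attn(video_tokens, kv, kv)
        return attended

layer = DummyTokenCrossAttention()
out = layer(torch.randn(2, 75, 256), torch.randn(2, 20, 256))
```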
1 code implementation • 28 Jul 2023 • SuBeen Lee, WonJun Moon, Hyun Seok Seong, Jae-Pil Heo
While TDM influences high-level feature maps through task-adaptive calibration of channel-wise importance, we further introduce the Instance Attention Module (IAM), which extends QAM to intermediate layers of the feature extractor and highlights object-relevant channels for each instance.
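An SE-style sketch of instance-wise channel re-weighting on an intermediate feature map, intended only to illustrate highlighting object-relevant channels per instance; it is not the actual IAM design.

```python
import torch
import torch.nn as nn

class InstanceChannelAttention(nn.Module):
    """Per-instance channel re-weighting from globally pooled statistics."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W), one instance per sample
        weights = self.mlp(x.mean(dim=(2, 3)))  # (B, C) per-instance channel importance
        return x * weights[:, :, None, None]    # emphasize object-relevant channels

feat = torch.randn(4, 64, 14, 14)               # intermediate-layer features
out = InstanceChannelAttention(64)(feat)
```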
1 code implementation • CVPR 2023 • Hyun Seok Seong, WonJun Moon, SuBeen Lee, Jae-Pil Heo
Specifically, we propagate the loss to local hidden positives, i.e., semantically similar nearby patches, in proportion to predefined similarity scores.
Ranked #3 on Unsupervised Semantic Segmentation on Potsdam-3
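A toy sketch of propagating a per-patch loss to nearby patches in proportion to feature similarity; the 4-neighbourhood, the clamped similarity weights, and the wrap-around behaviour of torch.roll are simplifications for illustration, not the paper's objective.

```python
import torch
import torch.nn.functional as F

def propagate_loss_to_local_positives(per_patch_loss, features):
    """Add each patch's loss to its 4-neighbours, weighted by feature
    similarity, so semantically similar nearby patches ("local hidden
    positives") also receive part of the training signal."""
    feats = F.normalize(features, dim=1)                  # (B, C, H, W)
    total = per_patch_loss.clone()                        # (B, H, W)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        shifted_feats = torch.roll(feats, shifts=(dy, dx), dims=(2, 3))
        shifted_loss = torch.roll(per_patch_loss, shifts=(dy, dx), dims=(1, 2))
        sim = (feats * shifted_feats).sum(dim=1)          # (B, H, W) cosine similarity
        total = total + sim.clamp(min=0.0) * shifted_loss  # propagate in proportion to similarity
    return total.mean()

loss_map = torch.rand(2, 16, 16)
features = torch.randn(2, 128, 16, 16)
loss = propagate_loss_to_local_positives(loss_map, features)
```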
1 code implementation • CVPR 2022 • SuBeen Lee, WonJun Moon, Jae-Pil Heo
Specifically, TDM learns task-specific channel weights based on two novel components: Support Attention Module (SAM) and Query Attention Module (QAM).
Ranked #11 on Few-Shot Image Classification on CUB 200 5-way 5-shot (using extra training data)
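A schematic sketch of task-specific channel weighting with two branches standing in for the Support Attention Module (SAM) and the Query Attention Module (QAM); the pooling, the way the two weights are combined, and the module structure are assumptions for illustration, not TDM itself.

```python
import torch
import torch.nn as nn

class TaskChannelWeighting(nn.Module):
    """Combine support-derived and query-derived channel weights into a
    single task-specific weight vector and rescale both feature sets."""

    def __init__(self, channels):
        super().__init__()
        self.support_branch = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.query_branch = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, support_feats, query_feats):
        # support_feats: (N*K, C, H, W) for an N-way K-shot task, query_feats: (Q, C, H, W)
        w_support = self.support_branch(support_feats.mean(dim=(0, 2, 3)))  # (C,)
        w_query = self.query_branch(query_feats.mean(dim=(0, 2, 3)))        # (C,)
        weights = w_support * w_query                                       # task-specific channel weights
        return (support_feats * weights[None, :, None, None],
                query_feats * weights[None, :, None, None])

sam_qam = TaskChannelWeighting(64)
s, q = sam_qam(torch.randn(25, 64, 5, 5), torch.randn(15, 64, 5, 5))
```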