Search Results for author: SuBeen Lee

Found 10 papers, 9 papers with code

Foreground-Covering Prototype Generation and Matching for SAM-Aided Few-Shot Segmentation

1 code implementation · 1 Jan 2025 · Suho Park, SuBeen Lee, Hyun Seok Seong, Jaejoon Yoo, Jae-Pil Heo

Specifically, we construct support and query prototypes with SAM features and distinguish query prototypes of target regions based on ResNet features.
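The prototype construction described above can be pictured as masked average pooling followed by cosine matching. Below is a minimal, hypothetical sketch; the shapes and names (`masked_avg_prototype`, `sam_support`) are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch: prototypes via masked average pooling, matched by cosine
# similarity. Shapes and variable names are illustrative assumptions.
import torch
import torch.nn.functional as F

def masked_avg_prototype(feat, mask):
    """feat: (C, H, W) feature map; mask: (H, W) binary foreground mask."""
    mask = mask.float().unsqueeze(0)                          # (1, H, W)
    return (feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1e-6)

C, H, W = 64, 16, 16
sam_support = torch.randn(C, H, W)            # stand-in SAM features (support)
support_mask = torch.rand(H, W) > 0.5         # stand-in support foreground mask
support_proto = masked_avg_prototype(sam_support, support_mask)   # (C,)

sam_query = torch.randn(C, H, W)              # stand-in SAM features (query)
# Similarity of every query location to the support prototype; high-scoring
# regions are candidates for target (foreground) query prototypes.
sim = F.cosine_similarity(sam_query, support_proto.view(C, 1, 1), dim=0)
print(sim.shape)                              # (16, 16) similarity map
```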

Diverse Rare Sample Generation with Pretrained GANs

1 code implementation · 27 Dec 2024 · SuBeen Lee, Jiyeon Han, Soyeon Kim, Jaesik Choi

This study proposes a novel approach for generating diverse rare samples from high-resolution image datasets with pretrained GANs.

Density Estimation · Diversity
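As a generic illustration of the underlying idea (rare samples lie in low-density regions of the data distribution), the toy sketch below draws latents through a stand-in generator and ranks samples by low density under a Gaussian fit to their embeddings. Every component here is an assumption; this is not the paper's algorithm:

```python
# Toy sketch: score GAN samples by (negative) log-density of their embeddings
# and keep the lowest-density ones as "rare". All components are stand-ins.
import torch

torch.manual_seed(0)
W1 = torch.randn(32, 128)
generator = lambda z: torch.tanh(z @ W1)   # stand-in for a pretrained GAN
embed = lambda x: x[:, :16]                # stand-in feature extractor

z = torch.randn(256, 32)                   # latent samples
feats = embed(generator(z))                # (256, 16) embeddings

mean = feats.mean(0)
cov = torch.cov(feats.T) + 1e-3 * torch.eye(16)
dist = torch.distributions.MultivariateNormal(mean, cov)
rarity = -dist.log_prob(feats)             # low density => high rarity

rare_idx = rarity.topk(8).indices          # keep the 8 rarest samples
print(rare_idx.tolist())
```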

Mutually-Aware Feature Learning for Few-Shot Object Counting

no code implementations · 19 Aug 2024 · Yerim Jeon, SuBeen Lee, JiHwan Kim, Jae-Pil Heo

Few-shot object counting has garnered significant attention for its practicality, as it aims to count target objects in a query image from a few given exemplars without additional training.

Object Counting
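A bare-bones picture of exemplar-based counting: correlate a pooled exemplar feature with the query feature map and integrate the response. The pooling and correlation choices below are illustrative assumptions; the paper's mutually-aware feature learning is not reproduced:

```python
# Minimal sketch of exemplar-based counting via feature correlation.
import torch
import torch.nn.functional as F

C, H, W = 32, 24, 24
query_feat = torch.randn(1, C, H, W)      # backbone features of the query image
exemplar_feat = torch.randn(1, C, 4, 4)   # features cropped from one exemplar box

kernel = exemplar_feat.mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1) pooled exemplar
corr = F.conv2d(query_feat, kernel)                     # (1, 1, H, W) similarity map
density = torch.relu(corr).squeeze()
print(float(density.sum()))   # a crude count estimate from the integrated map
```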

Progressive Proxy Anchor Propagation for Unsupervised Semantic Segmentation

1 code implementation · 17 Jul 2024 · Hyun Seok Seong, WonJun Moon, SuBeen Lee, Jae-Pil Heo

Then, considering the distribution of positive samples, we relocate the proxy anchor towards areas with a higher concentration of positives and adjust the positiveness boundary based on the propagation degree of the proxy anchor.

Contrastive Learning · Segmentation · +1
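The relocation-and-boundary step from the abstract can be sketched as follows; the update rule, the threshold `tau`, and the propagation measure are all assumptions chosen for illustration, not the paper's exact formulation:

```python
# Sketch: move a proxy anchor toward the mass of its positives, then relax
# the positiveness boundary by how far the anchor propagated.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
feats = F.normalize(torch.randn(100, 64), dim=1)   # patch/pixel embeddings
anchor = F.normalize(torch.randn(64), dim=0)       # current proxy anchor
tau = 0.1                                          # positiveness boundary (cosine)

sims = feats @ anchor                              # similarity to the anchor
positives = feats[sims > tau]                      # samples inside the boundary

if len(positives) > 0:
    new_anchor = F.normalize(positives.mean(0), dim=0)   # move toward positives
    propagation = 1.0 - float(anchor @ new_anchor)       # how far the anchor moved
    tau = max(0.0, tau - propagation)                    # relax boundary accordingly
    anchor = new_anchor
print(len(positives), round(tau, 3))
```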

Mitigating Background Shift in Class-Incremental Semantic Segmentation

1 code implementation · 16 Jul 2024 · Gilhan Park, WonJun Moon, SuBeen Lee, Tae-Young Kim, Jae-Pil Heo

Additionally, in the case of the second approach, initializing the new class classifier with background knowledge triggers a similar background shift issue, but towards the new classes.

Disjoint 15-1 · Disjoint 15-5 · +12
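The initialization step the abstract refers to, copying background-classifier weights into the new-class slots of an extended segmentation head, looks roughly like the sketch below; shapes and layer choices are illustrative:

```python
# Sketch: extend a 1x1-conv segmentation classifier with new classes whose
# weights are initialized from the background classifier (index 0). The
# abstract notes this common transfer itself shifts background predictions
# toward the new classes.
import torch
import torch.nn as nn

old_classes, new_classes, feat_dim = 16, 5, 256   # index 0 = background
old_head = nn.Conv2d(feat_dim, old_classes, kernel_size=1)
new_head = nn.Conv2d(feat_dim, old_classes + new_classes, kernel_size=1)

with torch.no_grad():
    new_head.weight[:old_classes] = old_head.weight   # keep old-class weights
    new_head.bias[:old_classes] = old_head.bias
    # Background-knowledge initialization for the new classes:
    new_head.weight[old_classes:] = old_head.weight[0:1].expand(new_classes, -1, -1, -1)
    new_head.bias[old_classes:] = old_head.bias[0]
print(new_head.weight.shape)   # (21, 256, 1, 1)
```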

Task-Disruptive Background Suppression for Few-Shot Segmentation

1 code implementation · 26 Dec 2023 · Suho Park, SuBeen Lee, Sangeek Hyun, Hyun Seok Seong, Jae-Pil Heo

Based on these two scores, we define a query background relevant score that captures the similarity between the backgrounds of the query and the support, and use it to scale support background features, adaptively restricting the impact of disruptive support backgrounds.
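A hedged sketch of the scaling idea: score each support background feature by its similarity to a query background prototype and down-weight the dissimilar (task-disruptive) ones. The scoring function and names are assumptions, not the paper's exact formulation:

```python
# Sketch: suppress support background features by a query-relevance score.
import torch
import torch.nn.functional as F

C, N = 64, 50
support_bg = F.normalize(torch.randn(N, C), dim=1)    # support background features
query_bg_proto = F.normalize(torch.randn(C), dim=0)   # query background prototype

# Query background relevant score: similarity of each support background
# feature to the query background, mapped to [0, 1].
score = (support_bg @ query_bg_proto + 1) / 2         # (N,)

# Scale support backgrounds so dissimilar (disruptive) ones are suppressed.
suppressed = support_bg * score.unsqueeze(1)
print(suppressed.shape)
```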

Correlation-Guided Query-Dependency Calibration for Video Temporal Grounding

2 code implementations · 15 Nov 2023 · WonJun Moon, Sangeek Hyun, SuBeen Lee, Jae-Pil Heo

Dummy tokens conditioned on the text query take portions of the attention weights, preventing irrelevant video clips from being represented by the text query.

Highlight Detection · Moment Retrieval · +3
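The dummy-token mechanism can be pictured as appending text-conditioned dummy tokens to the keys/values of video-to-text cross-attention, so irrelevant clips can attend to the dummies instead of the text. The conditioning and dimensions below are simplified assumptions:

```python
# Sketch: cross-attention whose keys/values are text tokens plus
# text-conditioned dummy tokens that absorb attention from irrelevant clips.
import torch
import torch.nn as nn

d, n_clips, n_words, n_dummy = 128, 40, 12, 4
video = torch.randn(1, n_clips, d)     # clip features (attention queries)
text = torch.randn(1, n_words, d)      # text token features

# Dummy tokens conditioned on the text query (here: a learned base plus the
# pooled sentence feature; the conditioning is an assumption).
dummy_base = nn.Parameter(torch.randn(n_dummy, d))
dummies = dummy_base.unsqueeze(0) + text.mean(1, keepdim=True)   # (1, n_dummy, d)

kv = torch.cat([text, dummies], dim=1)          # keys/values = text + dummies
attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
out, weights = attn(video, kv, kv)

# Portion of attention absorbed by the dummy tokens, per clip.
dummy_share = weights[..., n_words:].sum(-1)
print(dummy_share.shape)                        # (1, n_clips)
```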

Task-Oriented Channel Attention for Fine-Grained Few-Shot Classification

1 code implementation · 28 Jul 2023 · SuBeen Lee, WonJun Moon, Hyun Seok Seong, Jae-Pil Heo

While TDM influences high-level feature maps through task-adaptive calibration of channel-wise importance, we further introduce the Instance Attention Module (IAM), an extension of QAM that operates in intermediate layers of the feature extractor to highlight object-relevant channels on a per-instance basis.

Cross-Domain Few-Shot · Fine-Grained Image Classification
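A minimal sketch of instance-wise channel attention in an intermediate layer, in the spirit of the IAM described above; the squeeze-and-excitation-style gate is an assumption, not the paper's module:

```python
# Sketch: per-instance channel reweighting from globally pooled statistics.
import torch
import torch.nn as nn

C, H, W = 256, 14, 14
feat = torch.randn(8, C, H, W)          # intermediate features, 8 instances

gate = nn.Sequential(                   # SE-style gate (assumption)
    nn.Linear(C, C // 16), nn.ReLU(),
    nn.Linear(C // 16, C), nn.Sigmoid(),
)

pooled = feat.mean(dim=(2, 3))          # (8, C) per-instance channel statistics
weights = gate(pooled)                  # (8, C) instance-wise channel weights
recalibrated = feat * weights.view(8, C, 1, 1)   # highlight relevant channels
print(recalibrated.shape)
```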

Leveraging Hidden Positives for Unsupervised Semantic Segmentation

1 code implementation · CVPR 2023 · Hyun Seok Seong, WonJun Moon, SuBeen Lee, Jae-Pil Heo

Specifically, we add a loss that propagates to local hidden positives (semantically similar nearby patches) in proportion to the predefined similarity scores.

Contrastive Learning · Unsupervised Semantic Segmentation
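The propagation idea can be sketched as weighting each patch's loss contribution by patch-to-patch similarity scores; the threshold and weighting below are illustrative assumptions (spatial locality is ignored for brevity):

```python
# Sketch: propagate per-patch loss to similar patches, weighted by similarity.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
P, C = 49, 64                                      # a 7x7 grid of patches
patches = F.normalize(torch.randn(P, C), dim=1)    # patch embeddings
per_patch_loss = torch.rand(P)                     # stand-in base loss per patch

sim = patches @ patches.T                          # (P, P) similarity scores
mask = (sim > 0.2).float() * (1 - torch.eye(P))    # hidden positives, self excluded
weights = sim * mask

# Propagate each patch's loss to its hidden positives in proportion to the
# similarity scores, then combine with the base loss.
propagated = (weights * per_patch_loss.unsqueeze(0)).sum(1) \
    / weights.sum(1).clamp(min=1e-6)
total_loss = (per_patch_loss + 0.5 * propagated).mean()
print(float(total_loss))
```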

Task Discrepancy Maximization for Fine-grained Few-Shot Classification

1 code implementation · CVPR 2022 · SuBeen Lee, WonJun Moon, Jae-Pil Heo

Specifically, TDM learns task-specific channel weights based on two novel components: Support Attention Module (SAM) and Query Attention Module (QAM).

Ranked #11 on Few-Shot Image Classification on CUB 200 5-way 5-shot (using extra training data)

Classification · Few-Shot Image Classification
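In the spirit of TDM, the sketch below lets a support-side and a query-side head each score channels and averages them into task-specific channel weights; both heads are simplified assumptions, not the paper's SAM/QAM:

```python
# Sketch: task-specific channel weights from support- and query-side scoring.
import torch
import torch.nn as nn

C, n_way = 64, 5
support = torch.randn(n_way, C)   # pooled support features, one per class
query = torch.randn(1, C)         # pooled query feature

sam_head = nn.Sequential(nn.Linear(C, C), nn.Softmax(dim=-1))  # Support Attention Module (simplified)
qam_head = nn.Sequential(nn.Linear(C, C), nn.Softmax(dim=-1))  # Query Attention Module (simplified)

support_w = sam_head(support).mean(0)   # channels discriminative across classes
query_w = qam_head(query).squeeze(0)    # channels relevant to the query
task_w = C * (support_w + query_w) / 2  # task-specific channel weights (mean ~1)

reweighted_query = query * task_w       # task-adaptive channel calibration
print(reweighted_query.shape)
```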
