Search Results for author: Hyeongjun Kwon

Found 5 papers, 3 papers with code

Improving Visual Recognition with Hyperbolical Visual Hierarchy Mapping

1 code implementation · 1 Apr 2024 · Hyeongjun Kwon, Jinhyun Jang, Jin Kim, Kwonyoung Kim, Kwanghoon Sohn

Visual scenes are naturally organized in a hierarchy, where a coarse semantic is recursively comprised of several fine details.

Image Classification · Scene Understanding

Layer-wise Auto-Weighting for Non-Stationary Test-Time Adaptation

1 code implementation · 10 Nov 2023 · Junyoung Park, Jin Kim, Hyeongjun Kwon, Ilhoon Yoon, Kwanghoon Sohn

Given the inevitability of domain shifts during inference in real-world applications, test-time adaptation (TTA) is essential for model adaptation after deployment.

Test-time Adaptation

Knowing Where to Focus: Event-aware Transformer for Video Grounding

1 code implementation · ICCV 2023 · Jinhyun Jang, Jungin Park, Jin Kim, Hyeongjun Kwon, Kwanghoon Sohn

Recent DETR-based video grounding models learn moment queries to predict moment timestamps directly, without hand-crafted components such as pre-defined proposals or non-maximum suppression.

Moment Queries · Sentence · +1

Probabilistic Prompt Learning for Dense Prediction

no code implementations · CVPR 2023 · Hyeongjun Kwon, Taeyong Song, Somi Jeong, Jin Kim, Jinhyun Jang, Kwanghoon Sohn

Recent progress in deterministic prompt learning has made it a promising alternative for various downstream vision tasks, enabling models to learn powerful visual representations with the help of pre-trained vision-language models.

Attribute · Text Matching

Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation

no code implementations · 9 Nov 2021 · Hyeongjun Kwon, Somi Jeong, Sunok Kim, Kwanghoon Sohn

We address the problem of few-shot semantic segmentation (FSS), which aims to segment novel class objects in a target image with a few annotated samples.

Contrastive Learning · Few-Shot Semantic Segmentation · +2
