1 code implementation • 30 Nov 2023 • Pilhyeon Lee, Hyeran Byun
However, they suffer from center misalignment arising from the inherent ambiguity of moment centers, leading to inaccurate predictions.
Ranked #1 on Natural Language Moment Retrieval on TACoS
no code implementations • ICCV 2023 • Seogkyu Jeon, Bei Liu, Pilhyeon Lee, Kibeom Hong, Jianlong Fu, Hyeran Byun
In the absence of target-domain data, the textual description of the target domain and vision-language models, e.g., CLIP, are utilized to effectively guide the generator.
1 code implementation • ICCV 2023 • Kibeom Hong, Seogkyu Jeon, Junsoo Lee, Namhyuk Ahn, Kunhee Kim, Pilhyeon Lee, Daesik Kim, Youngjung Uh, Hyeran Byun
To deliver the artistic expression of the target style, recent studies exploit the attention mechanism owing to its ability to map the local patches of the style image to the corresponding patches of the content image.
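The attention mechanism described above can be illustrated with a minimal sketch (not the paper's actual implementation): each content patch attends over all style patches and aggregates their features, weighted by similarity. Shapes and the dot-product similarity are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_patch_mapping(content_feats, style_feats):
    """Map local style patches to content patches via attention.

    content_feats: (Nc, D) features of content-image patches
    style_feats:   (Ns, D) features of style-image patches
    Returns (Nc, D): for each content patch, a similarity-weighted
    combination of style-patch features.
    """
    sim = content_feats @ style_feats.T      # (Nc, Ns) patch-to-patch similarity
    attn = softmax(sim, axis=1)              # distribution over style patches per content patch
    return attn @ style_feats                # stylized features for each content patch
```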
no code implementations • CVPR 2023 • Pilhyeon Lee, Taeoh Kim, Minho Shim, Dongyoon Wee, Hyeran Byun
Temporal action detection aims to predict the time intervals and the classes of action instances in the video.
1 code implementation • 20 Jan 2023 • Pilhyeon Lee, Seogkyu Jeon, Sunhee Hwang, Minjung Shin, Hyeran Byun
In this paper, we introduce a novel and practical problem setup, namely source-free subject adaptation, where the source subject data are unavailable and only the pre-trained model parameters are provided for subject adaptation.
no code implementations • 8 Aug 2022 • Sungpil Kho, Pilhyeon Lee, Wonyoung Lee, Minsong Ki, Hyeran Byun
To this end, previous methods adopt a common pipeline: they generate pseudo masks from class activation maps (CAMs) and use these masks to supervise segmentation networks.
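The CAM-to-pseudo-mask pipeline mentioned above can be sketched as follows; this is a generic illustration of the common recipe, not this paper's method, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def class_activation_map(feature_map, fc_weights, class_idx):
    """Compute a CAM by weighting conv features with classifier weights.

    feature_map: (C, H, W) final conv features
    fc_weights:  (num_classes, C) classification-layer weights
    """
    cam = np.tensordot(fc_weights[class_idx], feature_map, axes=(0, 0))  # (H, W)
    cam = np.maximum(cam, 0)                  # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                 # normalize to [0, 1]
    return cam

def cam_to_pseudo_mask(cam, threshold=0.3):
    """Binarize the CAM into a pseudo segmentation mask."""
    return (cam >= threshold).astype(np.uint8)
```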
no code implementations • 20 Jul 2022 • Mirae Do, Seogkyu Jeon, Pilhyeon Lee, Kibeom Hong, Yu-seung Ma, Hyeran Byun
Domain adaptation for object detection (DAOD) has recently drawn much attention owing to its capability of detecting target objects without any annotations.
1 code implementation • CVPR 2022 • Sungho Park, Jewook Lee, Pilhyeon Lee, Sunhee Hwang, Dohyung Kim, Hyeran Byun
Through extensive experiments on CelebA and UTK Face, we validate that the proposed method significantly outperforms SupCon and existing state-of-the-art methods in terms of the trade-off between top-1 accuracy and fairness.
1 code implementation • 7 Feb 2022 • Pilhyeon Lee, Sunhee Hwang, Jewook Lee, Minjung Shin, Seogkyu Jeon, Hyeran Byun
This paper tackles the problem of subject adaptive EEG-based visual recognition.
1 code implementation • 26 Oct 2021 • Pilhyeon Lee, Sunhee Hwang, Seogkyu Jeon, Hyeran Byun
This limits recognition systems to the subjects involved in model training, which is undesirable for real-world scenarios where new subjects are frequently added.
1 code implementation • 19 Aug 2021 • Seogkyu Jeon, Kibeom Hong, Pilhyeon Lee, Jewook Lee, Hyeran Byun
To these ends, we propose a novel domain generalization framework where feature statistics are utilized for stylizing original features to ones with novel domain properties.
Ranked #34 on Domain Generalization on Office-Home
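Feature stylization via feature statistics, as described in the entry above, can be sketched with an AdaIN-style transform: normalize each channel, then re-scale with novel statistics. This is a minimal illustration under assumed (C, H, W) feature shapes, not the framework's actual implementation.

```python
import numpy as np

def stylize_features(feats, new_mean, new_std, eps=1e-5):
    """Replace per-channel feature statistics with novel ones.

    feats:    (C, H, W) original features
    new_mean: (C,) target channel means (e.g., sampled novel-domain stats)
    new_std:  (C,) target channel standard deviations
    """
    mu = feats.mean(axis=(1, 2), keepdims=True)
    sigma = feats.std(axis=(1, 2), keepdims=True)
    normalized = (feats - mu) / (sigma + eps)          # zero-mean, unit-std per channel
    return normalized * new_std.reshape(-1, 1, 1) + new_mean.reshape(-1, 1, 1)
```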
1 code implementation • ICCV 2021 • Pilhyeon Lee, Hyeran Byun
To learn completeness from the obtained sequence, we introduce two novel losses that contrast action instances with background ones in terms of action score and feature similarity, respectively.
Ranked #1 on Weakly Supervised Action Localization on THUMOS’14
Tasks: Weakly Supervised Action Localization, Weakly-supervised Temporal Action Localization, +1
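The two contrastive objectives mentioned above (contrasting action instances with background in terms of score and feature similarity) can be sketched generically; the margin, temperature, and loss forms below are illustrative assumptions, not the paper's exact losses.

```python
import numpy as np

def score_contrast_loss(action_scores, background_scores, margin=0.5):
    """Hinge loss: action snippets should score higher than background by a margin."""
    diff = margin - (action_scores.mean() - background_scores.mean())
    return max(float(diff), 0.0)

def feature_contrast_loss(action_feats, background_feats, temperature=0.1):
    """Penalize cosine similarity between action and background features."""
    a = action_feats / np.linalg.norm(action_feats, axis=1, keepdims=True)
    b = background_feats / np.linalg.norm(background_feats, axis=1, keepdims=True)
    sim = a @ b.T                                      # pairwise cosine similarities
    return float(np.log1p(np.exp(sim / temperature)).mean())  # softplus penalty
```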
no code implementations • 26 Feb 2021 • Seogkyu Jeon, Pilhyeon Lee, Kibeom Hong, Hyeran Byun
Face aging is the task of translating the faces in input images to designated ages.
2 code implementations • 12 Jun 2020 • Pilhyeon Lee, Jinglu Wang, Yan Lu, Hyeran Byun
Experimental results show that our uncertainty modeling is effective at alleviating the interference of background frames and brings a large performance gain without bells and whistles.
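One simple way to realize the uncertainty modeling described above is to treat low feature magnitude as a proxy for background and down-weight such frames; the specific formulation below is a hedged sketch, not the paper's exact model.

```python
import numpy as np

def frame_uncertainty(frame_feats, eps=1e-8):
    """Proxy uncertainty per frame: low feature magnitude -> likely background.

    frame_feats: (T, D) per-frame features; returns (T,) uncertainty in [0, 1].
    """
    magnitudes = np.linalg.norm(frame_feats, axis=1)
    return 1.0 - magnitudes / (magnitudes.max() + eps)

def suppress_background(frame_scores, uncertainty):
    """Down-weight per-frame action scores by their uncertainty."""
    return frame_scores * (1.0 - uncertainty)
```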
2 code implementations • 22 Nov 2019 • Pilhyeon Lee, Youngjung Uh, Hyeran Byun
This formulation does not fully model the problem in that background frames are forced to be misclassified as action classes to predict video-level labels accurately.
Ranked #9 on Weakly Supervised Action Localization on ActivityNet-1.2 (mAP@0.5 metric)
Tasks: Weakly Supervised Action Localization, Weakly-supervised Temporal Action Localization, +1