Search Results for author: Hyeran Byun

Found 25 papers, 13 papers with code

BAM-DETR: Boundary-Aligned Moment Detection Transformer for Temporal Sentence Grounding in Videos

1 code implementation • 30 Nov 2023 • Pilhyeon Lee, Hyeran Byun

However, they suffer from center misalignment arising from the inherent ambiguity of moment centers, which leads to inaccurate predictions.

Moment Retrieval · Natural Language Moment Retrieval · +2

Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations

no code implementations • ICCV 2023 • Seogkyu Jeon, Bei Liu, Pilhyeon Lee, Kibeom Hong, Jianlong Fu, Hyeran Byun

In the absence of target-domain data, the textual description of the target domain and vision-language models, e.g., CLIP, are utilized to guide the generator effectively.

AesPA-Net: Aesthetic Pattern-Aware Style Transfer Networks

1 code implementation • ICCV 2023 • Kibeom Hong, Seogkyu Jeon, Junsoo Lee, Namhyuk Ahn, Kunhee Kim, Pilhyeon Lee, Daesik Kim, Youngjung Uh, Hyeran Byun

To deliver the artistic expression of the target style, recent studies exploit the attention mechanism owing to its ability to map the local patches of the style image to the corresponding patches of the content image.

Semantic correspondence · Style Transfer

BallGAN: 3D-aware Image Synthesis with a Spherical Background

no code implementations • ICCV 2023 • Minjung Shin, Yunji Seo, Jeongmin Bae, Young Sun Choi, Hyunsu Kim, Hyeran Byun, Youngjung Uh

To solve this problem, we propose to approximate the background as a spherical surface and represent a scene as the union of the foreground placed inside the sphere and the thin spherical background (sketched below).

3D-Aware Image Synthesis
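
The following is not the authors' implementation, only a minimal geometric sketch of the idea quoted above: if the background is a sphere of some radius R around the scene, background samples can be placed where each camera ray exits that sphere. All names and the radius value are hypothetical.

```python
# Hypothetical sketch: place background samples where each camera ray exits a sphere
# of radius R centered at the origin (assumes the camera sits inside the sphere).
import torch

def ray_sphere_exit(origins, directions, radius=4.0):
    """origins, directions: (N, 3); directions are assumed to be unit vectors."""
    b = (origins * directions).sum(-1)              # <o, d>
    c = (origins * origins).sum(-1) - radius ** 2   # |o|^2 - R^2
    disc = (b ** 2 - c).clamp(min=0.0)
    t = -b + disc.sqrt()                            # far root: where the ray leaves the sphere
    return origins + t.unsqueeze(-1) * directions

# Toy usage: four random viewing rays from a camera at the scene center.
o = torch.zeros(4, 3)
d = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
background_points = ray_sphere_exit(o, d)           # all points lie on the radius-4 shell
```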

Source-free Subject Adaptation for EEG-based Visual Recognition

1 code implementation • 20 Jan 2023 • Pilhyeon Lee, Seogkyu Jeon, Sunhee Hwang, Minjung Shin, Hyeran Byun

In this paper, we introduce a novel and practical problem setup, namely source-free subject adaptation, where the source subject data are unavailable and only the pre-trained model parameters are provided for subject adaptation.

EEG

Exploiting Shape Cues for Weakly Supervised Semantic Segmentation

no code implementations • 8 Aug 2022 • Sungpil Kho, Pilhyeon Lee, Wonyoung Lee, Minsong Ki, Hyeran Byun

To this end, previous methods adopt a common pipeline: they generate pseudo masks from class activation maps (CAMs) and use such masks to supervise segmentation networks (sketched below).

Segmentation · Weakly supervised Semantic Segmentation · +1
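
For context on the pipeline mentioned above, here is a minimal, hypothetical sketch of turning class activation maps (CAMs) into pseudo masks for supervising a segmentation network. The thresholding scheme, shapes, and names are illustrative and not taken from the paper.

```python
# Minimal CAM-to-pseudo-mask sketch (illustrative only; not the paper's method).
import torch
import torch.nn.functional as F

def cam_pseudo_masks(features, fc_weight, image_size, fg_threshold=0.3):
    """features: (B, C, h, w) conv features; fc_weight: (num_classes, C) classifier weights."""
    cams = torch.einsum("kc,bchw->bkhw", fc_weight, features)   # per-class activation maps
    cams = F.relu(cams)
    cams = cams / (cams.amax(dim=(2, 3), keepdim=True) + 1e-5)  # normalize each CAM to [0, 1]
    cams = F.interpolate(cams, size=image_size, mode="bilinear", align_corners=False)
    scores, labels = cams.max(dim=1)
    # Pixels whose strongest class response is weak are treated as background (label 0).
    return torch.where(scores > fg_threshold, labels + 1, torch.zeros_like(labels))

# Toy usage with random tensors standing in for a trained classifier's outputs.
masks = cam_pseudo_masks(torch.rand(2, 512, 14, 14), torch.rand(20, 512), (224, 224))
```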

Exploiting Domain Transferability for Collaborative Inter-level Domain Adaptive Object Detection

no code implementations • 20 Jul 2022 • Mirae Do, Seogkyu Jeon, Pilhyeon Lee, Kibeom Hong, Yu-seung Ma, Hyeran Byun

Domain adaptation for object detection (DAOD) has recently drawn much attention owing to its capability of detecting target objects without any annotations.

Domain Adaptation · Object · +3

Fair Contrastive Learning for Facial Attribute Classification

1 code implementation • CVPR 2022 • Sungho Park, Jewook Lee, Pilhyeon Lee, Sunhee Hwang, Dohyung Kim, Hyeran Byun

Through extensive experiments on CelebA and UTKFace, we validate that the proposed method significantly outperforms SupCon and existing state-of-the-art methods in terms of the trade-off between top-1 accuracy and fairness.

Attribute · Classification · +6

Subject Adaptive EEG-based Visual Recognition

1 code implementation • 26 Oct 2021 • Pilhyeon Lee, Sunhee Hwang, Seogkyu Jeon, Hyeran Byun

This limits recognition systems to the subjects involved in model training, which is undesirable for real-world scenarios where new subjects are frequently added.

EEG

Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization

1 code implementation • 19 Aug 2021 • Seogkyu Jeon, Kibeom Hong, Pilhyeon Lee, Jewook Lee, Hyeran Byun

To these ends, we propose a novel domain generalization framework where feature statistics are utilized to stylize the original features into ones with novel domain properties (sketched below).

Contrastive Learning · Domain Generalization
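
A hedged sketch of the feature-stylization idea described above, done here in the AdaIN style of swapping per-channel statistics. How the paper actually synthesizes novel-domain statistics may differ; below they are simply perturbed with noise for illustration.

```python
# Stylize features by replacing their per-channel mean/std (illustrative sketch only).
import torch

def stylize_features(feat, new_mean, new_std, eps=1e-5):
    """feat: (B, C, H, W); new_mean, new_std: (B, C) target statistics."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (feat - mu) / sigma
    return normalized * new_std[..., None, None] + new_mean[..., None, None]

# Toy usage: perturb the original statistics to mimic "novel domain" properties.
x = torch.randn(8, 64, 32, 32)
mu, sigma = x.mean(dim=(2, 3)), x.std(dim=(2, 3))
x_new = stylize_features(x, mu + 0.1 * torch.randn_like(mu), sigma * (1 + 0.1 * torch.randn_like(sigma)))
```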

Learning Action Completeness from Points for Weakly-supervised Temporal Action Localization

1 code implementation • ICCV 2021 • Pilhyeon Lee, Hyeran Byun

To learn completeness from the obtained sequence, we introduce two novel losses that contrast action instances with background ones in terms of action score and feature similarity, respectively (an illustrative sketch follows this entry).

Weakly Supervised Action Localization · Weakly-supervised Temporal Action Localization · +1
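
The snippet below is only an illustrative sketch of contrasting action segments with background ones by score and by feature similarity; it does not reproduce the paper's actual loss formulations, and all names and values are hypothetical.

```python
# Two toy contrastive terms between action and background segments (illustrative only).
import torch
import torch.nn.functional as F

def score_contrast(action_scores, background_scores, margin=0.5):
    """Encourage every action segment to score higher than every background segment by a margin."""
    return F.relu(margin - action_scores.unsqueeze(1) + background_scores.unsqueeze(0)).mean()

def feature_contrast(action_feats, background_feats, temperature=0.1):
    """Pull action features toward the action prototype and away from the background prototype."""
    a = F.normalize(action_feats, dim=-1)
    proto_act = F.normalize(action_feats.mean(0), dim=-1)
    proto_bkg = F.normalize(background_feats.mean(0), dim=-1)
    logits = torch.stack([a @ proto_act, a @ proto_bkg], dim=1) / temperature
    labels = torch.zeros(a.size(0), dtype=torch.long)   # positive class: the action prototype
    return F.cross_entropy(logits, labels)

loss = score_contrast(torch.rand(6), torch.rand(4)) + feature_contrast(torch.randn(6, 128), torch.randn(4, 128))
```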

Domain-Aware Universal Style Transfer

1 code implementation • ICCV 2021 • Kibeom Hong, Seogkyu Jeon, Huan Yang, Jianlong Fu, Hyeran Byun

To this end, we design a novel domainness indicator that captures the domainness value from the texture and structural features of reference images.

Style Transfer

Continuous Face Aging Generative Adversarial Networks

no code implementations • 26 Feb 2021 • Seogkyu Jeon, Pilhyeon Lee, Kibeom Hong, Hyeran Byun

Face aging is the task of translating the faces in input images to designated target ages.

MORPH

ArrowGAN: Learning to Generate Videos by Learning Arrow of Time

no code implementations • 11 Jan 2021 • Kibeom Hong, Youngjung Uh, Hyeran Byun

Training GANs on videos is even more challenging than training them on images because videos have a distinguishing extra dimension: time.

Conditional Image Generation · Video Generation

Contrastive Attention Maps for Self-Supervised Co-Localization

no code implementations • ICCV 2021 • Minsong Ki, Youngjung Uh, Junsuk Choe, Hyeran Byun

The goal of unsupervised co-localization is to locate the object in a scene under the assumptions that 1) the dataset consists of only one superclass, e.g., birds, and 2) there are no human-annotated labels in the dataset.

Representation Learning

FairFaceGAN: Fairness-aware Facial Image-to-Image Translation

no code implementations • 1 Dec 2020 • Sunhee Hwang, Sungho Park, Dohyung Kim, Mirae Do, Hyeran Byun

Furthermore, we evaluate image translation performance, where FairFaceGAN shows results competitive with those of existing methods.

Attribute · Fairness · +2

In-sample Contrastive Learning and Consistent Attention for Weakly Supervised Object Localization

1 code implementation • 25 Sep 2020 • Minsong Ki, Youngjung Uh, Wonyoung Lee, Hyeran Byun

Furthermore, we propose a foreground consistency loss that penalizes earlier layers for producing noisy attention maps, using the last layer as a reference to give them a sense of backgroundness (sketched below).

Contrastive Learning · Object · +1
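
A rough sketch of a foreground consistency term as described above, assuming the deepest attention map is used as a detached reference that earlier layers are pulled toward; the loss actually used in the paper may be defined differently.

```python
# Pull earlier-layer attention maps toward the last layer's map (illustrative sketch).
import torch
import torch.nn.functional as F

def foreground_consistency(attentions):
    """attentions: list of (B, 1, H_i, W_i) attention maps ordered from shallow to deep."""
    reference = attentions[-1].detach()              # the deepest map serves as the reference
    loss = 0.0
    for attn in attentions[:-1]:
        attn = F.interpolate(attn, size=reference.shape[-2:], mode="bilinear", align_corners=False)
        loss = loss + F.l1_loss(attn, reference)     # penalize disagreement with the reference
    return loss / (len(attentions) - 1)

# Toy usage with random attention maps from three stages of a CNN.
maps = [torch.rand(2, 1, s, s) for s in (56, 28, 14)]
consistency_loss = foreground_consistency(maps)
```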

README: REpresentation learning by fairness-Aware Disentangling MEthod

no code implementations • 7 Jul 2020 • Sungho Park, Dohyung Kim, Sunhee Hwang, Hyeran Byun

After representation learning, the disentangled representation is leveraged for fairer downstream classification by excluding the subspace containing the protected-attribute information (sketched below).

Attribute · Fairness · +1
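
An illustrative sketch, with hypothetical latent dimensions, of the downstream step described above: once the representation is disentangled, the classifier simply never sees the protected-attribute subspace.

```python
# Downstream classifier that excludes the protected subspace of a disentangled latent code.
import torch
import torch.nn as nn

class FairDownstreamClassifier(nn.Module):
    def __init__(self, target_dim=32, protected_dim=8, residual_dim=24, num_classes=2):
        super().__init__()
        self.target_dim = target_dim
        self.protected_dim = protected_dim
        # The classifier input deliberately omits the protected-attribute subspace.
        self.head = nn.Linear(target_dim + residual_dim, num_classes)

    def forward(self, z):
        # z is assumed to be ordered as [target | protected | residual].
        z_target = z[:, : self.target_dim]
        z_residual = z[:, self.target_dim + self.protected_dim:]
        return self.head(torch.cat([z_target, z_residual], dim=1))

# Toy usage: a batch of 16 latent codes of total dimension 32 + 8 + 24 = 64.
logits = FairDownstreamClassifier()(torch.randn(16, 64))
```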

Weakly-supervised Temporal Action Localization by Uncertainty Modeling

2 code implementations • 12 Jun 2020 • Pilhyeon Lee, Jinglu Wang, Yan Lu, Hyeran Byun

Experimental results show that our uncertainty modeling is effective at alleviating the interference of background frames and brings a large performance gain without bells and whistles.

Action Classification · Multiple Instance Learning · +4

Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation

1 code implementation • CVPR 2020 • Myeongjin Kim, Hyeran Byun

However, due to the domain gap between synthetic domain and real domain, it is challenging for a model trained with synthetic data to generalize to real data.

Domain Adaptation · Segmentation · +2

Background Suppression Network for Weakly-supervised Temporal Action Localization

2 code implementations • 22 Nov 2019 • Pilhyeon Lee, Youngjung Uh, Hyeran Byun

This formulation does not fully model the problem in that background frames are forced to be misclassified as action classes to predict video-level labels accurately.

Weakly Supervised Action Localization · Weakly-supervised Temporal Action Localization · +1

Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition

no code implementations • Sensors 2019, 19(6), 1382 • Jongkwang Hong, Bora Cho, Yong Won Hong, Hyeran Byun

However, depending on the action characteristics, contextual cues, such as the presence of specific objects or globally shared information in the image, become vital for defining the action.

Action Recognition
