Search Results for author: Jaehyun Choi

Found 10 papers, 3 papers with code

Stereo-Matching Knowledge Distilled Monocular Depth Estimation Filtered by Multiple Disparity Consistency

no code implementations · 22 Jan 2024 · Woonghyun Ka, Jae Young Lee, Jaehyun Choi, Junmo Kim

In stereo-matching knowledge distillation methods for self-supervised monocular depth estimation, the stereo-matching network's knowledge is distilled into a monocular depth network through pseudo-depth maps.

Knowledge Distillation, Monocular Depth Estimation +1

Modeling Stereo-Confidence Out of the End-to-End Stereo-Matching Network via Disparity Plane Sweep

no code implementations · 22 Jan 2024 · Jae Young Lee, Woonghyun Ka, Jaehyun Choi, Junmo Kim

We propose a novel stereo-confidence measure that can be computed externally to various stereo-matching networks, offering an alternative to the cost volume as an input modality for learning-based approaches, especially in safety-critical systems.

Stereo Matching

Few-Shot Anomaly Detection with Adversarial Loss for Robust Feature Representations

no code implementations · 4 Dec 2023 · Jae Young Lee, Wonjun Lee, Jaehyun Choi, Yongkwi Lee, Young Seog Yoon

Anomaly detection is a critical and challenging task that aims to identify data points deviating from normal patterns and distributions within a dataset.

Anomaly Detection, Domain Adaptation

Expanding Expressiveness of Diffusion Models with Limited Data via Self-Distillation based Fine-Tuning

no code implementations · 2 Nov 2023 · Jiwan Hur, Jaehyun Choi, Gyojin Han, Dong-Jae Lee, Junmo Kim

Training diffusion models on limited datasets restricts their generation capacity and expressiveness, leading to unsatisfactory results in downstream tasks that rely on pretrained diffusion models, such as domain translation and text-guided image manipulation.

Image Manipulation, Transfer Learning

Deep Cross-Modal Steganography Using Neural Representations

no code implementations · 2 Jul 2023 · Gyojin Han, Dong-Jae Lee, Jiwan Hur, Jaehyun Choi, Junmo Kim

The proposed framework employs implicit neural representations (INRs) to represent the secret data, allowing it to handle data of various modalities and resolutions.

Reinforcement Learning-Based Black-Box Model Inversion Attacks

1 code implementation · CVPR 2023 · Gyojin Han, Jaehyun Choi, Haeil Lee, Junmo Kim

Model inversion attacks are a type of privacy attack that reconstructs private data used to train a machine learning model, solely by accessing the model.

Privacy Preserving, Reinforcement Learning

Fix the Noise: Disentangling Source Feature for Controllable Domain Translation

1 code implementation · CVPR 2023 · Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Jaejun Yoo, Junmo Kim

This allows our method to smoothly control the degree to which it preserves source features while generating images from an entirely new domain using only a single model.

Transfer Learning, Translation

I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images

no code implementations · 16 Jan 2023 · Jiwan Hur, Jae Young Lee, Jaehyun Choi, Junmo Kim

To apply LF-DeOcc to both sparse and dense LF datasets, we propose a framework, ISTY, which is divided into three roles: (1) extracting LF features, (2) defining the occlusion, and (3) inpainting occluded regions.

Data Poisoning Attack Aiming the Vulnerability of Continual Learning

no code implementations · 29 Nov 2022 · Gyojin Han, Jaehyun Choi, Hyeong Gwon Hong, Junmo Kim

Training data generated by the proposed attack causes performance degradation on a specific task targeted by the attacker.

Adversarial Attack, Continual Learning +1

Fix the Noise: Disentangling Source Feature for Transfer Learning of StyleGAN

1 code implementation · 29 Apr 2022 · Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Junmo Kim

Owing to the disentangled feature space, our method can smoothly control the degree of the source features in a single model.

Transfer Learning
