no code implementations • 19 Dec 2023 • HyeongGwon Hong, Yooshin Cho, Hanbyel Cho, Jaesung Ahn, Junmo Kim
Gradient norm, which is commonly used as a vulnerability proxy for gradient inversion attacks, cannot explain this, as it remains constant regardless of the loss function used for gradient matching.
1 code implementation • 5 Aug 2023 • Hanbyel Cho, Junmo Kim
In contrast, we propose a generative framework, called "Diffusion-based Human Mesh Recovery (Diff-HMR)", that leverages the denoising diffusion process to account for multiple plausible outcomes.
no code implementations • CVPR 2023 • Hanbyel Cho, Yooshin Cho, Jaesung Ahn, Junmo Kim
This is because we have a mental model that allows us to imagine a person's appearance from different viewing directions given a single image, and to utilize the consistency between these views for inference.
Ranked #31 on 3D Human Pose Estimation on 3DPW
no code implementations • 3 May 2023 • Yooshin Cho, Hanbyel Cho, Hyeong Gwon Hong, Jaesung Ahn, Dongmin Cho, JungWoo Chang, Junmo Kim
In our method, standard spatial attention and networks focus on unmasked regions and extract mask-invariant features while minimizing the loss of conventional Face Recognition (FR) performance.
no code implementations • 27 Jul 2022 • Yooshin Cho, Youngsoo Kim, Hanbyel Cho, Jaesung Ahn, Hyeong Gwon Hong, Junmo Kim
Attention maps normalized with the softmax operation rely heavily on the magnitude of the key vectors, and performance degrades if this magnitude information is removed.
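A toy illustration of this dependence (not the paper's method): rescaling a key vector's magnitude, without changing its direction, shifts the softmax-normalized attention weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

q = np.array([1.0, 0.5])                 # query
K = np.array([[1.0, 0.0],
              [0.0, 1.0]])               # two key vectors
attn = softmax(K @ q)                    # attention over the two keys

# Scale the first key's magnitude only; its direction is unchanged,
# but the softmax attention map shifts toward it.
K2 = K.copy()
K2[0] *= 3.0
attn2 = softmax(K2 @ q)
```

Removing magnitude (e.g. L2-normalizing the keys) would make `attn` and `attn2` identical, which is the information loss the excerpt refers to.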
no code implementations • 16 Jul 2022 • Hanbyel Cho, Yekang Lee, Jaemyung Yu, Junmo Kim
When a high-resolution (HR) image is degraded into a low-resolution (LR) image, the image loses some of the existing information.
1 code implementation • ICCV 2021 • Hanbyel Cho, Yooshin Cho, Jaemyung Yu, Junmo Kim
The proposed method is useful in practice because it requires neither camera calibration nor additional computation at test time.
Ranked #181 on 3D Human Pose Estimation on Human3.6M
1 code implementation • ICCV 2021 • Yooshin Cho, Hanbyel Cho, Youngsoo Kim, Junmo Kim
Batch Whitening is a technique that accelerates and stabilizes training by transforming input features to have a zero mean (Centering) and a unit variance (Scaling), and by removing linear correlation between channels (Decorrelation).
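The three steps named above (Centering, Scaling, Decorrelation) can be sketched with the standard ZCA formulation of batch whitening; this is a generic NumPy illustration under that assumption, not necessarily the paper's exact implementation:

```python
import numpy as np

def batch_whiten(X, eps=1e-5):
    """ZCA-style batch whitening of an (N, C) feature batch.

    Centers each channel to zero mean, then applies a whitening
    matrix that both removes cross-channel correlation and scales
    each direction to unit variance.
    """
    Xc = X - X.mean(axis=0)                        # Centering
    cov = Xc.T @ Xc / X.shape[0]                   # channel covariance
    w, V = np.linalg.eigh(cov)                     # eigendecomposition
    W = V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T  # ZCA whitening matrix
    return Xc @ W                                  # Scaling + Decorrelation

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4)) @ rng.normal(size=(4, 4))  # correlated features
Y = batch_whiten(X)
# The covariance of Y is close to the identity matrix.
```

ZCA (rotating back with `V.T`) is one common choice; PCA whitening, which omits that rotation, also satisfies the same zero-mean, unit-variance, decorrelated property.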