no code implementations • 19 Dec 2023 • HyeongGwon Hong, Yooshin Cho, Hanbyel Cho, Jaesung Ahn, Junmo Kim
Gradient norm, which is commonly used as a vulnerability proxy for gradient inversion attacks, cannot explain this, as it remains constant regardless of the loss function used for gradient matching.
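A minimal sketch of why a gradient-norm proxy is independent of the attacker's gradient-matching loss. The logistic-regression model and the two matching objectives below are illustrative assumptions, not the paper's setup: the victim's gradient (and hence its norm) is fixed before the attacker chooses any matching loss.

```python
import numpy as np

def logistic_grad(w, x, y):
    # Gradient of binary cross-entropy for one sample (x, y) of a
    # hypothetical logistic-regression victim model.
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * x

# Two common gradient-matching objectives an attacker might use;
# both consume the same leaked gradient, so its norm is unchanged.
def l2_match(g_true, g_dummy):
    return np.sum((g_true - g_dummy) ** 2)

def cosine_match(g_true, g_dummy):
    num = g_true @ g_dummy
    den = np.linalg.norm(g_true) * np.linalg.norm(g_dummy)
    return 1.0 - num / den

rng = np.random.default_rng(0)
w = rng.normal(size=4)
x = rng.normal(size=4)

grad = logistic_grad(w, x, 1.0)
norm = np.linalg.norm(grad)  # the vulnerability proxy: a property of the
                             # victim's gradient alone, not of l2_match
                             # or cosine_match
```

Either matching loss can be evaluated against `grad`, but `norm` is computed before and independently of that choice, which is why it cannot distinguish between loss functions for gradient matching.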
no code implementations • CVPR 2023 • Hanbyel Cho, Yooshin Cho, Jaesung Ahn, Junmo Kim
This is because we have a mental model that allows us to imagine a person's appearance at different viewing directions from a given image and utilize the consistency between them for inference.
Ranked #31 on 3D Human Pose Estimation on 3DPW
no code implementations • 3 May 2023 • Yooshin Cho, Hanbyel Cho, Hyeong Gwon Hong, Jaesung Ahn, Dongmin Cho, JungWoo Chang, Junmo Kim
In our method, standard spatial attention and networks focus on unmasked regions and extract mask-invariant features while minimizing the loss in conventional Face Recognition (FR) performance.
no code implementations • 27 Jul 2022 • Yooshin Cho, Youngsoo Kim, Hanbyel Cho, Jaesung Ahn, Hyeong Gwon Hong, Junmo Kim
Attention maps normalized with the softmax operation rely heavily on the magnitude of the key vectors, and performance degrades if this magnitude information is removed.
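A small sketch of this magnitude dependence, under assumed toy shapes (one query, four keys of dimension 8; all names are illustrative): unit-normalizing the keys, which removes only their magnitudes, changes the softmax attention map.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=(1, d))   # one query vector
k = rng.normal(size=(4, d))   # four key vectors

# Standard scaled dot-product attention map.
attn = softmax(q @ k.T / np.sqrt(d))

# Strip magnitude information: rescale every key to unit norm
# (directions are preserved, lengths are discarded).
k_unit = k / np.linalg.norm(k, axis=1, keepdims=True)
attn_unit = softmax(q @ k_unit.T / np.sqrt(d))
# attn and attn_unit generally differ, showing that the softmax
# attention map depends on key magnitudes, not just key directions.
```

The gap between `attn` and `attn_unit` is exactly the information that is lost when key magnitudes are removed.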
1 code implementation • IEEE Access 2022 • Youngsoo Kim, Jeonghyo Ha, Yooshin Cho, Junmo Kim
Blind super-resolution (blind-SR) is an important task in computer vision with various real-world applications.
Ranked #4 on Blind Super-Resolution on DIV2KRK - 2x upscaling
1 code implementation • ICCV 2021 • Hanbyel Cho, Yooshin Cho, Jaemyung Yu, Junmo Kim
The proposed method is useful in practice because it requires neither camera calibration nor additional computation at test time.
Ranked #181 on 3D Human Pose Estimation on Human3.6M
1 code implementation • ICCV 2021 • Yooshin Cho, Hanbyel Cho, Youngsoo Kim, Junmo Kim
Batch Whitening is a technique that accelerates and stabilizes training by transforming input features to have a zero mean (Centering) and a unit variance (Scaling), and by removing linear correlation between channels (Decorrelation).
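The three operations named above can be sketched together as ZCA-style batch whitening. This is a minimal NumPy illustration under assumed shapes (a batch of 4-channel features), not the paper's implementation:

```python
import numpy as np

def batch_whiten(x, eps=1e-10):
    # x: (batch, channels).
    xc = x - x.mean(axis=0, keepdims=True)      # Centering: zero mean
    cov = xc.T @ xc / x.shape[0]                # channel covariance
    vals, vecs = np.linalg.eigh(cov)
    # ZCA whitening matrix: Scaling (unit variance) and
    # Decorrelation (removes linear correlation between channels).
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return xc @ w

rng = np.random.default_rng(0)
# Correlated 4-channel features: Gaussian data mixed by a random matrix.
x = rng.normal(size=(256, 4)) @ rng.normal(size=(4, 4))
y = batch_whiten(x)

yc = y - y.mean(axis=0, keepdims=True)
cov_y = yc.T @ yc / y.shape[0]
# cov_y is approximately the identity: each channel has unit variance
# and the off-diagonal (cross-channel) correlations are gone.
```

After whitening, the batch covariance is (near-)identity, which is precisely what combines Centering, Scaling, and Decorrelation into one transform.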