no code implementations • ICML 2020 • Seong-Jin Park, Seungju Han, Ji-won Baek, Insoo Kim, Juhwan Song, Hae Beom Lee, Jae-Joon Han, Sung Ju Hwang
Humans have the ability to robustly recognize objects under various factors of variation, such as nonrigid transformations, background noise, and changes in lighting conditions.
no code implementations • CVPR 2022 • Hui Li, Zidong Guo, Seon-Min Rhee, Seungju Han, Jae-Joon Han
We formulate facial landmark detection as a coordinate regression task such that the model can be trained end-to-end.
Ranked #2 on Face Alignment on COFW
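The coordinate-regression formulation above can be sketched in miniature; everything here (the toy linear head, the feature dimensions, and the L1 loss choice) is an illustrative assumption, not the paper's actual architecture:

```python
import random

# Toy coordinate-regression head for landmark detection (illustrative
# sketch only; names, dimensions, and the L1 loss are assumptions).
# A linear layer maps an image feature vector directly to the flat
# coordinate vector (x1, y1, ..., xK, yK), so the model is trainable
# end-to-end with a simple regression loss.

def linear_head(features, weights, bias):
    """Map a feature vector to 2*K landmark coordinates."""
    return [sum(f * w for f, w in zip(features, row)) + b
            for row, b in zip(weights, bias)]

def l1_loss(pred, target):
    """Mean absolute error between predicted and ground-truth points."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

random.seed(0)
feat_dim, num_landmarks = 8, 3
weights = [[random.uniform(-0.1, 0.1) for _ in range(feat_dim)]
           for _ in range(2 * num_landmarks)]
bias = [0.0] * (2 * num_landmarks)

features = [random.random() for _ in range(feat_dim)]
pred = linear_head(features, weights, bias)      # 2*K coordinates
target = [0.5] * (2 * num_landmarks)             # dummy ground truth
loss = l1_loss(pred, target)                     # scalar training signal
```

Regressing coordinates directly (rather than heatmaps) is what makes the pipeline end-to-end differentiable from pixels to points.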
1 code implementation • CVPR 2022 • Caiyuan Zheng, Hui Li, Seon-Min Rhee, Seungju Han, Jae-Joon Han, Peng Wang
A robust semi-supervised framework based on consistency regularization is proposed for scene text recognition (STR); it effectively addresses the instability caused by the domain inconsistency between synthetic and real images.
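The general shape of a consistency-regularization objective can be sketched as below; this is a generic illustration of the idea (two augmented views of an unlabeled image, with a divergence penalty between the predictions), not the paper's implementation:

```python
import math

# Generic consistency-regularization sketch (an assumption-laden toy,
# not the paper's code): the model's prediction on a weakly augmented
# view serves as the target for its prediction on a strongly augmented
# view of the same unlabeled image.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): penalizes the strong-view prediction q for
    drifting away from the weak-view prediction p."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

weak_logits = [2.0, 0.5, -1.0]    # prediction on weakly augmented view
strong_logits = [1.5, 0.8, -0.5]  # prediction on strongly augmented view
p = softmax(weak_logits)
q = softmax(strong_logits)
consistency_loss = kl_divergence(p, q)  # added to the supervised loss
```

Because the unlabeled term needs no ground truth, real (unlabeled) images can regularize a model trained mostly on synthetic data.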
no code implementations • CVPR 2022 • Minsu Ko, Eunju Cha, Sungjoo Suh, Huijin Lee, Jae-Joon Han, Jinwoo Shin, Bohyung Han
Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs).
no code implementations • CVPR 2022 • Yi Zhou, Hui Zhang, Hana Lee, Shuyang Sun, Pingjun Li, Yangguang Zhu, ByungIn Yoo, Xiaojuan Qi, Jae-Joon Han
We encode all panoptic entities in a video, including both foreground instances and background semantics, with a unified representation called panoptic slots.
1 code implementation • CVPR 2021 • Insoo Kim, Seungju Han, Ji-won Baek, Seong-Jin Park, Jae-Joon Han, Jinwoo Shin
Our two-stage scheme allows the network to produce clean-like, robust features from images of any quality by reconstructing their clean counterparts via the invertible decoder.
Ranked #17 on Domain Generalization on ImageNet-C
no code implementations • CVPR 2021 • Kinam Kwon, Eunhee Kang, Sangwon Lee, Su-Jin Lee, Hyong-Euk Lee, ByungIn Yoo, Jae-Joon Han
However, this causes inevitable image degradation in the form of spatially variant blur and noise because of the opaque display in front of the camera.
no code implementations • CVPR 2021 • Jaehyoung Yoo, Dongwook Lee, Changyong Son, Sangil Jung, ByungIn Yoo, Changkyu Choi, Jae-Joon Han, Bohyung Han
RaScaNet reads only a few rows of pixels at a time using a convolutional neural network and then sequentially learns the representation of the whole image using a recurrent neural network.
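The row-wise processing described above can be illustrated with a toy sketch; the "CNN" and "RNN" here are deliberately trivial stand-ins (mean pooling and a scalar tanh update), and all names and constants are assumptions, not RaScaNet's actual layers:

```python
import math

# Toy sketch of row-wise sequential image processing in the spirit of
# RaScaNet (all operations here are simplified assumptions): a small
# encoder summarizes a strip of pixel rows, and a recurrent state
# accumulates the summaries, so the full image is never held in memory.

def encode_strip(strip):
    """Stand-in 'CNN': summarize a strip of rows by mean intensity."""
    values = [p for row in strip for p in row]
    return sum(values) / len(values)

def rnn_step(state, feature, w_s=0.9, w_f=0.1):
    """Stand-in recurrent update mixing state with the new feature."""
    return math.tanh(w_s * state + w_f * feature)

# Synthetic 16x8 grayscale image.
image = [[(r * 7 + c * 3) % 255 for c in range(8)] for r in range(16)]

rows_per_strip = 4
state = 0.0
for i in range(0, len(image), rows_per_strip):
    strip = image[i:i + rows_per_strip]
    state = rnn_step(state, encode_strip(strip))  # one strip in memory

prediction = 1 if state > 0 else 0  # binary decision from final state
```

The memory win comes from the loop structure: peak buffering is one strip of rows plus the recurrent state, independent of image height.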
no code implementations • 13 Feb 2021 • Wissam J. Baddar, Seungju Han, Seon-Min Rhee, Jae-Joon Han
In this paper, we propose self-reorganizing and rejuvenating convolutional neural networks, a biologically inspired method for improving the computational resource utilization of neural networks.
1 code implementation • ICCV 2021 • Dongyoung Kim, Jinwoo Kim, Seonghyeon Nam, Dongwoo Lee, Yeonkyung Lee, Nahyup Kang, Hyong-Euk Lee, ByungIn Yoo, Jae-Joon Han, Seon Joo Kim
Images in our dataset are mostly captured with illuminants existing in the scene, and the ground-truth illumination is computed by taking the difference between images captured under different illumination combinations.
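The differencing idea can be shown concretely; this is a minimal sketch under the assumption of linear (raw-like) pixel values, with made-up variable names and toy 2x2 images:

```python
# Sketch of recovering per-illuminant ground truth by image differencing
# (variable names and toy values are assumptions): an image captured
# under illuminants A+B, minus an image captured under A alone,
# isolates the contribution of illuminant B. This relies on pixel
# values being linear in light, as in raw sensor data.

def subtract_images(img_ab, img_a):
    """Pixel-wise difference, clipped at zero to stay physically valid."""
    return [[max(pab - pa, 0) for pab, pa in zip(row_ab, row_a)]
            for row_ab, row_a in zip(img_ab, img_a)]

img_a = [[10, 20], [30, 40]]    # scene lit by illuminant A only
img_ab = [[25, 50], [45, 90]]   # same scene lit by A and B together
illum_b = subtract_images(img_ab, img_a)  # contribution of B alone
```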
no code implementations • Asian Conference on Computer Vision (ACCV) 2020 • Insoo Kim, Seungju Han, Seong-Jin Park, Ji-won Baek, Jinwoo Shin, Jae-Joon Han, Changkyu Choi
Softmax-based learning methods have shown state-of-the-art performance on large-scale face recognition tasks.
Ranked #1 on Face Verification on CALFW
no code implementations • 16 Oct 2019 • Tianchu Guo, Yongchao Liu, Hui Zhang, Xiabing Liu, Youngjun Kwak, ByungIn Yoo, Jae-Joon Han, Changkyu Choi
For the second issue, we define a new metric to measure the robustness of the gaze estimator and propose an adversarial-training-based Disturbance with Ordinal loss (DwO) method to improve it.
no code implementations • 25 Sep 2019 • Youngsung Kim, Jae-Joon Han
To generate evenly distributed parameters, we constrain them to lie on \emph{hierarchical hyperspheres}.
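The basic constraint can be sketched as a projection step; note that the *hierarchical* structure of the hyperspheres is not reproduced here, and the projection-after-update scheme below is my assumption about one simple way to enforce such a constraint:

```python
import math

# Minimal sketch of constraining parameters to a hypersphere (the
# hierarchical organization from the paper is omitted; the
# project-after-update scheme is an assumption): after each update,
# a parameter vector is rescaled back to a fixed radius, so all
# vectors live on the sphere's surface.

def project_to_sphere(vec, radius=1.0):
    """Rescale a vector so its Euclidean norm equals `radius`."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [radius * v / norm for v in vec]

params = [3.0, 4.0]                 # e.g. after a gradient step
params = project_to_sphere(params)  # now lies on the unit sphere
norm = math.sqrt(sum(v * v for v in params))
```

Constraining vectors to a sphere removes the radial degree of freedom, which is one standard way to encourage evenly spread directions.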
no code implementations • 27 Sep 2018 • Dongha Kim, Yongchan Choi, Jae-Joon Han, Changkyu Choi, Yongdai Kim
The proposed method generates high-quality bad samples by means of the adversarial training used in VAT (virtual adversarial training).
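A VAT-style perturbation can be sketched in one dimension; the real method operates on images and uses model gradients rather than finite differences, so the toy model, the finite-difference search, and the perturbation radius below are all assumptions for illustration:

```python
import math

# One-dimensional sketch of a VAT-style adversarial perturbation
# (toy model and finite-difference search are assumptions): perturb
# the input in the direction that most increases the divergence
# between the model's predictions before and after perturbation,
# yielding a "bad" sample near the original.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def model(x, w=2.0):
    """Toy binary classifier: probability of class 1."""
    return sigmoid(w * x)

def kl_binary(p, q, eps=1e-12):
    """KL divergence between two Bernoulli distributions."""
    return (p * math.log((p + eps) / (q + eps))
            + (1 - p) * math.log((1 - p + eps) / (1 - q + eps)))

def adversarial_direction(x, eps=1e-3):
    """Finite-difference estimate of the divergence-maximizing sign."""
    p = model(x)
    up = kl_binary(p, model(x + eps))
    down = kl_binary(p, model(x - eps))
    return 1.0 if up > down else -1.0

x = 0.3
radius = 0.5
x_bad = x + radius * adversarial_direction(x)  # "bad" sample near x
```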
no code implementations • CVPR 2019 • Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Youngjun Kwak, Jae-Joon Han, Sung Ju Hwang, Changkyu Choi
We demonstrate the effectiveness of our trainable quantizer on the ImageNet dataset with various network architectures such as ResNet-18, ResNet-34, and AlexNet, on which it outperforms existing methods and achieves state-of-the-art accuracy.
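The core of weight quantization can be illustrated with a generic uniform quantizer over a clipping interval; the paper's exact parameterization of what is learned may differ, so the `clip` parameter and the rounding grid below are assumptions:

```python
# Generic uniform quantizer with a clipping range (an illustrative
# sketch; in a trainable quantizer, parameters like `clip` would be
# learned jointly with the network, which is not shown here).

def quantize(x, clip, bits):
    """Clip x to [-clip, clip], then round onto a (2^bits - 1)-level
    uniform grid spanning that interval."""
    levels = (1 << bits) - 1
    x = max(-clip, min(clip, x))
    # Map [-clip, clip] to integer indices [0, levels], then back.
    k = round((x + clip) / (2 * clip) * levels)
    return -clip + k * (2 * clip) / levels

weights = [-1.3, -0.2, 0.05, 0.7, 2.1]
q = [quantize(w, clip=1.0, bits=2) for w in weights]  # 2-bit weights
```

With `bits=2` every weight collapses to one of four values, which is what enables compact storage and fast low-precision arithmetic; making `clip` trainable lets the network pick the interval that best preserves accuracy.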