no code implementations • ECCV 2020 • John Yang, Hyung Jin Chang, Seungeui Lee, Nojun Kwak
In this paper, we attempt not only to consider the appearance of a hand but also to incorporate the temporal movement information of a hand in motion into the learning framework for better 3D hand pose estimation performance, which leads to the necessity of a large-scale dataset with sequential RGB hand images.
no code implementations • 11 Oct 2024 • Chen Xu, Qiming Huang, Yuqi Hou, Jiangxing Wu, Fan Zhang, Hyung Jin Chang, Jianbo Jiao
Medical image segmentation poses challenges due to domain gaps, data modality variations, and dependency on domain knowledge or experts, especially for low- and middle-income countries (LMICs).
no code implementations • 17 Jul 2024 • Zhongqun Zhang, Hengfei Wang, Ziwei Yu, Yihua Cheng, Angela Yao, Hyung Jin Chang
Given a language description of the hand and contact, NL2Contact generates realistic and faithful 3D hand-object contacts.
no code implementations • 26 Apr 2024 • Hengfei Wang, Zhongqun Zhang, Yihua Cheng, Hyung Jin Chang
Our work first introduces a text-of-gaze dataset containing over 90k text descriptions spanning a dense distribution of gaze and head poses.
2 code implementations • 25 Mar 2024 • Zicong Fan, Takehiko Ohkawa, Linlin Yang, Nie Lin, Zhishan Zhou, Shihao Zhou, Jiajun Liang, Zhong Gao, Xuanyang Zhang, Xue Zhang, Fei Li, Zheng Liu, Feng Lu, Karim Abou Zeid, Bastian Leibe, Jeongwan On, Seungryul Baek, Aditya Prakash, Saurabh Gupta, Kun He, Yoichi Sato, Otmar Hilliges, Hyung Jin Chang, Angela Yao
A holistic 3D understanding of such interactions from egocentric views is important for tasks in robotics, AR/VR, action recognition and motion generation.
no code implementations • CVPR 2024 • Yihua Cheng, Yaning Zhu, Zongji Wang, Hongquan Hao, Yongwei Liu, Shiqing Cheng, Xi Wang, Hyung Jin Chang
GazeDPTR shows state-of-the-art performance on the IVGaze dataset.
1 code implementation • CVPR 2024 • Boeun Kim, Jungho Kim, Hyung Jin Chang, Jin Young Choi
While existing motion style transfer methods are effective between two motions with identical content, their performance significantly diminishes when transferring style between motions with different contents.
no code implementations • 8 Feb 2024 • Zhongqun Zhang, Jifei Song, Eduardo Pérez-Pellitero, Yiren Zhou, Hyung Jin Chang, Aleš Leonardis
Despite remarkable progress in this field, existing methods still fail to synthesize hand-object interactions photo-realistically, suffering from degraded rendering quality caused by heavy mutual occlusions between the hand and the object, and from inaccurate hand-object pose estimation.
1 code implementation • 9 Nov 2023 • Yuqi Hou, Zhongqun Zhang, Nora Horanyi, Jaewon Moon, Yihua Cheng, Hyung Jin Chang
We then use the identity information to enhance scene images and propose a gaze candidate estimation network.
1 code implementation • 25 Sep 2023 • Uyoung Jeong, Seungryul Baek, Hyung Jin Chang, Kwang In Kim
Our new instance embedding loss provides a learning signal on the entire area of the image with bounding box annotations, achieving globally consistent and disentangled instance representation.
no code implementations • ICCV 2023 • Tze Ho Elden Tse, Franziska Mueller, Zhengyang Shen, Danhang Tang, Thabo Beeler, Mingsong Dou, Yinda Zhang, Sasa Petrovic, Hyung Jin Chang, Jonathan Taylor, Bardia Doosti
We propose a novel transformer-based framework that reconstructs two high-fidelity hands from multi-view RGB images.
1 code implementation • 18 Aug 2023 • Yunhan Wang, Xiangwei Shi, Shalini De Mello, Hyung Jin Chang, Xucong Zhang
With the rapid development of deep learning technology in the past decade, appearance-based gaze estimation has attracted great attention from both computer vision and human-computer interaction research communities.
no code implementations • 1 Aug 2023 • Hengfei Wang, Zhongqun Zhang, Yihua Cheng, Hyung Jin Chang
In this paper, we aim to learn a face NeRF model that is sensitive to eye movements from multi-view images.
no code implementations • ICCV 2023 • Runyang Feng, Yixing Gao, Tze Ho Elden Tse, Xueqing Ma, Hyung Jin Chang
However, extending such models to multi-frame human pose estimation is non-trivial due to the presence of the additional temporal dimension in videos.
no code implementations • 5 May 2023 • Xingyu Zhu, Xin Wang, Jonathan Freer, Hyung Jin Chang, Yixing Gao
These methods often utilize physics engines to synthesize depth images to reduce the cost of real labeled data collection.
1 code implementation • CVPR 2023 • Linfang Zheng, Chen Wang, Yinghan Sun, Esha Dasgupta, Hua Chen, Ales Leonardis, Wei Zhang, Hyung Jin Chang
In this paper, we focus on the problem of category-level object pose estimation, which is challenging due to the large intra-category shape variation.
no code implementations • CVPR 2023 • Runyang Feng, Yixing Gao, Xueqing Ma, Tze Ho Elden Tse, Hyung Jin Chang
On the other hand, the temporal difference has the ability to encode representative motion information which can potentially be valuable for pose estimation but has not been fully exploited.
no code implementations • 9 Dec 2022 • Wei Chen, Xi Jia, Zhongqun Zhang, Hyung Jin Chang, Linlin Shen, Jinming Duan, Ales Leonardis
The proposed rotation representation has two major advantages: 1) its decoupled characteristic makes rotation estimation easier; 2) the flexible length and rotated angle of the vectors allow us to find a more suitable vector representation for a specific pose estimation task.
1 code implementation • CVPR 2023 • Alessandro Ruzzi, Xiangwei Shi, Xi Wang, Gengyan Li, Shalini De Mello, Hyung Jin Chang, Xucong Zhang, Otmar Hilliges
We propose GazeNeRF, a 3D-aware method for the task of gaze redirection.
no code implementations • 1 Aug 2022 • Tze Ho Elden Tse, Zhongqun Zhang, Kwang In Kim, Ales Leonardis, Feng Zheng, Hyung Jin Chang
In this paper, we propose a novel semi-supervised framework that allows us to learn contact from monocular images.
1 code implementation • 13 Jul 2022 • Boeun Kim, Hyung Jin Chang, Jungho Kim, Jin Young Choi
To tackle the learning of whole-body motion, long-range temporal dynamics, and person-to-person interactions, we design a global and local attention mechanism in which global body motions and local joint motions pay attention to each other.
no code implementations • CVPR 2022 • Tze Ho Elden Tse, Kwang In Kim, Ales Leonardis, Hyung Jin Chang
Estimating the pose and shape of hands and objects under interaction finds numerous applications including augmented and virtual reality.
Ranked #6 on hand-object pose on DexYCB
no code implementations • 7 Jan 2022 • Nora Horanyi, Kedi Xia, Kwang Moo Yi, Abhishake Kumar Bojja, Ales Leonardis, Hyung Jin Chang
We propose a novel optimization framework that crops a given image based on user description and aesthetics.
1 code implementation • 26 Nov 2021 • Jaemin Na, Dongyoon Han, Hyung Jin Chang, Wonjun Hwang
In the contrastive space, inter-domain discrepancy is mitigated by constraining instances to have contrastive views and labels, and the consensus space reduces the confusion between intra-domain categories.
Ranked #1 on Unsupervised Domain Adaptation on PACS
no code implementations • 26 Sep 2021 • Alexander Thorley, Xi Jia, Hyung Jin Chang, Boyang Liu, Karina Bunting, Victoria Stoll, Antonio de Marvao, Declan P. O'Regan, Georgios Gkoutos, Dipak Kotecha, Jinming Duan
Recent developments in stochastic approaches based on deep learning have achieved sub-second runtimes for DiffIR with competitive registration accuracy, offering a fast alternative to conventional iterative methods.
no code implementations • 25 May 2021 • Xi Jia, Alexander Thorley, Wei Chen, Huaqi Qiu, Linlin Shen, Iain B Styles, Hyung Jin Chang, Ales Leonardis, Antonio de Marvao, Declan P. O'Regan, Daniel Rueckert, Jinming Duan
We then propose two neural layers (i.e. warping layer and intensity consistency layer) to model the analytical solution and a residual U-Net to formulate the denoising problem (i.e. generalized denoising layer).
1 code implementation • CVPR 2021 • Jiwoong Park, Junho Cho, Hyung Jin Chang, Jin Young Choi
Most of the existing literature on hyperbolic embedding concentrates on supervised learning, whereas the use of unsupervised hyperbolic embedding is less well explored.
2 code implementations • CVPR 2021 • Wei Chen, Xi Jia, Hyung Jin Chang, Jinming Duan, Linlin Shen, Ales Leonardis
In this paper, we focus on category-level 6D pose and size estimation from a monocular RGB-D image.
Ranked #7 on 6D Pose Estimation using RGBD on REAL275
1 code implementation • CVPR 2021 • Jaemin Na, Heechul Jung, Hyung Jin Chang, Wonjun Hwang
However, most of these studies are based on direct adaptation from the source domain to the target domain and suffer from large domain discrepancies.
Ranked #8 on Domain Adaptation on Office-31
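The idea of softening a large domain gap with intermediate representations can be illustrated with a simple mixup-style interpolation between source and target samples (a generic sketch, not necessarily this paper's exact augmentation; the blend ratio `lam` and the toy batches are assumptions):

```python
import numpy as np

def mixup_bridge(x_source, x_target, lam):
    """Blend source and target samples to form an intermediate domain.

    lam near 1.0 keeps samples source-dominant; lam near 0.0 makes them
    target-dominant, yielding a gradual bridge between the two domains.
    """
    return lam * x_source + (1.0 - lam) * x_target

rng = np.random.default_rng(0)
x_s = rng.normal(loc=0.0, size=(4, 8))   # toy source-domain batch
x_t = rng.normal(loc=3.0, size=(4, 8))   # toy target-domain batch (shifted mean)

# An intermediate batch halfway between the domains.
x_mid = mixup_bridge(x_s, x_t, lam=0.5)
assert x_s.mean() < x_mid.mean() < x_t.mean()
```

Training on a sequence of such intermediate batches (varying `lam`) replaces one large adaptation step with several smaller ones.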
no code implementations • ECCV 2020 • Kwang In Kim, Christian Richardt, Hyung Jin Chang
Predictor combination aims to improve a (target) predictor of a learning task based on the (reference) predictors of potentially relevant tasks, without having access to the internals of individual predictors.
1 code implementation • 18 Jun 2020 • Jongin Lim, Daeho Um, Hyung Jin Chang, Dae Ung Jo, Jin Young Choi
In contrast to the existing diffusion methods with a transition matrix determined solely by the graph structure, CAD considers both the node features and the graph structure with the design of our class-attentive transition matrix that utilizes a classifier.
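A minimal sketch of a transition matrix that mixes graph structure with classifier-derived node affinity, in the spirit described above (the blending weight `beta`, the dot-product affinity, and the toy graph are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def class_attentive_transition(adj, class_probs, beta=0.5):
    """Row-stochastic transition matrix combining structure and class affinity.

    adj:         (n, n) binary adjacency matrix
    class_probs: (n, c) per-node class distributions from a classifier
    beta:        blend between pure structure (0) and class affinity (1)
    """
    # Structural transition: normalize adjacency rows.
    struct = adj / adj.sum(axis=1, keepdims=True)

    # Class affinity: similarity of class distributions between endpoints,
    # masked by the adjacency so diffusion stays on graph edges.
    affinity = (class_probs @ class_probs.T) * adj
    affinity = affinity / np.maximum(affinity.sum(axis=1, keepdims=True), 1e-12)

    return (1.0 - beta) * struct + beta * affinity

adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]], dtype=float)
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.1, 0.9]])
T = class_attentive_transition(adj, probs)
assert np.allclose(T.sum(axis=1), 1.0)  # each row is a valid distribution
```

Diffusing features with `T` then weights same-class neighbors more heavily than a purely structural transition matrix would.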
no code implementations • 7 Jun 2020 • Haiyang Chen, Hyung Jin Chang, Andrew Howes
Recent work in the behavioural sciences has begun to overturn the long-held belief that human decision making is irrational, suboptimal and subject to biases.
no code implementations • CVPR 2021 • Jongwon Choi, Kwang Moo Yi, Ji-Hoon Kim, Jinho Choo, Byoungjip Kim, Jin-Yeop Chang, Youngjune Gwon, Hyung Jin Chang
We show that our method can be applied to classification tasks on multiple different datasets -- including one that is a real-world dataset with heavy data imbalance -- significantly outperforming the state of the art.
1 code implementation • CVPR 2020 • Wei Chen, Xi Jia, Hyung Jin Chang, Jinming Duan, Ales Leonardis
Third, via the predicted segmentation and translation, we transfer the fine object point cloud into a local canonical coordinate system, in which we train a rotation localization network to estimate the initial object rotation.
1 code implementation • 5 Mar 2020 • Hyeon Cho, Tae-hoon Kim, Hyung Jin Chang, Wonjun Hwang
We propose a self-supervised visual learning method by predicting the variable playback speeds of a video.
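The pretext task can be illustrated by subsampling frames at different strides and using the stride as the prediction label (a generic sketch; the clip length, strides, and toy video are assumptions rather than the paper's exact settings):

```python
import numpy as np

def make_speed_clip(video, stride, clip_len=8, start=0):
    """Simulate playback at `stride`x speed by taking every `stride`-th frame.

    video: (num_frames, h, w) array. Returns (clip, label), where the label
    is the playback speed the self-supervised model must predict.
    """
    idx = start + stride * np.arange(clip_len)
    return video[idx], stride

video = np.arange(64).reshape(64, 1, 1)  # toy "video": frame i holds value i
clip, label = make_speed_clip(video, stride=4)
assert clip.shape == (8, 1, 1)
assert label == 4
assert (clip[:, 0, 0] == np.arange(0, 32, 4)).all()  # frames 0, 4, 8, ...
```

Classifying the stride forces the network to attend to motion dynamics rather than static appearance.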
no code implementations • CVPR 2017 • YoungJoon Yoo, Sangdoo Yun, Hyung Jin Chang, Yiannis Demiris, Jin Young Choi
(iii) The proposed regression is embedded into a generative model, and the whole procedure is developed within the variational autoencoder framework.
1 code implementation • ICCV 2019 • Jiwoong Park, Minsik Lee, Hyung Jin Chang, Kyuewang Lee, Jin Young Choi
For the reconstruction of node features, the decoder is designed based on Laplacian sharpening as the counterpart of the encoder's Laplacian smoothing, which allows the graph structure to be utilized throughout the whole proposed autoencoder architecture.
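The smoothing/sharpening duality can be sketched with symmetrically normalized operators, where sharpening is defined as the mirror of smoothing about the identity (a minimal sketch; the paper's exact renormalization may differ):

```python
import numpy as np

def smoothing_operator(adj):
    """Laplacian smoothing: averages each node with its neighbors."""
    a_hat = adj + np.eye(len(adj))              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def sharpening_operator(adj):
    """Laplacian sharpening, 2I - smoothing: amplifies a node's
    deviation from its neighborhood average (decoder-side counterpart)."""
    return 2.0 * np.eye(len(adj)) - smoothing_operator(adj)

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
features = np.array([[1.0], [0.0], [1.0]])

smoothed = smoothing_operator(adj) @ features    # pulls node values together
restored = sharpening_operator(adj) @ smoothed   # pushes them apart again
assert smoothed.std() < features.std()           # smoothing reduces variation
```

Stacking the smoothing operator in the encoder and the sharpening operator in the decoder keeps the graph structure in play on both sides of the bottleneck.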
no code implementations • CVPR 2019 • Kwang In Kim, Hyung Jin Chang
We present a new predictor combination algorithm that improves a given task predictor based on potentially relevant reference predictors.
1 code implementation • ECCV 2018 • Tobias Fischer, Hyung Jin Chang, Yiannis Demiris
We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eye-tracking glasses.
Ranked #1 on Gaze Estimation on RT-GENE
1 code implementation • CVPR 2018 • Jongwon Choi, Hyung Jin Chang, Tobias Fischer, Sangdoo Yun, Kyuewang Lee, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi
We propose a new context-aware correlation filter based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers.
Ranked #15 on Visual Object Tracking on VOT2017/18
1 code implementation • IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017 • Jongwon Choi, Hyung Jin Chang, Sangdoo Yun, Tobias Fischer, Yiannis Demiris, Jin Young Choi
We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency.
1 code implementation • 12 Jun 2017 • Clément Moulin-Frier, Tobias Fischer, Maxime Petit, Grégoire Pointeau, Jordi-Ysard Puigbo, Ugo Pattacini, Sock Ching Low, Daniel Camilleri, Phuong Nguyen, Matej Hoffmann, Hyung Jin Chang, Martina Zambelli, Anne-Laure Mealier, Andreas Damianou, Giorgio Metta, Tony J. Prescott, Yiannis Demiris, Peter Ford Dominey, Paul F. M. J. Verschure
This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot.
1 code implementation • 12 Apr 2017 • Ruohan Wang, Antoine Cully, Hyung Jin Chang, Yiannis Demiris
We propose the Margin Adaptation for Generative Adversarial Networks (MAGANs) algorithm, a novel training procedure for GANs to improve stability and performance by using an adaptive hinge loss function.
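The core idea of an adaptive hinge loss can be sketched with energy-style discriminator scores, where the margin shrinks as the discriminator's energy on real samples drops (an illustrative update rule under that assumption; the algorithm's actual adaptation conditions are more involved):

```python
import numpy as np

def discriminator_hinge_loss(energy_real, energy_fake, margin):
    """Hinge loss on energies: push real energy down and fake energy
    up toward the margin."""
    return energy_real.mean() + np.maximum(0.0, margin - energy_fake).mean()

def adapt_margin(energy_real, old_margin):
    """Illustrative adaptive update: let the margin track (never exceed)
    the expected energy assigned to real samples."""
    return min(old_margin, energy_real.mean())

e_real = np.array([0.2, 0.3, 0.1])   # toy discriminator energies on real data
e_fake = np.array([0.5, 0.4, 0.6])   # toy energies on generated data

margin = 1.0
loss_d = discriminator_hinge_loss(e_real, e_fake, margin)
margin = adapt_margin(e_real, margin)  # margin shrinks as real energy drops
assert margin < 1.0
```

Tying the margin to the model's current performance avoids hand-tuning it as a fixed hyperparameter.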
no code implementations • CVPR 2016 • Jongwon Choi, Hyung Jin Chang, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi
In this paper, we present a novel attention-modulated visual tracking algorithm that decomposes an object into multiple cognitive units, and trains multiple elementary trackers in order to modulate the distribution of attention according to various feature and kernel types.
no code implementations • CVPR 2016 • Hyung Jin Chang, Tobias Fischer, Maxime Petit, Martina Zambelli, Yiannis Demiris
In this paper, we present a novel framework for finding the kinematic structure correspondence between two objects in videos via hypergraph matching.
no code implementations • CVPR 2015 • Hyung Jin Chang, Yiannis Demiris
The iterative merge process is guided by a skeleton distance function, which is generated by a novel object boundary generation method from sparse points.
no code implementations • CVPR 2014 • Danhang Tang, Hyung Jin Chang, Alykhan Tejani, Tae-Kyun Kim
In contrast to prior forest-based methods, which take dense pixels as input, classify them independently, and then estimate joint positions afterwards, our method can be considered a structured coarse-to-fine search, starting from the centre of mass of a point cloud and proceeding until all the skeletal joints are located.