no code implementations • 22 Jul 2024 • Eugene Valassakis, Guillermo Garcia-Hernando
Predicting camera-space hand meshes from single RGB images is crucial for enabling realistic hand interactions in 3D virtual and augmented worlds.
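"Camera-space" here means mesh vertices expressed in the 3D coordinate frame of the camera. As a generic illustration (not this paper's method), a standard pinhole projection relates such a camera-space point to its image pixel; the intrinsics below are made-up example values:

```python
import numpy as np

def project(points_cam: np.ndarray, fx: float, fy: float,
            cx: float, cy: float) -> np.ndarray:
    """Pinhole-project camera-space 3D points (N, 3) to pixel coords (N, 2)."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

# Example intrinsics (hypothetical): focal lengths 500 px, principal point (320, 240)
pixels = project(np.array([[0.0, 0.0, 2.0],
                           [1.0, 0.0, 2.0]]), 500.0, 500.0, 320.0, 240.0)
```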
no code implementations • 26 Jun 2024 • Mohamed Sayed, Filippo Aleotti, Jamie Watson, Zawar Qureshi, Guillermo Garcia-Hernando, Gabriel Brostow, Sara Vicente, Michael Firman
Estimating depth from a sequence of posed RGB images is a fundamental computer vision task, with applications in augmented reality, path planning, and beyond.
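Once depth is estimated for a posed frame, it can be lifted into a camera-space point cloud with the pinhole model and fused across frames using the known poses. A minimal back-projection sketch (generic, not the paper's pipeline; intrinsics are placeholder values):

```python
import numpy as np

def backproject(depth: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Lift an (H, W) depth map into an (H, W, 3) camera-space point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy 2x2 depth map, unit focal length, principal point at the origin
cloud = backproject(np.full((2, 2), 2.0), 1.0, 1.0, 0.0, 0.0)
```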
no code implementations • CVPR 2023 • Silvan Weder, Guillermo Garcia-Hernando, Aron Monszpart, Marc Pollefeys, Gabriel Brostow, Michael Firman, Sara Vicente
We validate our approach using a new and still-challenging dataset for the task of NeRF inpainting.
1 code implementation • 11 Oct 2022 • Eduardo Arnold, Jamie Wynn, Sara Vicente, Guillermo Garcia-Hernando, Áron Monszpart, Victor Adrian Prisacariu, Daniyar Turmukhambetov, Eric Brachmann
Can we relocalize in a scene represented by a single reference image?
1 code implementation • ECCV 2020 • Anita Rau, Guillermo Garcia-Hernando, Danail Stoyanov, Gabriel J. Brostow, Daniyar Turmukhambetov
Even when this is a known scene, the answer typically requires an expensive search across scale space, with matching and geometric verification of large sets of local features.
no code implementations • 7 Aug 2020 • Guillermo Garcia-Hernando, Edward Johns, Tae-Kyun Kim
Dexterous manipulation of objects in virtual environments with our bare hands, by using only a depth sensor and a state-of-the-art 3D hand pose estimator (HPE), is challenging.
no code implementations • ECCV 2020 • Anil Armagan, Guillermo Garcia-Hernando, Seungryul Baek, Shreyas Hampali, Mahdi Rad, Zhaohui Zhang, Shipeng Xie, Mingxiu Chen, Boshen Zhang, Fu Xiong, Yang Xiao, Zhiguo Cao, Junsong Yuan, Pengfei Ren, Weiting Huang, Haifeng Sun, Marek Hrúz, Jakub Kanis, Zdeněk Krňoul, Qingfu Wan, Shile Li, Linlin Yang, Dongheui Lee, Angela Yao, Weiguo Zhou, Sijia Mei, Yun-hui Liu, Adrian Spurr, Umar Iqbal, Pavlo Molchanov, Philippe Weinzaepfel, Romain Brégier, Grégory Rogez, Vincent Lepetit, Tae-Kyun Kim
To address these issues, we designed a public challenge (HANDS'19) to evaluate the abilities of current 3D hand pose estimators (HPEs) to interpolate and extrapolate the poses of a training set.
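A simple way to operationalise "interpolation vs. extrapolation" relative to a training set is a nearest-neighbour test in pose space. This is a hypothetical stand-in for the challenge's analysis, with `threshold` an assumed tuning parameter:

```python
import numpy as np

def split_interp_extrap(train_poses: np.ndarray, test_poses: np.ndarray,
                        threshold: float) -> np.ndarray:
    """Label each test pose 'interpolation' if its nearest training pose lies
    within `threshold` in Euclidean pose space, else 'extrapolation'."""
    dists = np.linalg.norm(
        test_poses[:, None, :] - train_poses[None, :, :], axis=-1
    ).min(axis=1)
    return np.where(dists <= threshold, "interpolation", "extrapolation")

labels = split_interp_extrap(np.array([[0.0, 0.0], [1.0, 0.0]]),
                             np.array([[0.5, 0.0], [5.0, 0.0]]),
                             threshold=1.0)
```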
no code implementations • 27 Mar 2020 • Juil Sock, Guillermo Garcia-Hernando, Anil Armagan, Tae-Kyun Kim
Most successful approaches to estimating the 6D pose of an object train a neural network by supervising the learning with annotated poses in real-world images.
no code implementations • 28 Jan 2020 • Caner Sahin, Guillermo Garcia-Hernando, Juil Sock, Tae-Kyun Kim
In this paper, we present the first comprehensive, up-to-date review of methods for object pose recovery, from 3D bounding box detectors to full 6D pose estimators.
no code implementations • 19 Oct 2019 • Juil Sock, Guillermo Garcia-Hernando, Tae-Kyun Kim
In this work, we explore how a strategic selection of camera movements can facilitate the task of 6D multi-object pose estimation in cluttered scenarios while respecting real-world constraints important in robotics and augmented reality applications, such as time and distance traveled.
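The budgeted view-selection problem above can be sketched as a greedy next-best-view loop: repeatedly move to the reachable viewpoint with the best expected gain per metre of travel until the distance budget runs out. This is an illustrative heuristic under assumed inputs (a `candidates` map of viewpoint name to position and gain), not the paper's learned policy:

```python
import math

def select_camera_moves(candidates: dict, start: tuple, budget: float) -> list:
    """Greedily plan camera moves under a total travel-distance budget.

    candidates: name -> ((x, y, z) position, expected information gain)
    Returns the ordered list of chosen viewpoint names.
    """
    current, remaining = start, budget
    plan, pool = [], dict(candidates)
    while pool:
        # Score each still-reachable view by gain per unit distance travelled
        scored = [(gain / max(math.dist(current, pos), 1e-6), name, pos)
                  for name, (pos, gain) in pool.items()
                  if math.dist(current, pos) <= remaining]
        if not scored:
            break
        _, best, best_pos = max(scored)
        plan.append(best)
        remaining -= math.dist(current, best_pos)
        current = best_pos
        del pool[best]
    return plan

views = {"a": ((1, 0, 0), 5.0), "b": ((0, 2, 0), 4.0), "c": ((10, 0, 0), 100.0)}
plan = select_camera_moves(views, start=(0, 0, 0), budget=4.0)
```

Note how view "c", despite its high gain, is skipped because reaching it would exceed the budget.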
no code implementations • 11 Mar 2019 • Caner Sahin, Guillermo Garcia-Hernando, Juil Sock, Tae-Kyun Kim
6D object pose estimation is an important task that determines the 3D position and 3D rotation of an object in camera-centred coordinates.
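Concretely, a 6D pose is a 3D rotation plus a 3D translation, usually packed into a 4x4 rigid transform that maps object coordinates into camera-centred coordinates. A minimal sketch:

```python
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Compose a 6D pose (3x3 rotation + 3-vector translation) as a 4x4 transform."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

def transform_point(pose: np.ndarray, point: np.ndarray) -> np.ndarray:
    """Map a 3D point from object coordinates into camera-centred coordinates."""
    return pose[:3, :3] @ point + pose[:3, 3]

# 90-degree rotation about z, plus a 1 m translation along x
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
p_cam = transform_point(make_pose(R, t), np.array([1.0, 0.0, 0.0]))
```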
no code implementations • 25 Oct 2018 • Iason Oikonomidis, Guillermo Garcia-Hernando, Angela Yao, Antonis Argyros, Vincent Lepetit, Tae-Kyun Kim
The fourth instantiation of this workshop attracted significant interest from both academia and industry.
1 code implementation • 3 Oct 2018 • Dafni Antotsiou, Guillermo Garcia-Hernando, Tae-Kyun Kim
In this work, we capture the hand information by using a state-of-the-art hand pose estimator.
1 code implementation • CVPR 2018 • Shanxin Yuan, Guillermo Garcia-Hernando, Bjorn Stenger, Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee, Pavlo Molchanov, Jan Kautz, Sina Honari, Liuhao Ge, Junsong Yuan, Xinghao Chen, Guijin Wang, Fan Yang, Kai Akiyama, Yang Wu, Qingfu Wan, Meysam Madadi, Sergio Escalera, Shile Li, Dongheui Lee, Iason Oikonomidis, Antonis Argyros, Tae-Kyun Kim
Official Torch7 implementation of "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map", CVPR 2018
Ranked #5 on Hand Pose Estimation on HANDS 2017
no code implementations • 7 Jul 2017 • Shanxin Yuan, Qi Ye, Guillermo Garcia-Hernando, Tae-Kyun Kim
We present the 2017 Hands in the Million Challenge, a public competition designed for the evaluation of the task of 3D hand pose estimation.
1 code implementation • CVPR 2018 • Guillermo Garcia-Hernando, Shanxin Yuan, Seungryul Baek, Tae-Kyun Kim
Our dataset and experiments may be of interest to the 3D hand pose estimation, 6D object pose estimation, robotics, and action recognition communities.
no code implementations • CVPR 2017 • Guillermo Garcia-Hernando, Tae-Kyun Kim
A human action can be seen as transitions between one's body poses over time, where the transition depicts a temporal relation between two poses.
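The simplest encoding of such pose transitions is the frame-to-frame difference of joint positions; the paper learns richer temporal relations, but this sketch illustrates the idea:

```python
import numpy as np

def pose_transitions(poses: np.ndarray) -> np.ndarray:
    """Given a (T, J, 3) sequence of T body poses with J joints, return the
    (T-1, J, 3) transitions (temporal differences) between consecutive poses."""
    return np.diff(poses, axis=0)

# Toy sequence: 3 frames, 1 joint moving 0.1 units along x per frame
seq = np.array([[[0.0, 0.0, 0.0]],
                [[0.1, 0.0, 0.0]],
                [[0.2, 0.0, 0.0]]])
deltas = pose_transitions(seq)
```

Concatenating each pose with its incoming transition gives a per-frame descriptor that captures both the body configuration and how it is changing.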