no code implementations • 30 Mar 2023 • Yiming Zhao, Denys Rozumnyi, Jie Song, Otmar Hilliges, Marc Pollefeys, Martin R. Oswald
The key idea is to tackle the inverse problem of image deblurring by modeling the forward problem with a 3D human model, a texture map, and a sequence of poses to describe human motion.
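A minimal sketch of the render-and-compare idea behind such a forward model: a blurry observation is approximated as the average of sharp renderings over sub-frame poses, and the pose sequence and texture are fit by minimizing the discrepancy. The `render` function below is a toy placeholder standing in for a differentiable human renderer; it is not the paper's renderer.

```python
# Sketch only: blurry image ~ average of sharp renders over the exposure time.
import numpy as np

def render(pose, texture):
    """Placeholder renderer: produce an image from a pose vector and a texture map."""
    h, w = texture.shape[:2]
    shift = int(pose[0]) % w                          # toy 'motion': horizontal shift
    return np.roll(texture, shift, axis=1)

def blur_forward_model(poses, texture):
    """Average sharp renders over sub-frame poses to form the blurry image."""
    return np.mean([render(p, texture) for p in poses], axis=0)

def reconstruction_loss(blurry_obs, poses, texture):
    return float(np.mean((blur_forward_model(poses, texture) - blurry_obs) ** 2))
```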
no code implementations • 30 Mar 2023 • Sammy Christen, Wei Yang, Claudia Pérez-D'Arpino, Otmar Hilliges, Dieter Fox, Yu-Wei Chao
We propose the first framework to learn control policies for vision-based human-to-robot handovers, a critical task for human-robot interaction.
no code implementations • 27 Mar 2023 • Yifei Yin, Chen Guo, Manuel Kaufmann, Juan Jose Zarate, Jie Song, Otmar Hilliges
We propose Hi4D, a method and dataset for the automatic analysis of physically close human-human interaction under prolonged contact.
no code implementations • 16 Mar 2023 • Núria Armengol Urpí, Marco Bagatella, Otmar Hilliges, Georg Martius, Stelian Coros
Real-world robotic manipulation tasks remain an elusive challenge, since they require both fine-grained environment interaction and the ability to plan for long-horizon goals.
1 code implementation • 8 Mar 2023 • Kaiyue Shen, Chen Guo, Manuel Kaufmann, Juan Jose Zarate, Julien Valentin, Jie Song, Otmar Hilliges
Our method models bodies, hands, facial expressions and appearance in a holistic fashion and can be learned from either full 3D scans or RGB-D data.
no code implementations • 22 Feb 2023 • Chen Guo, Tianjian Jiang, Xu Chen, Jie Song, Otmar Hilliges
Specifically, we define a temporally consistent human representation in canonical space and formulate a global optimization over the background model, the canonical human shape and texture, and per-frame human pose parameters.
no code implementations • 22 Jan 2023 • Razvan-George Pasca, Alexey Gavryushin, Yen-Ling Kuo, Luc van Gool, Otmar Hilliges, Xi Wang
This action context together with the next video frame is processed by the multimodal fusion module to forecast the next object interaction.
no code implementations • 20 Dec 2022 • Tianjian Jiang, Xu Chen, Jie Song, Otmar Hilliges
To achieve this efficiency, we propose a carefully designed and engineered system that leverages emerging acceleration structures for neural fields in combination with an efficient empty-space-skipping strategy for dynamic scenes.
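A minimal sketch of generic occupancy-grid-based empty-space skipping for ray marching, assuming a boolean occupancy grid and an axis-aligned bounding box; this illustrates the general idea only, not the system described above.

```python
import numpy as np

def skip_empty_samples(ray_o, ray_d, occupancy, grid_min, grid_max, n_samples=128):
    """Sample points along a ray and keep only those landing in occupied voxels."""
    t = np.linspace(0.0, 1.0, n_samples)                        # normalized depths
    pts = ray_o[None, :] + t[:, None] * ray_d[None, :]          # (n_samples, 3)
    # Map world coordinates to integer voxel indices.
    res = np.array(occupancy.shape)
    idx = np.floor((pts - grid_min) / (grid_max - grid_min) * res).astype(int)
    inside = np.all((idx >= 0) & (idx < res), axis=1)
    keep = np.zeros(n_samples, dtype=bool)
    keep[inside] = occupancy[idx[inside, 0], idx[inside, 1], idx[inside, 2]]
    return pts[keep]                                            # only samples in occupied space

# Toy usage: a 32^3 grid with a single occupied block around the origin.
occ = np.zeros((32, 32, 32), dtype=bool)
occ[12:20, 12:20, 12:20] = True
samples = skip_empty_samples(np.array([-1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]),
                             occ, grid_min=np.array([-1.0, -1.0, -1.0]),
                             grid_max=np.array([1.0, 1.0, 1.0]))
```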
no code implementations • 19 Dec 2022 • Korrawe Karunratanakul, Sergey Prokudin, Otmar Hilliges, Siyu Tang
We present HARP (HAnd Reconstruction and Personalization), a personalized hand avatar creation approach that takes a short monocular RGB video of a human hand as input and reconstructs a faithful hand avatar exhibiting a high-fidelity appearance and geometry.
1 code implementation • 16 Dec 2022 • Yufeng Zheng, Wang Yifan, Gordon Wetzstein, Michael J. Black, Otmar Hilliges
The ability to create realistic, animatable and relightable head avatars from casual video sequences would open up wide-ranging applications in communication and entertainment.
no code implementations • 14 Dec 2022 • Artur Grigorev, Bernhard Thomaszewski, Michael J. Black, Otmar Hilliges
We propose a method that leverages graph neural networks, multi-level message passing, and unsupervised training to enable real-time prediction of realistic clothing dynamics.
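A minimal sketch of a single message-passing step on a garment mesh graph, as a generic illustration of the mechanism mentioned above; the feature sizes, two-layer MLPs and sum aggregation are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    def __init__(self, node_dim=64, edge_dim=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU(),
                                      nn.Linear(edge_dim, edge_dim))
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU(),
                                      nn.Linear(node_dim, node_dim))

    def forward(self, x, edge_index, e):
        # x: (N, node_dim) node features, e: (E, edge_dim) edge features,
        # edge_index: (2, E) sender/receiver node indices.
        senders, receivers = edge_index
        e = self.edge_mlp(torch.cat([x[senders], x[receivers], e], dim=-1))
        # Aggregate incoming messages per receiver node (sum aggregation).
        agg = torch.zeros(x.shape[0], e.shape[-1], device=x.device).index_add_(0, receivers, e)
        x = x + self.node_mlp(torch.cat([x, agg], dim=-1))       # residual node update
        return x, e
```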
1 code implementation • 8 Dec 2022 • Alessandro Ruzzi, Xiangwei Shi, Xi Wang, Gengyan Li, Shalini De Mello, Hyung Jin Chang, Xucong Zhang, Otmar Hilliges
We propose GazeNeRF, a 3D-aware method for the task of gaze redirection.
no code implementations • 29 Nov 2022 • Malte Prinzler, Otmar Hilliges, Justus Thies
We present Depth-aware Image-based NEural Radiance fields (DINER).
1 code implementation • 28 Nov 2022 • Xu Chen, Tianjian Jiang, Jie Song, Max Rietmann, Andreas Geiger, Michael J. Black, Otmar Hilliges
A key challenge in making such methods applicable to articulated objects, such as the human body, is to model the deformation of 3D locations between the rest pose (a canonical space) and the deformed space.
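For context, a minimal sketch of linear blend skinning (LBS), the standard canonical-to-deformed mapping given per-bone transforms and skinning weights. The paper addresses the harder inverse direction (deformed to canonical), which is not reproduced here.

```python
import numpy as np

def lbs_forward(x_canonical, weights, bone_transforms):
    """x_canonical: (N, 3), weights: (N, B), bone_transforms: (B, 4, 4)."""
    N = x_canonical.shape[0]
    x_h = np.concatenate([x_canonical, np.ones((N, 1))], axis=1)   # homogeneous coordinates
    # Blend the bone transforms per point using the skinning weights.
    T = np.einsum('nb,bij->nij', weights, bone_transforms)          # (N, 4, 4)
    x_deformed = np.einsum('nij,nj->ni', T, x_h)[:, :3]
    return x_deformed
```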
no code implementations • 14 Nov 2022 • Mengfan Wu, Thomas Langerak, Otmar Hilliges, Juan Zarate
We propose to overcome these limitations by using neural networks to infer the marker's position and orientation.
no code implementations • 6 Sep 2022 • Xi Wang, Gen Li, Yen-Ling Kuo, Muhammed Kocabas, Emre Aksan, Otmar Hilliges
We further qualitatively evaluate the effectiveness of our method on real images and demonstrate its generalizability towards interaction types and object categories.
no code implementations • 1 Sep 2022 • Andrea Ziani, Zicong Fan, Muhammed Kocabas, Sammy Christen, Otmar Hilliges
We introduce TempCLR, a new time-coherent contrastive learning approach for the structured regression task of 3D hand reconstruction.
no code implementations • 16 Jun 2022 • Gengyan Li, Abhimitra Meka, Franziska Müller, Marcel C. Bühler, Otmar Hilliges, Thabo Beeler
The challenge of synthesizing eyes is multifold as it requires 1) appropriate representations for the various components of the eye and the periocular region for coherent viewpoint synthesis, capable of representing diffuse, refractive and highly reflective surfaces, 2) disentangling skin and eye appearance from environmental illumination such that it may be rendered under novel lighting conditions, and 3) capturing eyeball motion and the deformation of the surrounding skin to enable re-gazing.
no code implementations • 26 May 2022 • Marco Bagatella, Sammy Christen, Otmar Hilliges
Several methods, such as behavioral priors, are able to leverage offline data in order to efficiently accelerate reinforcement learning on complex tasks.
no code implementations • 28 Apr 2022 • Zicong Fan, Omid Taheri, Dimitrios Tzionas, Muhammed Kocabas, Manuel Kaufmann, Michael J. Black, Otmar Hilliges
Consequently, we introduce ARCTIC - the first dataset of free-form interactions of hands and articulated objects.
no code implementations • 15 Mar 2022 • Emre Aksan, Shugao Ma, Akin Caliskan, Stanislav Pidhorskyi, Alexander Richard, Shih-En Wei, Jason Saragih, Otmar Hilliges
To mitigate this asymmetry, we introduce a prior model that is conditioned on the runtime inputs and tie this prior space to the 3D face model via a normalizing flow in the latent space.
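A minimal sketch of a conditional affine coupling layer, a common building block of normalizing flows in which the scale and shift networks are conditioned on external inputs; dimensions and network sizes are illustrative assumptions and this is not the paper's flow.

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    def __init__(self, dim=64, cond_dim=32, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, z, cond):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        scale, shift = self.net(torch.cat([z1, cond], dim=-1)).chunk(2, dim=-1)
        scale = torch.tanh(scale)                       # keep the transform well-conditioned
        z2 = z2 * torch.exp(scale) + shift              # invertible affine transform of z2
        log_det = scale.sum(dim=-1)                     # log-determinant of the Jacobian
        return torch.cat([z1, z2], dim=-1), log_det
```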
no code implementations • CVPR 2022 • Zijian Dong, Chen Guo, Jie Song, Xu Chen, Andreas Geiger, Otmar Hilliges
We present a novel method to learn Personalized Implicit Neural Avatars (PINA) from a short RGB-D sequence.
no code implementations • CVPR 2022 • Xu Chen, Tianjian Jiang, Jie Song, Jinlong Yang, Michael J. Black, Andreas Geiger, Otmar Hilliges
Furthermore, we show that our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
no code implementations • 19 Dec 2021 • Yan Wu, Jiahao Wang, Yan Zhang, Siwei Zhang, Otmar Hilliges, Fisher Yu, Siyu Tang
Given an initial pose and the generated whole-body grasping pose as the start and end of the motion, respectively, we design a novel contact-aware generative motion infilling module to generate a diverse set of grasp-oriented motions.
1 code implementation • CVPR 2022 • Yufeng Zheng, Victoria Fernández Abrevaya, Marcel C. Bühler, Xu Chen, Michael J. Black, Otmar Hilliges
Traditional 3D morphable face models (3DMMs) provide fine-grained control over expression but cannot easily capture geometric and appearance details.
1 code implementation • CVPR 2022 • Sammy Christen, Muhammed Kocabas, Emre Aksan, Jemin Hwangbo, Jie Song, Otmar Hilliges
We introduce the dynamic grasp synthesis task: given an object with a known 6D pose and a grasp reference, our goal is to generate motions that move the object to a target 6D pose.
1 code implementation • 29 Nov 2021 • Chen Guo, Xu Chen, Jie Song, Otmar Hilliges
In this work, we propose a method capable of capturing the dynamic 3D human shape from a monocular video featuring challenging body poses, without any additional input.
no code implementations • 1 Nov 2021 • Hsuan-I Ho, Xu Chen, Jie Song, Otmar Hilliges
We propose to address these issues in a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
no code implementations • ICCV 2021 • Zijian Dong, Jie Song, Xu Chen, Chen Guo, Otmar Hilliges
In this paper we contribute a simple yet effective approach for estimating 3D poses of multiple people from multi-view images.
Ranked #16 on 3D Multi-Person Pose Estimation on Shelf
1 code implementation • ICCV 2021 • Muhammed Kocabas, Chun-Hao P. Huang, Joachim Tesch, Lea Müller, Otmar Hilliges, Michael J. Black
We then train a novel network that concatenates the camera calibration to the image features and uses these together to regress 3D body shape and pose.
Ranked #1 on 3D Human Pose Estimation on AGORA
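A minimal sketch of the idea described above of concatenating camera calibration to image features before regressing body shape and pose; the feature size, the three calibration values and the 85-dimensional output are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class CalibratedRegressor(nn.Module):
    def __init__(self, feat_dim=2048, cam_dim=3, n_params=85):
        super().__init__()
        # cam_dim could hold, e.g., field of view, pitch and roll of the camera.
        self.head = nn.Sequential(nn.Linear(feat_dim + cam_dim, 1024), nn.ReLU(),
                                  nn.Linear(1024, n_params))

    def forward(self, img_features, cam_calib):
        # Concatenate calibration to the image features, then regress body parameters.
        return self.head(torch.cat([img_features, cam_calib], dim=-1))
```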
no code implementations • 23 Sep 2021 • Korrawe Karunratanakul, Adrian Spurr, Zicong Fan, Otmar Hilliges, Siyu Tang
We present Hand ArticuLated Occupancy (HALO), a novel representation of articulated hands that bridges the advantages of 3D keypoints and neural implicit surfaces and can be used in end-to-end trainable architectures.
1 code implementation • 1 Jul 2021 • Zicong Fan, Adrian Spurr, Muhammed Kocabas, Siyu Tang, Michael J. Black, Otmar Hilliges
In natural conversation and interaction, our hands often overlap or are in contact with each other.
Ranked #1 on 3D Interacting Hand Pose Estimation on InterHand2.6M
1 code implementation • ICCV 2021 • Adrian Spurr, Aneesh Dahiya, Xi Wang, Xucong Zhang, Otmar Hilliges
Encouraged by the success of contrastive learning on image classification tasks, we propose a new self-supervised method for the structured regression task of 3D hand pose estimation.
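For reference, a minimal sketch of the NT-Xent contrastive objective that image-level contrastive pre-training builds on; the paper's adaptation to the structured regression setting is not reproduced here.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two augmented views of the same images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)                 # (2B, D)
    sim = z @ z.t() / temperature                                       # cosine similarities
    sim.fill_diagonal_(float('-inf'))                                   # exclude self-pairs
    B = z1.shape[0]
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])   # index of each positive
    return F.cross_entropy(sim, targets)
```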
no code implementations • 10 Jun 2021 • Adrian Spurr, Pavlo Molchanov, Umar Iqbal, Jan Kautz, Otmar Hilliges
Hand pose estimation is difficult due to different environmental conditions, object- and self-occlusion as well as diversity in hand shape and appearance.
1 code implementation • ICCV 2021 • Muhammed Kocabas, Chun-Hao P. Huang, Otmar Hilliges, Michael J. Black
Despite significant progress, we show that state-of-the-art 3D human pose and shape estimation methods remain sensitive to partial occlusion and can produce dramatically wrong predictions even though much of the body is observable.
Ranked #2 on 3D Multi-Person Pose Estimation on AGORA
1 code implementation • ICCV 2021 • Marcel C. Bühler, Abhimitra Meka, Gengyan Li, Thabo Beeler, Otmar Hilliges
In this paper, we propose VariTex - to the best of our knowledge the first method that learns a variational latent feature space of neural face textures, which allows sampling of novel identities.
1 code implementation • ICCV 2021 • Xu Chen, Yufeng Zheng, Michael J. Black, Otmar Hilliges, Andreas Geiger
However, this is problematic since the backward warp field is pose dependent and thus requires large amounts of data to learn.
no code implementations • 22 Feb 2021 • Nikola Vulin, Sammy Christen, Stefan Stevsic, Otmar Hilliges
In this paper we address the challenge of exploration in deep reinforcement learning for robotic manipulation tasks.
no code implementations • 19 Jan 2021 • Alexis E. Block, Sammy Christen, Roger Gassert, Otmar Hilliges, Katherine J. Kuchenbecker
We followed all six tenets to create a new robotic platform, HuggieBot 2.0, that has a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging.
no code implementations • 5 Jan 2021 • Stefan Stevsic, Otmar Hilliges
Our main insight is that after the initial pose estimate, it is important to pay attention to distinct spatial features of the object in order to improve the estimation accuracy during alignment.
1 code implementation • ICCV 2021 • Manuel Kaufmann, Yi Zhao, Chengcheng Tang, Lingling Tao, Christopher Twigg, Jie Song, Robert Wang, Otmar Hilliges
To this end, we present a method to estimate SMPL parameters from 6-12 EM sensors.
2 code implementations • NeurIPS 2020 • Yufeng Zheng, Seonwook Park, Xucong Zhang, Shalini De Mello, Otmar Hilliges
Furthermore, we show that in the presence of limited amounts of real-world training data, our method allows for improvements in the downstream task of semi-supervised cross-dataset gaze estimation.
1 code implementation • 22 Oct 2020 • Manuel Kaufmann, Emre Aksan, Jie Song, Fabrizio Pece, Remo Ziegler, Otmar Hilliges
At the heart of our approach lies the idea to cast motion infilling as an inpainting problem and to train a convolutional de-noising autoencoder on image-like representations of motion sequences.
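A minimal sketch of the inpainting view of motion infilling: a sequence is treated as a channels-by-time "image", a temporal span is masked out, and a convolutional autoencoder is trained to reconstruct the full sequence. The architecture, channel count and mask below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MotionInpainter(nn.Module):
    def __init__(self, channels=72):                     # e.g. 24 joints x 3 values per joint
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv1d(channels, 256, 5, padding=2), nn.ReLU(),
                                     nn.Conv1d(256, 256, 5, padding=2), nn.ReLU())
        self.decoder = nn.Conv1d(256, channels, 5, padding=2)

    def forward(self, motion, mask):
        # motion: (B, channels, T); mask: (B, 1, T) with 0 over the gap to fill.
        return self.decoder(self.encoder(motion * mask))

model = MotionInpainter()
motion = torch.randn(8, 72, 120)                         # batch of 120-frame sequences
mask = torch.ones(8, 1, 120)
mask[:, :, 40:80] = 0                                    # hide the middle of each sequence
loss = ((model(motion, mask) - motion) ** 2).mean()      # reconstruct the full sequence
```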
no code implementations • ECCV 2020 • Jie Song, Xu Chen, Otmar Hilliges
We propose a novel algorithm for the fitting of 3D human shape to images.
no code implementations • ECCV 2020 • Xu Chen, Zijian Dong, Jie Song, Andreas Geiger, Otmar Hilliges
Many object pose estimation algorithms rely on the analysis-by-synthesis framework which requires explicit representations of individual object instances.
1 code implementation • ECCV 2020 • Xucong Zhang, Seonwook Park, Thabo Beeler, Derek Bradley, Siyu Tang, Otmar Hilliges
We show that our dataset can significantly improve the robustness of gaze estimation methods across different head poses and gaze angles.
Ranked #1 on Gaze Estimation on MPSGaze
1 code implementation • ECCV 2020 • Seonwook Park, Emre Aksan, Xucong Zhang, Otmar Hilliges
Estimating eye-gaze from images alone is a challenging task, in large parts due to un-observable person-specific factors.
1 code implementation • NeurIPS 2020 • Emre Aksan, Thomas Deselaers, Andrea Tagliasacchi, Otmar Hilliges
We demonstrate qualitatively and quantitatively that our proposed approach is able to model the appearance of individual strokes, as well as the compositional structure of larger diagram drawings.
1 code implementation • 18 Apr 2020 • Emre Aksan, Manuel Kaufmann, Peng Cao, Otmar Hilliges
We propose a novel Transformer-based architecture for the task of generative modelling of 3D human motion.
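A minimal sketch of an autoregressive Transformer over pose vectors: past poses attend to each other through a causal mask and the model predicts the next pose. Layer counts, pose dimensionality and the learned positional encoding are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class MotionTransformer(nn.Module):
    def __init__(self, pose_dim=135, d_model=256, n_layers=4, n_heads=4, max_len=512):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, pose_dim)

    def forward(self, poses):                                       # poses: (B, T, pose_dim)
        T = poses.shape[1]
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.encoder(self.embed(poses) + self.pos[:, :T], mask=causal)  # no peeking ahead
        return self.out(h)                                          # predicted next-step poses
```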
no code implementations • ECCV 2020 • Adrian Spurr, Umar Iqbal, Pavlo Molchanov, Otmar Hilliges, Jan Kautz
Estimating 3D hand pose from 2D images is a difficult, inverse problem due to the inherent scale and depth ambiguities.
2 code implementations • 20 Feb 2020 • Philippe Gervais, Thomas Deselaers, Emre Aksan, Otmar Hilliges
We are releasing a dataset of diagram drawings with dynamic drawing information.
no code implementations • 14 Feb 2020 • Sammy Christen, Lukas Jendele, Emre Aksan, Otmar Hilliges
We present HiDe, a novel hierarchical reinforcement learning architecture that successfully solves long horizon control tasks and generalizes to unseen test scenarios.
no code implementations • 4 Jan 2020 • Christoph Gebhardt, Antti Oulasvirta, Otmar Hilliges
The results support hierarchical RL as a plausible model of task interleaving.
1 code implementation • 8 Nov 2019 • Marcel Bühler, Seonwook Park, Shalini De Mello, Xucong Zhang, Otmar Hilliges
Accurately labeled real-world training data can be scarce, and hence recent works adapt, modify or generate images to boost target datasets.
1 code implementation • ICCV 2019 • Emre Aksan, Manuel Kaufmann, Otmar Hilliges
This is implemented via a hierarchy of small-sized neural networks connected analogously to the kinematic chains in the human body as well as a joint-wise decomposition in the loss function.
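A minimal sketch of such a structured prediction head: one small network per joint, each receiving shared features plus its parent joint's prediction, wired along the kinematic chain. The parent list and sizes below are illustrative assumptions; in training, the loss would then be summed over per-joint terms.

```python
import torch
import torch.nn as nn

class KinematicChainHead(nn.Module):
    def __init__(self, feat_dim=256, joint_dim=9, parents=(-1, 0, 1, 2, 0, 4)):
        super().__init__()
        self.parents = parents                           # parent index per joint, -1 = root
        self.joint_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim + (0 if p < 0 else joint_dim), 64), nn.ReLU(),
                          nn.Linear(64, joint_dim))
            for p in parents])

    def forward(self, features):                         # features: (B, feat_dim)
        preds = []
        for j, p in enumerate(self.parents):
            inp = features if p < 0 else torch.cat([features, preds[p]], dim=-1)
            preds.append(self.joint_nets[j](inp))        # each joint sees its parent's output
        return torch.stack(preds, dim=1)                 # (B, n_joints, joint_dim)
```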
no code implementations • 25 Sep 2019 • Lukas Jendele, Sammy Christen, Emre Aksan, Otmar Hilliges
Hierarchical Reinforcement Learning (HRL) has held the promise to enhance the capabilities of RL agents via operation on different levels of temporal abstraction.
no code implementations • 28 Jun 2019 • Stefan Stevsic, Tobias Naegeli, Javier Alonso-Mora, Otmar Hilliges
This enables an easy to implement learning algorithm that is robust to errors of the model used in the model predictive controller.
no code implementations • 27 Jun 2019 • Sammy Christen, Stefan Stevsic, Otmar Hilliges
In this paper, we propose a method for training control policies for human-robot interactions such as handshakes or hand claps via Deep Reinforcement Learning.
1 code implementation • ICCV 2019 • Seonwook Park, Shalini De Mello, Pavlo Molchanov, Umar Iqbal, Otmar Hilliges, Jan Kautz
Inter-personal anatomical differences limit the accuracy of person-independent gaze estimation networks.
Ranked #1 on Gaze Estimation on MPII Gaze (using extra training data)
1 code implementation • ICCV 2019 • Zhe He, Adrian Spurr, Xucong Zhang, Otmar Hilliges
In this work, we present a novel method to alleviate this problem by leveraging generative adversarial training to synthesize an eye image conditioned on a target gaze direction.
1 code implementation • ICLR 2019 • Emre Aksan, Otmar Hilliges
Convolutional architectures have recently been shown to be competitive on many sequence modelling tasks when compared to the de-facto standard of recurrent neural networks (RNNs), while providing computational and modeling advantages due to inherent parallelism.
1 code implementation • 8 Jan 2019 • Xu Chen, Jie Song, Otmar Hilliges
This paper studies the task of full generative modelling of realistic images of humans, guided only by coarse sketch of the pose, while providing control over the specific instance or type of outfit worn by the user.
2 code implementations • ICCV 2019 • Xu Chen, Jie Song, Otmar Hilliges
The approach is self-supervised and only requires 2D images and associated view transforms for training.
no code implementations • ICCV 2019 • Jie Song, Bjoern Andres, Michael Black, Otmar Hilliges, Siyu Tang
The new optimization problem can be viewed as a Conditional Random Field (CRF) in which the random variables are associated with the binary edge labels of the initial graph and the hard constraints are introduced in the CRF as high-order potentials.
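A generic form of such an energy over binary edge labels, shown for orientation only; the exact unary costs and high-order terms used in the paper are not reproduced here.

```latex
% Unary edge costs \theta_e plus high-order potentials \phi_C over cliques C
% encoding the hard constraints.
E(\mathbf{y}) \;=\; \sum_{e \in E} \theta_e\, y_e \;+\; \sum_{C \in \mathcal{C}} \phi_C(\mathbf{y}_C),
\qquad y_e \in \{0, 1\}
```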
1 code implementation • 10 Oct 2018 • Yinghao Huang, Manuel Kaufmann, Emre Aksan, Michael J. Black, Otmar Hilliges, Gerard Pons-Moll
To learn from sufficient data, we synthesize IMU data from motion capture datasets.
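A minimal sketch of synthesizing accelerometer readings from motion capture: finite differences of sensor positions give world-frame acceleration, gravity is added, and the result is rotated into the sensor frame. Frame rate and conventions are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def synthesize_imu(positions, orientations, fps=60.0, gravity=(0.0, -9.81, 0.0)):
    """positions: (T, 3) sensor positions; orientations: (T, 3, 3) sensor-to-world rotations."""
    dt = 1.0 / fps
    # Second-order central differences give world-frame linear acceleration.
    acc_world = (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt ** 2
    acc_world = acc_world - np.asarray(gravity)          # accelerometers sense specific force
    # Rotate accelerations into the local sensor frame (align with central differences).
    R = orientations[1:-1]
    acc_local = np.einsum('tij,tj->ti', R.transpose(0, 2, 1), acc_world)
    return acc_local                                     # (T-2, 3) synthetic readings
```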
1 code implementation • ECCV 2018 • Seonwook Park, Adrian Spurr, Otmar Hilliges
In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input.
no code implementations • ECCV 2018 • Benjamin Hepp, Debadeepta Dey, Sudipta N. Sinha, Ashish Kapoor, Neel Joshi, Otmar Hilliges
We propose to learn a better utility function that predicts the usefulness of future viewpoints.
2 code implementations • 12 May 2018 • Seonwook Park, Xucong Zhang, Andreas Bulling, Otmar Hilliges
Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras.
1 code implementation • CVPR 2018 • Adrian Spurr, Jie Song, Seonwook Park, Otmar Hilliges
Furthermore, we show that our proposed method can be used without changes on depth images and performs comparably to specialized methods.
1 code implementation • 25 Jan 2018 • Emre Aksan, Fabrizio Pece, Otmar Hilliges
Digital ink promises to combine the flexibility and aesthetics of handwriting and the ability to process, search and edit digital text.
2 code implementations • 14 Jul 2017 • Adrian Spurr, Emre Aksan, Otmar Hilliges
In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max.
no code implementations • 25 May 2017 • Benjamin Hepp, Matthias Nießner, Otmar Hilliges
We introduce a new method that efficiently computes a set of viewpoints and trajectories for high-quality 3D reconstructions in outdoor environments.
no code implementations • 10 Apr 2017 • Partha Ghosh, Jie Song, Emre Aksan, Otmar Hilliges
Furthermore, we propose new evaluation protocols to assess the quality of synthetic motion sequences even for which no ground truth data exists.
no code implementations • CVPR 2017 • Jie Song, Li-Min Wang, Luc van Gool, Otmar Hilliges
Temporal information can provide additional cues about the location of body joints and help to alleviate these issues.
Ranked #4 on Pose Estimation on UPenn Action