Search Results for author: Dimitrios Tzionas

Found 14 papers, 12 with code

Populating 3D Scenes by Learning Human-Scene Interaction

1 code implementation CVPR 2021 Mohamed Hassan, Partha Ghosh, Joachim Tesch, Dimitrios Tzionas, Michael J. Black

Second, we show that POSA's learned representation of body-scene interaction supports monocular human pose estimation that is consistent with a 3D scene, improving on the state of the art.

Pose Estimation

GRAB: A Dataset of Whole-Body Human Grasping of Objects

2 code implementations ECCV 2020 Omid Taheri, Nima Ghorbani, Michael J. Black, Dimitrios Tzionas

Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time.

Grasp Contact Prediction · Grasp Generation · +1
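The abstract above lists the ingredients a whole-body grasping dataset needs: 3D object shapes, contact information, hand pose and shape, and body motion over time. A minimal sketch of how one frame of such a dataset might be modeled — field names and sizes are illustrative assumptions, not the actual GRAB data format:

```python
from dataclasses import dataclass

# Hypothetical per-frame record for a GRAB-style grasping dataset.
# All field names and dimensions are illustrative, not the real format.
@dataclass
class GraspFrame:
    body_pose: list       # 3D body pose parameters for this frame
    hand_pose: list       # hand articulation parameters
    hand_shape: list      # hand shape coefficients
    object_mesh_id: str   # reference to the grasped 3D object shape
    contact_points: list  # contact information, e.g. mesh vertex ids
    timestamp: float      # position within the motion sequence (seconds)

frame = GraspFrame(
    body_pose=[0.0] * 66,
    hand_pose=[0.0] * 45,
    hand_shape=[0.0] * 10,
    object_mesh_id="mug",
    contact_points=[12, 87, 301],
    timestamp=0.033,
)
print(frame.object_mesh_id, len(frame.contact_points))
```

A sequence of such frames would then capture the 3D body motion over time that the paper describes.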

Learning Multi-Human Optical Flow

2 code implementations 24 Oct 2019 Anurag Ranjan, David T. Hoffmann, Dimitrios Tzionas, Siyu Tang, Javier Romero, Michael J. Black

Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset.

Motion Capture · Optical Flow Estimation

Resolving 3D Human Pose Ambiguities with 3D Scene Constraints

1 code implementation ICCV 2019 Mohamed Hassan, Vasileios Choutas, Dimitrios Tzionas, Michael J. Black

To motivate this, we show that current 3D human pose estimation methods produce results that are not consistent with the 3D scene.

3D Human Pose Estimation · Motion Capture

Learning to Train with Synthetic Humans

2 code implementations 2 Aug 2019 David T. Hoffmann, Dimitrios Tzionas, Michael J. Black, Siyu Tang

Here we explore two variations of synthetic data for this challenging problem; a dataset with purely synthetic humans and a real dataset augmented with synthetic humans.

Pose Estimation

A Comparison of Directional Distances for Hand Pose Estimation

no code implementations 3 Apr 2017 Dimitrios Tzionas, Juergen Gall

Benchmarking methods for 3D hand tracking is still an open problem due to the difficulty of acquiring ground truth data.

Hand Pose Estimation

Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points

2 code implementations 3 Apr 2017 Dimitrios Tzionas, Abhilash Srikantha, Pablo Aponte, Juergen Gall

In this work, we propose a framework for hand tracking that can capture the motion of two interacting hands using only a single, inexpensive RGB-D camera.

Motion Capture · Pose Tracking

3D Object Reconstruction from Hand-Object Interactions

3 code implementations ICCV 2015 Dimitrios Tzionas, Juergen Gall

Recent advances have enabled 3D object reconstruction approaches using a single off-the-shelf RGB-D camera.

3D Object Reconstruction · 3D Reconstruction

Reconstructing Articulated Rigged Models from RGB-D Videos

no code implementations 6 Sep 2016 Dimitrios Tzionas, Juergen Gall

Although commercial and open-source software exist to reconstruct a static object from a sequence recorded with an RGB-D sensor, there is a lack of tools that build rigged models of articulated objects that deform realistically and can be used for tracking or animation.

Motion Segmentation

Capturing Hands in Action using Discriminative Salient Points and Physics Simulation

2 code implementations 6 Jun 2015 Dimitrios Tzionas, Luca Ballan, Abhilash Srikantha, Pablo Aponte, Marc Pollefeys, Juergen Gall

Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors.

Motion Capture
