Search Results for author: Dan Casas

Found 22 papers, 4 papers with code

SMPLitex: A Generative Model and Dataset for 3D Human Texture Estimation from a Single Image

no code implementations • 4 Sep 2023 • Dan Casas, Marc Comino-Trinidad

We propose SMPLitex, a method for estimating and manipulating the complete 3D appearance of humans captured from a single image.

How Will It Drape Like? Capturing Fabric Mechanics from Depth Images

no code implementations • 13 Apr 2023 • Carlos Rodriguez-Pardo, Melania Prieto-Martin, Dan Casas, Elena Garces

We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera.

Data Augmentation • Material Recognition +2

HandFlow: Quantifying View-Dependent 3D Ambiguity in Two-Hand Reconstruction with Normalizing Flow

no code implementations • 4 Oct 2022 • Jiayi Wang, Diogo Luvizon, Franziska Mueller, Florian Bernard, Adam Kortylewski, Dan Casas, Christian Theobalt

Through this, we demonstrate the quality of our probabilistic reconstruction and show that explicit ambiguity modeling is better-suited for this challenging problem.

SNUG: Self-Supervised Neural Dynamic Garments

1 code implementation • CVPR 2022 • Igor Santesteban, Miguel A. Otaduy, Dan Casas

We present a self-supervised method to learn dynamic 3D deformations of garments worn by parametric human bodies.

A Survey on Intrinsic Images: Delving Deep Into Lambert and Beyond

no code implementations • 7 Dec 2021 • Elena Garces, Carlos Rodriguez-Pardo, Dan Casas, Jorge Lopez-Moreno

Intrinsic imaging, or intrinsic image decomposition, has traditionally been described as the problem of decomposing an image into two layers: a reflectance layer, the illumination-invariant albedo color of the material; and a shading layer, produced by the interaction between light and geometry.

Intrinsic Image Decomposition
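The decomposition described in the abstract above rests on the Lambertian assumption that an image factors pixel-wise into reflectance and shading. A toy numpy illustration of that model (not the survey's methods; the values are made up):

```python
import numpy as np

# Toy 2x2 grayscale "image" under the Lambertian model I = R * S.
reflectance = np.array([[0.8, 0.2], [0.5, 0.9]])  # albedo: lighting-invariant material color
shading = np.array([[1.0, 0.5], [0.25, 1.0]])     # light-geometry interaction
image = reflectance * shading                     # observed image

# Trivial inverse when shading happens to be known: recover the reflectance.
recovered = image / shading
assert np.allclose(recovered, reflectance)
```

In the real, ill-posed problem only `image` is observed, so priors on both layers are needed to separate them.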

RGB2Hands: Real-Time Tracking of 3D Hand Interactions from Monocular RGB Video

no code implementations • 22 Jun 2021 • Jiayi Wang, Franziska Mueller, Florian Bernard, Suzanne Sorli, Oleksandr Sotnychenko, Neng Qian, Miguel A. Otaduy, Dan Casas, Christian Theobalt

Moreover, we demonstrate that our approach offers previously unseen two-hand tracking performance from RGB, and quantitatively and qualitatively outperforms existing RGB-based methods that were not explicitly designed for two-hand interactions.

3D Reconstruction • Sign Language Recognition

Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On

1 code implementation • CVPR 2021 • Igor Santesteban, Nils Thuerey, Miguel A. Otaduy, Dan Casas

We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on that effectively addresses garment-body collisions.

Virtual Try-on

Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On

no code implementations • 9 Sep 2020 • Raquel Vidaurre, Igor Santesteban, Elena Garces, Dan Casas

Then, after a mesh topology optimization step where we generate a sufficient level of detail for the input garment type, we further deform the mesh to reproduce deformations caused by the target body shape.

Virtual Try-on

SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans

no code implementations • 1 Apr 2020 • Igor Santesteban, Elena Garces, Miguel A. Otaduy, Dan Casas

We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion.

Learning-Based Animation of Clothing for Virtual Try-On

1 code implementation • 17 Mar 2019 • Igor Santesteban, Miguel A. Otaduy, Dan Casas

We propose a model that separates global garment fit, due to body shape, from local garment wrinkles, due to both pose dynamics and body shape.

Virtual Try-on
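The fit/wrinkle separation described in the abstract above can be caricatured as two additive correction terms on a template garment mesh. A minimal numpy sketch, where the dimensions (SMPL-style 10D shape, 72D pose) and the random linear maps are made-up stand-ins for the paper's trained regressors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_verts, shape_dim, pose_dim = 100, 10, 72

template = rng.normal(size=(n_verts, 3))  # rest-pose garment mesh (hypothetical)
# Stand-in linear regressors; the paper uses learned nonlinear models.
W_fit = rng.normal(size=(shape_dim, n_verts * 3)) * 0.01           # global fit from body shape
W_wrinkle = rng.normal(size=(shape_dim + pose_dim, n_verts * 3)) * 0.01  # wrinkles from shape + pose

def drape(shape, pose):
    """Deformed garment = template + global fit term + local wrinkle term."""
    fit = (shape @ W_fit).reshape(n_verts, 3)
    wrinkles = (np.concatenate([shape, pose]) @ W_wrinkle).reshape(n_verts, 3)
    return template + fit + wrinkles

verts = drape(rng.normal(size=shape_dim), rng.normal(size=pose_dim))
print(verts.shape)  # (100, 3)
```

The additive split is what lets the fit term depend only on body shape while the wrinkle term reacts to pose dynamics as well.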

BRDF Estimation of Complex Materials with Nested Learning

no code implementations • 22 Nov 2018 • Raquel Vidaurre, Dan Casas, Elena Garces, Jorge Lopez-Moreno

The estimation of the optical properties of a material from RGB-images is an important but extremely ill-posed problem in Computer Graphics.

BRDF estimation

VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera

1 code implementation • 3 May 2017 • Dushyant Mehta, Srinath Sridhar, Oleksandr Sotnychenko, Helge Rhodin, Mohammad Shafiei, Hans-Peter Seidel, Weipeng Xu, Dan Casas, Christian Theobalt

A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton.

3D Human Pose Estimation
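The "temporally stable" reconstruction mentioned above depends on filtering per-frame CNN joint estimates over time. A toy exponential smoother conveys the idea (this is an illustrative stand-in, not VNect's actual kinematic-skeleton optimization):

```python
import numpy as np

def smooth_poses(per_frame_joints, alpha=0.5):
    """Exponentially smooth a (frames, joints, 3) sequence of 3D joint positions."""
    smoothed = np.empty_like(per_frame_joints)
    smoothed[0] = per_frame_joints[0]
    for t in range(1, len(per_frame_joints)):
        smoothed[t] = alpha * per_frame_joints[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed

# A static 17-joint pose corrupted by per-frame noise: smoothing reduces jitter.
rng = np.random.default_rng(1)
noisy = rng.normal(scale=0.1, size=(50, 17, 3))
stable = smooth_poses(noisy)
print(np.diff(stable, axis=0).std() < np.diff(noisy, axis=0).std())  # True
```

Lower `alpha` gives a smoother but laggier trajectory, the usual stability/latency trade-off in real-time tracking.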

EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract)

no code implementations • 31 Dec 2016 • Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt

Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center.

Pose Estimation

Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision

no code implementations • 29 Nov 2016 • Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, Christian Theobalt

We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data.

Monocular 3D Human Pose Estimation • Transfer Learning

Model-based Outdoor Performance Capture

no code implementations • 21 Oct 2016 • Nadia Robertini, Dan Casas, Helge Rhodin, Hans-Peter Seidel, Christian Theobalt

We propose a new model-based method to accurately reconstruct human performances captured outdoors in a multi-camera setup.

Edge Detection

Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input

no code implementations • 16 Oct 2016 • Srinath Sridhar, Franziska Mueller, Michael Zollhöfer, Dan Casas, Antti Oulasvirta, Christian Theobalt

However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately.

Object Tracking

EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras

no code implementations • 23 Sep 2016 • Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt

We therefore propose a new method for real-time, marker-less and egocentric motion capture which estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras that are attached to a helmet or virtual reality headset.

Pose Estimation • Vocal Bursts Valence Prediction
