Search Results for author: Bharat Lal Bhatnagar

Found 18 papers, 7 papers with code

Learning to Reconstruct People in Clothing from a Single RGB Camera

1 code implementation CVPR 2019 Thiemo Alldieck, Marcus Magnor, Bharat Lal Bhatnagar, Christian Theobalt, Gerard Pons-Moll

We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds with a reconstruction accuracy of 5mm.

Multi-Garment Net: Learning to Dress 3D People from Images

6 code implementations ICCV 2019 Bharat Lal Bhatnagar, Garvita Tiwari, Christian Theobalt, Gerard Pons-Moll

We present Multi-Garment Network (MGN), a method to predict body shape and clothing, layered on top of the SMPL model from a few frames (1-8) of a video.

3D Human Pose Estimation · 3D Shape Reconstruction From A Single 2D Image

Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction

1 code implementation ECCV 2020 Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt, Gerard Pons-Moll

In this work, we present a methodology that combines detail-rich implicit functions and parametric representations in order to reconstruct 3D models of people that remain controllable and accurate even in the presence of clothing.

3D Human Pose Estimation · 3D Human Reconstruction

SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing

1 code implementation ECCV 2020 Garvita Tiwari, Bharat Lal Bhatnagar, Tony Tung, Gerard Pons-Moll

SizerNet allows users to estimate and visualize the dressing effect of a garment in various sizes, and ParserNet allows editing the clothing of an input mesh directly, removing the need for scan segmentation, which is a challenging problem in itself.

3D Human Pose Estimation

LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration

no code implementations NeurIPS 2020 Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt, Gerard Pons-Moll

Formulating this closed loop is not straightforward because it is not trivial to force the output of the NN to be on the surface of the human model: outside this surface, the human model is not even defined.

Self-Supervised Learning
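The surface constraint described above can be illustrated with a much simpler stand-in than the paper's actual approach (LoopReg diffuses the model function into 3D space): snap each raw network prediction to the nearest point of a densely sampled surface. The function name and the toy unit-sphere "surface" below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def snap_to_surface(pred, surface_samples):
    """Project each predicted 3D point onto its nearest sample of a
    densely sampled model surface (a crude surrogate for constraining
    network outputs to lie on the body surface)."""
    # Pairwise squared distances between predictions (N, 3) and samples (M, 3)
    d2 = ((pred[:, None, :] - surface_samples[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)          # index of closest surface sample
    return surface_samples[nearest]

# Toy example: the "surface" is the unit sphere, sampled densely.
rng = np.random.default_rng(0)
sphere = rng.normal(size=(5000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)

pred = rng.normal(size=(10, 3)) * 2.0    # raw network outputs, off-surface
snapped = snap_to_surface(pred, sphere)  # now lie on the sampled surface
```

Note that this hard nearest-neighbor projection is not differentiable, which is exactly why formulating the closed loop end-to-end is nontrivial.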

Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes

no code implementations 1 Feb 2021 Keyang Zhou, Bharat Lal Bhatnagar, Bernt Schiele, Gerard Pons-Moll

The remarkable result is that with only self-supervision, ART facilitates learning a unique canonical orientation for both rigid and nonrigid shapes, which leads to a notable boost in the performance of the aforementioned tasks.

Disentanglement

CHORE: Contact, Human and Object REconstruction from a single RGB image

1 code implementation 5 Apr 2022 Xianghui Xie, Bharat Lal Bhatnagar, Gerard Pons-Moll

However, humans are constantly interacting with the surrounding objects, thus calling for models that can reason about not only the human but also the object and their interaction.

Object · Object Reconstruction

BEHAVE: Dataset and Method for Tracking Human Object Interactions

1 code implementation CVPR 2022 Bharat Lal Bhatnagar, Xianghui Xie, Ilya A. Petrov, Cristian Sminchisescu, Christian Theobalt, Gerard Pons-Moll

We present the BEHAVE dataset, the first full-body human-object interaction dataset with multi-view RGBD frames and corresponding 3D SMPL and object fits, along with annotated contacts between them.

Human-Object Interaction Detection · Mixed Reality +1

COUCH: Towards Controllable Human-Chair Interactions

no code implementations 1 May 2022 Xiaohan Zhang, Bharat Lal Bhatnagar, Vladimir Guzov, Sebastian Starke, Gerard Pons-Moll

In this work, we study the problem of synthesizing scene interactions conditioned on different contact positions on the object.

Human-Object Interaction Detection · Object

TOCH: Spatio-Temporal Object-to-Hand Correspondence for Motion Refinement

no code implementations 16 May 2022 Keyang Zhou, Bharat Lal Bhatnagar, Jan Eric Lenssen, Gerard Pons-Moll

The core of our method is TOCH fields, a novel spatio-temporal representation for modeling correspondences between hands and objects during interaction.

Denoising · Object +1

Visibility Aware Human-Object Interaction Tracking from Single RGB Camera

no code implementations CVPR 2023 Xianghui Xie, Bharat Lal Bhatnagar, Gerard Pons-Moll

In this work, we propose a novel method to track the 3D human, object, contacts between them, and their relative translation across frames from a single RGB camera, while being robust to heavy occlusions.

Human-Object Interaction Detection · Object +1

NSF: Neural Surface Fields for Human Modeling from Monocular Depth

no code implementations ICCV 2023 Yuxuan Xue, Bharat Lal Bhatnagar, Riccardo Marin, Nikolaos Sarafianos, Yuanlu Xu, Gerard Pons-Moll, Tony Tung

Compared to existing approaches, our method eliminates the expensive per-frame surface extraction while maintaining mesh coherency, and is capable of reconstructing meshes with arbitrary resolution without retraining.

Computational Efficiency · Virtual Try-on

GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar

no code implementations 22 Nov 2023 Berna Kabadayi, Wojciech Zielonka, Bharat Lal Bhatnagar, Gerard Pons-Moll, Justus Thies

For controlling the model, we learn a mapping from 3DMM facial expression parameters to the latent space of the generative model.

Image Generation
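The control pathway described above, from 3DMM expression parameters to the generator's latent space, can be sketched as a small mapping network. The dimensions, architecture, and random weights below are illustrative assumptions for exposition, not the paper's trained implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 50 3DMM expression coefficients mapped
# into a 512-D generator latent space via one hidden layer.
EXPR_DIM, HIDDEN, LATENT_DIM = 50, 256, 512

# Randomly initialized weights stand in for trained parameters.
W1 = rng.normal(0.0, 0.02, (EXPR_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.02, (HIDDEN, LATENT_DIM))

def expression_to_latent(expr):
    """Tiny MLP: 3DMM expression coefficients -> generator latent code."""
    h = np.maximum(expr @ W1, 0.0)   # ReLU hidden layer
    return h @ W2

expr = rng.normal(size=(1, EXPR_DIM))   # one facial-expression vector
w = expression_to_latent(expr)          # latent code that drives the avatar
```

In such a setup, only the mapping network is learned for control, while the pre-trained generative model itself stays fixed.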

Template Free Reconstruction of Human-object Interaction with Procedural Interaction Generation

no code implementations 12 Dec 2023 Xianghui Xie, Bharat Lal Bhatnagar, Jan Eric Lenssen, Gerard Pons-Moll

We generate 1M+ human-object interaction pairs in 3D and leverage this large-scale data to train our HDM (Hierarchical Diffusion Model), a novel method to reconstruct interacting human and unseen objects, without any templates.

Human-Object Interaction Detection · Object

RoHM: Robust Human Motion Reconstruction via Diffusion

no code implementations 16 Jan 2024 Siwei Zhang, Bharat Lal Bhatnagar, Yuanlu Xu, Alexander Winkler, Petr Kadlecek, Siyu Tang, Federica Bogo

We apply RoHM to a variety of tasks, from motion reconstruction and denoising to spatial and temporal infilling.

Denoising
