Search Results for author: Weipeng Xu

Found 40 papers, 10 papers with code

Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision

no code implementations 29 Nov 2016 Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, Christian Theobalt

We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data.

Monocular 3D Human Pose Estimation · Transfer Learning

VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera

1 code implementation 3 May 2017 Dushyant Mehta, Srinath Sridhar, Oleksandr Sotnychenko, Helge Rhodin, Mohammad Shafiei, Hans-Peter Seidel, Weipeng Xu, Dan Casas, Christian Theobalt

A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton.

3D Human Pose Estimation

MonoPerfCap: Human Performance Capture from Monocular Video

no code implementations 7 Aug 2017 Weipeng Xu, Avishek Chatterjee, Michael Zollhöfer, Helge Rhodin, Dushyant Mehta, Hans-Peter Seidel, Christian Theobalt

Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem.

Monocular Reconstruction · Pose Estimation +1

Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB

6 code implementations 9 Dec 2017 Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Srinath Sridhar, Gerard Pons-Moll, Christian Theobalt

Our approach uses novel occlusion-robust pose-maps (ORPM) which enable full body pose inference even under strong partial occlusions by other people and objects in the scene.

3D Human Pose Estimation · 3D Multi-Person Pose Estimation (absolute) +2

Video Based Reconstruction of 3D People Models

1 code implementation CVPR 2018 Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, Gerard Pons-Moll

This paper describes how to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving.

3D Reconstruction · Surface Reconstruction +1

Mo2Cap2: Real-time Mobile 3D Motion Capture with a Cap-mounted Fisheye Camera

no code implementations 15 Mar 2018 Weipeng Xu, Avishek Chatterjee, Michael Zollhoefer, Helge Rhodin, Pascal Fua, Hans-Peter Seidel, Christian Theobalt

We tackle these challenges based on a novel lightweight setup that converts a standard baseball cap to a device for high-quality pose estimation based on a single cap-mounted fisheye camera.

Ranked #6 on Egocentric Pose Estimation on GlobalEgoMocap Test Dataset (using extra training data)

3D Pose Estimation · Egocentric Pose Estimation

A Hybrid Model for Identity Obfuscation by Face Replacement

no code implementations ECCV 2018 Qianru Sun, Ayush Tewari, Weipeng Xu, Mario Fritz, Christian Theobalt, Bernt Schiele

As more and more personal photos are shared and tagged in social media, avoiding privacy risks such as unintended recognition becomes increasingly challenging.

Face Generation

Deep Video Portraits

no code implementations 29 May 2018 Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, Christian Theobalt

In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target.

Face Model

Detailed Human Avatars from Monocular Video

1 code implementation 3 Aug 2018 Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, Gerard Pons-Moll

We present a novel method for high detail-preserving human avatar creation from monocular video.

Neural Rendering and Reenactment of Human Actor Videos

no code implementations 11 Sep 2018 Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt

In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person.

Generative Adversarial Network · Image Generation +1

Learning a Disentangled Embedding for Monocular 3D Shape Retrieval and Pose Estimation

no code implementations 24 Dec 2018 Kyaw Zaw Lin, Weipeng Xu, Qianru Sun, Christian Theobalt, Tat-Seng Chua

We propose a novel approach to jointly perform 3D shape retrieval and pose estimation from monocular images. In order to make the method robust to real-world image variations, e.g., complex textures and backgrounds, we learn an embedding space from 3D data that only includes the relevant information, namely the shape and pose.

3D Object Retrieval · 3D Shape Classification +3

XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera

4 code implementations 1 Jul 2019 Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Mohamed Elgharib, Pascal Fua, Hans-Peter Seidel, Helge Rhodin, Gerard Pons-Moll, Christian Theobalt

The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy.

3D Multi-Person Human Pose Estimation · 3D Multi-Person Pose Estimation +1

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation

no code implementations 14 Jan 2020 Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt

In this paper, we propose a novel human video synthesis method that approaches these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.

Image-to-Image Translation · Novel View Synthesis +1

DeepCap: Monocular Human Performance Capture Using Weak Supervision

no code implementations CVPR 2020 Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality.

Pose Estimation

Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data

2 code implementations CVPR 2020 Yuxiao Zhou, Marc Habermann, Weipeng Xu, Ikhsanul Habibie, Christian Theobalt, Feng Xu

We present a novel method for monocular hand shape and pose estimation at unprecedented runtime performance of 100fps and at state-of-the-art accuracy.

Pose Estimation

PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time

no code implementations 20 Aug 2020 Soshi Shimada, Vladislav Golyanik, Weipeng Xu, Christian Theobalt

We, therefore, present PhysCap, the first algorithm for physically plausible, real-time and marker-less human 3D motion capture with a single colour camera at 25 fps.

Using Feature Alignment Can Improve Clean Average Precision and Adversarial Robustness in Object Detection

1 code implementation 8 Dec 2020 Weipeng Xu, Hongcheng Huang, Shaoyou Pan

In this paper, we propose that using feature alignment of intermediate layers can improve clean AP and robustness in object detection.

Adversarial Attack · Adversarial Robustness +3

Neural Re-Rendering of Humans from a Single Image

no code implementations ECCV 2020 Kripasindhu Sarkar, Dushyant Mehta, Weipeng Xu, Vladislav Golyanik, Christian Theobalt

Human re-rendering from a single image is a starkly under-constrained problem, and state-of-the-art algorithms often exhibit undesired artefacts, such as over-smoothing, unrealistic distortions of the body parts and garments, or implausible changes of the texture.

Translation

Learning Speech-driven 3D Conversational Gestures from Video

no code implementations 13 Feb 2021 Ikhsanul Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, Hans-Peter Seidel, Gerard Pons-Moll, Mohamed Elgharib, Christian Theobalt

We propose the first approach to automatically and jointly synthesize both the synchronous 3D conversational body and hand gestures, as well as 3D face and head animations, of a virtual character from speech input.

3D Face Animation · Generative Adversarial Network +2

Estimating Egocentric 3D Human Pose in Global Space

1 code implementation ICCV 2021 Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Christian Theobalt

Furthermore, these methods suffer from limited accuracy and temporal instability due to ambiguities caused by the monocular setup and the severe occlusion in a strongly distorted egocentric perspective.

Ranked #4 on Egocentric Pose Estimation on SceneEgo (using extra training data)

Egocentric Pose Estimation

Neural Monocular 3D Human Motion Capture with Physical Awareness

no code implementations 3 May 2021 Soshi Shimada, Vladislav Golyanik, Weipeng Xu, Patrick Pérez, Christian Theobalt

We present a new trainable system for physically plausible markerless 3D human motion capture, which achieves state-of-the-art results in a broad range of challenging scenarios.

3D Pose Estimation

Real-time Deep Dynamic Characters

no code implementations 4 May 2021 Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance learned in a new weakly supervised way from multi-view imagery.

Driving-Signal Aware Full-Body Avatars

no code implementations 21 May 2021 Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabian Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih

The core intuition behind our method is that better drivability and generalization can be achieved by disentangling the driving signals and remaining generative factors, which are not available during animation.

Imputation

Modeling Clothing as a Separate Layer for an Animatable Human Avatar

no code implementations 28 Jun 2021 Donglai Xiang, Fabian Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu

To address these difficulties, we propose a method to build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.

Inverse Rendering

NRST: Non-rigid Surface Tracking from Monocular Video

no code implementations 6 Jul 2021 Marc Habermann, Weipeng Xu, Helge Rhodin, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

Our texture term exploits the orientation information in the micro-structures of the objects, e.g., the yarn patterns of fabrics.

Robust and Accurate Object Detection via Self-Knowledge Distillation

1 code implementation 14 Nov 2021 Weipeng Xu, Pengzhi Chu, Renhao Xie, Xiongziyan Xiao, Hongcheng Huang

In this paper, we propose Unified Decoupled Feature Alignment (UDFA), a novel fine-tuning paradigm which achieves better performance than existing methods, by fully exploring the combination between self-knowledge distillation and adversarial training for object detection.

Adversarial Robustness · object-detection +3

A Deeper Look into DeepCap

no code implementations 20 Nov 2021 Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality.

Pose Estimation

Estimating Egocentric 3D Human Pose in the Wild with External Weak Supervision

no code implementations CVPR 2022 Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Diogo Luvizon, Christian Theobalt

Specifically, we first generate pseudo labels for the EgoPW dataset with a spatio-temporal optimization method by incorporating the external-view supervision.

Ranked #4 on Egocentric Pose Estimation on GlobalEgoMocap Test Dataset (using extra training data)

Egocentric Pose Estimation

HULC: 3D Human Motion Capture with Pose Manifold Sampling and Dense Contact Guidance

no code implementations 11 May 2022 Soshi Shimada, Vladislav Golyanik, Zhi Li, Patrick Pérez, Weipeng Xu, Christian Theobalt

Marker-less monocular 3D human motion capture (MoCap) with scene interactions is a challenging research topic relevant for extended reality, robotics and virtual avatar generation.

Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing

no code implementations 30 Jun 2022 Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki Shiratori, Yaser Sheikh, Jessica Hodgins, Chenglei Wu

The key idea is to introduce a neural clothing appearance model that operates on top of explicit geometry: at training time we use high-fidelity tracking, whereas at animation time we rely on physically simulated geometry.

HDHumans: A Hybrid Approach for High-fidelity Digital Humans

no code implementations 21 Oct 2022 Marc Habermann, Lingjie Liu, Weipeng Xu, Gerard Pons-Moll, Michael Zollhoefer, Christian Theobalt

Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication over the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings.

Novel View Synthesis · Surface Reconstruction +1

Scene-aware Egocentric 3D Human Pose Estimation

1 code implementation CVPR 2023 Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Diogo Luvizon, Christian Theobalt

To this end, we propose an egocentric depth estimation network to predict the scene depth map from a wide-view egocentric fisheye camera while mitigating the occlusion of the human body with a depth-inpainting network.

Ranked #3 on Egocentric Pose Estimation on GlobalEgoMocap Test Dataset (using extra training data)

Depth Estimation · Egocentric Pose Estimation

Perpetual Humanoid Control for Real-time Simulated Avatars

no code implementations ICCV 2023 Zhengyi Luo, Jinkun Cao, Alexander Winkler, Kris Kitani, Weipeng Xu

We present a physics-based humanoid controller that achieves high-fidelity motion imitation and fault-tolerant behavior in the presence of noisy input (e.g., pose estimates from video or generated from language) and unexpected falls.

Humanoid Control

Universal Humanoid Motion Representations for Physics-Based Control

no code implementations 6 Oct 2023 Zhengyi Luo, Jinkun Cao, Josh Merel, Alexander Winkler, Jing Huang, Kris Kitani, Weipeng Xu

We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.

Humanoid Control

Real-Time Simulated Avatar from Head-Mounted Sensors

no code implementations 11 Mar 2024 Zhengyi Luo, Jinkun Cao, Rawal Khirodkar, Alexander Winkler, Kris Kitani, Weipeng Xu

We present SimXR, a method for controlling a simulated avatar from information (headset pose and cameras) obtained from AR/VR headsets.

Egocentric Pose Estimation · Humanoid Control +1
