Search Results for author: Lingjie Liu

Found 66 papers, 24 papers with code

Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

5 code implementations 18 May 2023 Xingang Pan, Ayush Tewari, Thomas Leimkühler, Lingjie Liu, Abhimitra Meka, Christian Theobalt

Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects.

Image Manipulation Point Tracking +1

NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction

6 code implementations NeurIPS 2021 Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, Wenping Wang

In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation.

Novel View Synthesis Surface Reconstruction
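
A condensed sketch of the formulation summarized above, loosely following the NeuS paper's notation (discretization and training details omitted): the surface is the zero-level set of a learned SDF f, and ray colors are accumulated with weights derived from the SDF through a logistic CDF Φ_s.

```latex
% Surface: S = { x in R^3 : f(x) = 0 }.  Rendering along a ray p(t) = o + t v:
\hat{C}(\mathbf{o},\mathbf{v}) = \int_{0}^{\infty} w(t)\, c\big(\mathbf{p}(t),\mathbf{v}\big)\,\mathrm{d}t,
\qquad w(t) = T(t)\,\rho(t),
\qquad T(t) = \exp\!\Big(-\!\int_{0}^{t}\rho(u)\,\mathrm{d}u\Big),
\qquad
\rho(t) = \max\!\left(\frac{-\tfrac{\mathrm{d}\Phi_s}{\mathrm{d}t}\big(f(\mathbf{p}(t))\big)}{\Phi_s\big(f(\mathbf{p}(t))\big)},\;0\right),
\qquad \Phi_s(x) = \big(1+e^{-sx}\big)^{-1}.
```

As the sharpness parameter s grows, Φ_s approaches a step function and the rendering weights concentrate at the zero crossing of the SDF, which is what ties the rendered images to the reconstructed surface.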

StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis

1 code implementation ICLR 2022 Jiatao Gu, Lingjie Liu, Peng Wang, Christian Theobalt

We perform volume rendering only to produce a low-resolution feature map and progressively apply upsampling in 2D to address the first issue.

Image Generation

F$^{2}$-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories

1 code implementation 28 Mar 2023 Peng Wang, Yuan Liu, Zhaoxi Chen, Lingjie Liu, Ziwei Liu, Taku Komura, Christian Theobalt, Wenping Wang

Based on our analysis, we further propose a novel space-warping method called perspective warping, which allows us to handle arbitrary trajectories in the grid-based NeRF framework.

Novel View Synthesis

Neural Sparse Voxel Fields

1 code implementation NeurIPS 2020 Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, Christian Theobalt

We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering.

SyncDreamer: Generating Multiview-consistent Images from a Single-view Image

2 code implementations 7 Sep 2023 Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, Wenping Wang

In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image.

Image to 3D Novel View Synthesis +1

Multimodal Image Synthesis and Editing: The Generative AI Era

2 code implementations 27 Dec 2021 Fangneng Zhan, Yingchen Yu, Rongliang Wu, Jiahui Zhang, Shijian Lu, Lingjie Liu, Adam Kortylewski, Christian Theobalt, Eric Xing

Owing to its superb power in modeling interactions among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years.

Image Generation

NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction

1 code implementation ICCV 2023 Yiming Wang, Qin Han, Marc Habermann, Kostas Daniilidis, Christian Theobalt, Lingjie Liu

Recent methods for neural surface representation and rendering, for example NeuS, have demonstrated the remarkably high-quality reconstruction of static scenes.

Surface Reconstruction

NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images

1 code implementation 27 May 2023 Yuan Liu, Peng Wang, Cheng Lin, Xiaoxiao Long, Jiepeng Wang, Lingjie Liu, Taku Komura, Wenping Wang

We present a neural rendering-based method called NeRO for reconstructing the geometry and the BRDF of reflective objects from multiview images captured in an unknown environment.

Neural Rendering Object

Neural Rays for Occlusion-aware Image-based Rendering

1 code implementation CVPR 2022 Yuan Liu, Sida Peng, Lingjie Liu, Qianqian Wang, Peng Wang, Christian Theobalt, Xiaowei Zhou, Wenping Wang

On such a 3D point, these generalization methods will include inconsistent image features from invisible views, which interfere with the radiance field construction.

Neural Rendering Novel View Synthesis +1

Vid2Curve: Simultaneous Camera Motion Estimation and Thin Structure Reconstruction from an RGB Video

1 code implementation 7 May 2020 Peng Wang, Lingjie Liu, Nenglun Chen, Hung-Kuo Chu, Christian Theobalt, Wenping Wang

We propose the first approach that simultaneously estimates camera motion and reconstructs the geometry of complex 3D thin structures in high quality from a color video captured by a handheld camera.

Motion Estimation Occlusion Handling +1

General Neural Gauge Fields

1 code implementation 5 May 2023 Fangneng Zhan, Lingjie Liu, Adam Kortylewski, Christian Theobalt

In this work, we extend this problem to a general paradigm with a taxonomy of discrete & continuous cases, and develop a learning framework to jointly optimize gauge transformations and neural fields.

Representation Learning

Initializing Models with Larger Ones

1 code implementation 30 Nov 2023 Zhiqiu Xu, Yanjie Chen, Kirill Vishniakov, Yida Yin, Zhiqiang Shen, Trevor Darrell, Lingjie Liu, Zhuang Liu

Weight selection offers a new approach to leverage the power of pretrained models in resource-constrained settings, and we hope it can be a useful tool for training small models in the large-model era.

Knowledge Distillation
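
A minimal sketch of the weight-selection idea summarized above (not the authors' released implementation): assuming the small and large models come from the same family and share parameter names, each parameter of the small model is initialized with the leading slice of the corresponding pretrained tensor. The function name and the simple first-k selection rule are illustrative placeholders; the paper studies several selection strategies.

```python
import torch

@torch.no_grad()
def weight_selection(small_model: torch.nn.Module, large_model: torch.nn.Module) -> torch.nn.Module:
    """Initialize `small_model` from a pretrained `large_model` by copying, for every
    parameter with a matching name, the leading slice of the larger tensor so that
    shapes agree (a simple first-k element selection)."""
    large_params = dict(large_model.named_parameters())
    for name, p_small in small_model.named_parameters():
        p_large = large_params.get(name)
        if p_large is None or p_large.dim() != p_small.dim():
            continue  # e.g., the large model has extra layers or a different structure
        if any(ls < ss for ls, ss in zip(p_large.shape, p_small.shape)):
            continue  # the large tensor must cover the small one in every dimension
        slices = tuple(slice(0, s) for s in p_small.shape)
        p_small.copy_(p_large[slices])
    return small_model
```

In practice the two models would be, for example, a ViT-Tiny student and a pretrained ViT-Small from the same codebase; the uniform first-k slicing here is only one of the selection rules the paper evaluates.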

DreamEditor: Text-Driven 3D Scene Editing with Neural Fields

1 code implementation 23 Jun 2023 Jingyu Zhuang, Chen Wang, Lingjie Liu, Liang Lin, Guanbin Li

Neural fields have achieved impressive advancements in view synthesis and scene reconstruction.

3D scene Editing

Unsupervised Learning of Intrinsic Structural Representation Points

1 code implementation CVPR 2020 Nenglun Chen, Lingjie Liu, Zhiming Cui, Runnan Chen, Duygu Ceylan, Changhe Tu, Wenping Wang

The 3D structure points produced by our method encode the shape structure intrinsically and exhibit semantic consistency across all the shape instances with similar structures.

Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks

1 code implementation CVPR 2021 Xiaoxiao Long, Lingjie Liu, Wei Li, Christian Theobalt, Wenping Wang

We present a novel method for multi-view depth estimation from a single video, which is a critical task in various applications, such as perception, reconstruction and robot navigation.

Depth Estimation Robot Navigation

SEG-MAT: 3D Shape Segmentation Using Medial Axis Transform

1 code implementation 22 Oct 2020 Cheng Lin, Lingjie Liu, Changjian Li, Leif Kobbelt, Bin Wang, Shiqing Xin, Wenping Wang

Segmenting arbitrary 3D objects into constituent parts that are structurally meaningful is a fundamental problem encountered in a wide range of computer graphics applications.

Segmentation

Occlusion-Aware Depth Estimation with Adaptive Normal Constraints

1 code implementation ECCV 2020 Xiaoxiao Long, Lingjie Liu, Christian Theobalt, Wenping Wang

We present a new learning-based method for multi-frame depth estimation from a color video, which is a fundamental problem in scene understanding, robot navigation or handheld 3D reconstruction.

3D Reconstruction Depth Estimation +2

Scene-aware Egocentric 3D Human Pose Estimation

1 code implementation CVPR 2023 Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Diogo Luvizon, Christian Theobalt

To this end, we propose an egocentric depth estimation network to predict the scene depth map from a wide-view egocentric fisheye camera while mitigating the occlusion of the human body with a depth-inpainting network.

Ranked #3 on Egocentric Pose Estimation on GlobalEgoMocap Test Dataset (using extra training data)

Depth Estimation Egocentric Pose Estimation

Structure-Aware Long Short-Term Memory Network for 3D Cephalometric Landmark Detection

1 code implementation 21 Jul 2021 Runnan Chen, Yuexin Ma, Nenglun Chen, Lingjie Liu, Zhiming Cui, Yanhong Lin, Wenping Wang

Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial to assessing and quantifying the anatomical abnormalities in 3D cephalometric analysis.

Graph Attention regression

Estimating Egocentric 3D Human Pose in Global Space

1 code implementation ICCV 2021 Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Christian Theobalt

Furthermore, these methods suffer from limited accuracy and temporal instability due to ambiguities caused by the monocular setup and the severe occlusion in a strongly distorted egocentric perspective.

Ranked #4 on Egocentric Pose Estimation on SceneEgo (using extra training data)

Egocentric Pose Estimation

Neural Rendering and Reenactment of Human Actor Videos

no code implementations 11 Sep 2018 Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Hyeongwoo Kim, Florian Bernard, Marc Habermann, Wenping Wang, Christian Theobalt

In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person.

Generative Adversarial Network Image Generation +1

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation

no code implementations 14 Jan 2020 Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt

In this paper, we propose a novel human video synthesis method that approaches these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.

Image-to-Image Translation Novel View Synthesis +1

MulayCap: Multi-layer Human Performance Capture Using A Monocular Video Camera

no code implementations 13 Apr 2020 Zhaoqi Su, Weilin Wan, Tao Yu, Lingjie Liu, Lu Fang, Wenping Wang, Yebin Liu

We introduce MulayCap, a novel human performance capture method using a monocular video camera without the need for pre-scanning.

Learnable Motion Coherence for Correspondence Pruning

no code implementations CVPR 2021 Yuan Liu, Lingjie Liu, Cheng Lin, Zhen Dong, Wenping Wang

We propose a novel formulation that fits coherent motions with a smooth function on a graph of correspondences, and show that this formulation admits a closed-form solution via the graph Laplacian.

Pose Estimation
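
To make the closed-form remark above concrete, here is a generic graph-smoothing version of the idea (a simplification for illustration, not the paper's exact objective): with correspondences as graph nodes, affinity matrix W, degree matrix D = diag(W1), Laplacian L = D − W, and observed motion vectors m,

```latex
\min_{\mathbf{f}}\; \|\mathbf{f}-\mathbf{m}\|_2^2 + \lambda\,\mathbf{f}^{\top} L\,\mathbf{f}
\quad\Longrightarrow\quad
\mathbf{f}^{\ast} = (I + \lambda L)^{-1}\,\mathbf{m}.
```

Because L is symmetric positive semi-definite, the smooth motion field follows from a single linear solve rather than iterative optimization, and correspondences that disagree strongly with f* can then be flagged as outliers.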

Pose-Guided Human Animation from a Single Image in the Wild

no code implementations CVPR 2021 Jae Shin Yoon, Lingjie Liu, Vladislav Golyanik, Kripasindhu Sarkar, Hyun Soo Park, Christian Theobalt

We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses.

Pose Transfer

Learning Speech-driven 3D Conversational Gestures from Video

no code implementations 13 Feb 2021 Ikhsanul Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, Hans-Peter Seidel, Gerard Pons-Moll, Mohamed Elgharib, Christian Theobalt

We propose the first approach to automatically and jointly synthesize both the synchronous 3D conversational body and hand gestures, as well as 3D face and head animations, of a virtual character from speech input.

3D Face Animation Generative Adversarial Network +2

Style and Pose Control for Image Synthesis of Humans from a Single Monocular View

no code implementations 22 Feb 2021 Kripasindhu Sarkar, Vladislav Golyanik, Lingjie Liu, Christian Theobalt

Photo-realistic re-rendering of a human from a single image with explicit control over body pose, shape and appearance enables a wide range of applications, such as human appearance transfer, virtual try-on, motion imitation, and novel view synthesis.

Image Generation Novel View Synthesis +1

HumanGAN: A Generative Model of Human Images

no code implementations 11 Mar 2021 Kripasindhu Sarkar, Lingjie Liu, Vladislav Golyanik, Christian Theobalt

We address these limitations and present a generative model for images of dressed humans offering control over pose, local body part appearance and garment style.

Pose Transfer

Real-time Deep Dynamic Characters

no code implementations 4 May 2021 Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt

We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance learned in a new weakly supervised way from multi-view imagery.

Semi-supervised Anatomical Landmark Detection via Shape-regulated Self-training

no code implementations 28 May 2021 Runnan Chen, Yuexin Ma, Lingjie Liu, Nenglun Chen, Zhiming Cui, Guodong Wei, Wenping Wang

The global shape constraint is an inherent property of anatomical landmarks that provides valuable guidance for more consistent pseudo-labelling of unlabeled data, a property ignored by previous semi-supervised methods.

Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control

no code implementations 3 Jun 2021 Lingjie Liu, Marc Habermann, Viktor Rudnev, Kripasindhu Sarkar, Jiatao Gu, Christian Theobalt

To address this problem, we utilize a coarse body model as the proxy to unwarp the surrounding 3D space into a canonical pose.

EgoRenderer: Rendering Human Avatars from Egocentric Camera Images

no code implementations ICCV 2021 Tao Hu, Kripasindhu Sarkar, Lingjie Liu, Matthias Zwicker, Christian Theobalt

We next combine the target pose image and the textures into a joint feature image, which is transformed into the output color image using a neural image translation network.

Texture Synthesis Translation

NeRF for Outdoor Scene Relighting

no code implementations 9 Dec 2021 Viktor Rudnev, Mohamed Elgharib, William Smith, Lingjie Liu, Vladislav Golyanik, Christian Theobalt

Photorealistic editing of outdoor scenes from photographs requires a profound understanding of the image formation process and an accurate estimation of the scene geometry, reflectance and illumination.

Estimating Egocentric 3D Human Pose in the Wild with External Weak Supervision

no code implementations CVPR 2022 Jian Wang, Lingjie Liu, Weipeng Xu, Kripasindhu Sarkar, Diogo Luvizon, Christian Theobalt

Specifically, we first generate pseudo labels for the EgoPW dataset with a spatio-temporal optimization method by incorporating the external-view supervision.

Ranked #4 on Egocentric Pose Estimation on GlobalEgoMocap Test Dataset (using extra training data)

Egocentric Pose Estimation

Direct Dense Pose Estimation

no code implementations 4 Apr 2022 Liqian Ma, Lingjie Liu, Christian Theobalt, Luc van Gool

In addition, DDP is computationally more efficient than previous dense pose estimation methods, and when applied to a video sequence it reduces the jitter that plagues those methods.

Action Recognition Pose Estimation +2

Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data

no code implementations 14 Jun 2022 Mengyu Chu, Lingjie Liu, Quan Zheng, Erik Franz, Hans-Peter Seidel, Christian Theobalt, Rhaleb Zayer

With a hybrid architecture that separates static and dynamic contents, fluid interactions with static obstacles are reconstructed for the first time without additional geometry input or human labeling.

GAN2X: Non-Lambertian Inverse Rendering of Image GANs

no code implementations 18 Jun 2022 Xingang Pan, Ayush Tewari, Lingjie Liu, Christian Theobalt

2D images are observations of the 3D physical world, depicting its geometry, material, and illumination components.

3D Face Reconstruction Inverse Rendering

Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions

no code implementations 25 Jun 2022 Weilin Wan, Lei Yang, Lingjie Liu, Zhuoying Zhang, Ruixing Jia, Yi-King Choi, Jia Pan, Christian Theobalt, Taku Komura, Wenping Wang

We also observe that an object's intrinsic physical properties are useful for object motion prediction, and thus design a set of object dynamic descriptors to encode these intrinsic properties.

Human-Object Interaction Detection motion prediction +1

NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors

no code implementations 27 Jun 2022 Jiepeng Wang, Peng Wang, Xiaoxiao Long, Christian Theobalt, Taku Komura, Lingjie Liu, Wenping Wang

The key idea of NeuRIS is to integrate estimated normal of indoor scenes as a prior in a neural rendering framework for reconstructing large texture-less shapes and, importantly, to do this in an adaptive manner to also enable the reconstruction of irregular shapes with fine details.

3D Reconstruction Neural Rendering

Progressively-connected Light Field Network for Efficient View Synthesis

no code implementations 10 Jul 2022 Peng Wang, Yuan Liu, Guying Lin, Jiatao Gu, Lingjie Liu, Taku Komura, Wenping Wang

ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.

Novel View Synthesis

Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors

no code implementations 25 Aug 2022 Yiming Wang, Qingzhe Gao, Libin Liu, Lingjie Liu, Christian Theobalt, Baoquan Chen

The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.

Attribute

HDHumans: A Hybrid Approach for High-fidelity Digital Humans

no code implementations 21 Oct 2022 Marc Habermann, Lingjie Liu, Weipeng Xu, Gerard Pons-Moll, Michael Zollhoefer, Christian Theobalt

Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication over the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings.

Novel View Synthesis Surface Reconstruction +1

NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies

no code implementations CVPR 2023 Xiaoxiao Long, Cheng Lin, Lingjie Liu, Yuan Liu, Peng Wang, Christian Theobalt, Taku Komura, Wenping Wang

In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.

Neural Rendering

NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion

no code implementations 20 Feb 2023 Jiatao Gu, Alex Trevithick, Kai-En Lin, Josh Susskind, Christian Theobalt, Lingjie Liu, Ravi Ramamoorthi

Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.

Novel View Synthesis

Control3Diff: Learning Controllable 3D Diffusion Models from Single-view Images

no code implementations 13 Apr 2023 Jiatao Gu, Qingzhe Gao, Shuangfei Zhai, Baoquan Chen, Lingjie Liu, Josh Susskind

To address these challenges, we present Control3Diff, a 3D diffusion model that combines the strengths of diffusion models and 3D GANs for versatile, controllable 3D-aware image synthesis for single-view datasets.

3D-Aware Image Synthesis

F2-NeRF: Fast Neural Radiance Field Training With Free Camera Trajectories

no code implementations CVPR 2023 Peng Wang, Yuan Liu, Zhaoxi Chen, Lingjie Liu, Ziwei Liu, Taku Komura, Christian Theobalt, Wenping Wang

Existing fast grid-based NeRF training frameworks, like Instant-NGP, Plenoxels, DVGO, or TensoRF, are mainly designed for bounded scenes and rely on space warping to handle unbounded scenes.

Novel View Synthesis

State of the Art on Diffusion Models for Visual Computing

no code implementations 11 Oct 2023 Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T. Barron, Amit H. Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, C. Karen Liu, Lingjie Liu, Ben Mildenhall, Matthias Nießner, Björn Ommer, Christian Theobalt, Peter Wonka, Gordon Wetzstein

The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes.

Wonder3D: Single Image to 3D using Cross-Domain Diffusion

no code implementations 23 Oct 2023 Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang

In this work, we introduce Wonder3D, a novel method for efficiently generating high-fidelity textured meshes from single-view images. Recent methods based on Score Distillation Sampling (SDS) have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry.

Image to 3D
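
For context on the Score Distillation Sampling (SDS) approaches mentioned in the excerpt above, the SDS gradient used by DreamFusion-style methods is commonly written as (notation simplified):

```latex
\nabla_{\theta}\,\mathcal{L}_{\mathrm{SDS}}
= \mathbb{E}_{t,\,\boldsymbol{\epsilon}}\!\left[\, w(t)\,\big(\hat{\boldsymbol{\epsilon}}_{\phi}(\mathbf{x}_t;\, y,\, t)-\boldsymbol{\epsilon}\big)\,\frac{\partial \mathbf{x}}{\partial \theta}\right],
\qquad \mathbf{x} = g(\theta),
```

where x = g(θ) is a rendered view of the 3D representation θ, x_t its noised version, y the text or image condition, and ε̂_φ the frozen diffusion model's noise prediction. Each shape requires many such gradient steps on θ, which is the per-shape optimization cost that Wonder3D's cross-domain diffusion aims to avoid.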

GART: Gaussian Articulated Template Models

no code implementations 27 Nov 2023 Jiahui Lei, Yufu Wang, Georgios Pavlakos, Lingjie Liu, Kostas Daniilidis

We introduce the Gaussian Articulated Template Model (GART), an explicit, efficient, and expressive representation for capturing and rendering non-rigid articulated subjects from monocular videos.

Egocentric Whole-Body Motion Capture with FisheyeViT and Diffusion-Based Motion Refinement

no code implementations 28 Nov 2023 Jian Wang, Zhe Cao, Diogo Luvizon, Lingjie Liu, Kripasindhu Sarkar, Danhang Tang, Thabo Beeler, Christian Theobalt

In this work, we explore egocentric whole-body motion capture using a single fisheye camera, which simultaneously estimates human body and hand motion.

Ranked #1 on Egocentric Pose Estimation on GlobalEgoMocap Test Dataset (using extra training data)

Egocentric Pose Estimation Hand Detection +2

TLControl: Trajectory and Language Control for Human Motion Synthesis

no code implementations 28 Nov 2023 Weilin Wan, Zhiyang Dou, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu

Controllable human motion synthesis is essential for applications in AR/VR, gaming, movies, and embodied AI.

Motion Synthesis

DiffusionPhase: Motion Diffusion in Frequency Domain

no code implementations 7 Dec 2023 Weilin Wan, Yiming Huang, Shutong Wu, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu

In this study, we introduce a learning-based method for generating high-quality human motion sequences from text descriptions (e.g., "A person walks forward").

DressCode: Autoregressively Sewing and Generating Garments from Text Guidance

no code implementations 29 Jan 2024 Kai He, Kaixin Yao, Qixuan Zhang, Jingyi Yu, Lingjie Liu, Lan Xu

For our framework, we first introduce SewingGPT, a GPT-based architecture integrating cross-attention with text-conditioned embedding to generate sewing patterns with text guidance.

Language Modelling Large Language Model +2
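
As a rough illustration of the cross-attention-with-text-conditioning mechanism mentioned above (not the SewingGPT implementation; class names and dimensions are placeholders), a minimal PyTorch block in which pattern tokens attend to text embeddings might look like:

```python
import torch
import torch.nn as nn

class TextConditionedCrossAttention(nn.Module):
    """Generic cross-attention block: sequence tokens attend to text embeddings.
    Illustrative of text conditioning in a GPT-style decoder, not SewingGPT itself."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, tokens: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # tokens:   (B, T, d_model) autoregressively generated pattern tokens
        # text_emb: (B, S, d_model) encoded text prompt
        attended, _ = self.attn(query=self.norm(tokens), key=text_emb, value=text_emb)
        return tokens + attended  # residual connection


# Usage sketch with random tensors standing in for token and text embeddings.
block = TextConditionedCrossAttention()
out = block(torch.randn(2, 16, 512), torch.randn(2, 77, 512))
```

Interleaving such cross-attention layers with causal self-attention is the standard way to let an autoregressive decoder condition each generated token on the text prompt.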

Adaptive Surface Normal Constraint for Geometric Estimation from Monocular Images

no code implementations 8 Feb 2024 Xiaoxiao Long, Yuhang Zheng, Yupeng Zheng, Beiwen Tian, Cheng Lin, Lingjie Liu, Hao Zhao, Guyue Zhou, Wenping Wang

We introduce a novel approach to learn geometries such as depth and surface normal from images while incorporating geometric context.

Depth Estimation

CoMo: Controllable Motion Generation through Language Guided Pose Code Editing

no code implementations 20 Mar 2024 Yiming Huang, Weilin Wan, Yue Yang, Chris Callison-Burch, Mark Yatskar, Lingjie Liu

Text-to-motion models excel at efficient human motion generation, but existing approaches lack fine-grained controllability over the generation process.

Track Everything Everywhere Fast and Robustly

no code implementations 26 Mar 2024 Yunzhou Song, Jiahui Lei, Ziyun Wang, Lingjie Liu, Kostas Daniilidis

We propose a novel test-time optimization approach for efficiently and robustly tracking any pixel at any time in a video.

Inductive Bias Monocular Depth Estimation

TRAM: Global Trajectory and Motion of 3D Humans from in-the-wild Videos

no code implementations 26 Mar 2024 Yufu Wang, Ziyun Wang, Lingjie Liu, Kostas Daniilidis

We propose TRAM, a two-stage method to reconstruct a human's global trajectory and motion from in-the-wild videos.
