Search Results for author: Xuelin Chen

Found 15 papers, 7 papers with code

CT4D: Consistent Text-to-4D Generation with Animatable Meshes

no code implementations · 15 Aug 2024 · Ce Chen, Shaoli Huang, Xuelin Chen, Guangyi Chen, Xiaoguang Han, Kun Zhang, Mingming Gong

The primary challenges of our mesh-based framework involve stably generating a mesh with details that align with the text prompt while directly driving it and maintaining surface continuity.

CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner

1 code implementation · 23 May 2024 · Weiyu Li, Jiarui Liu, Rui Chen, Yixun Liang, Xuelin Chen, Ping Tan, Xiaoxiao Long

We present a novel generative 3D modeling system, coined CraftsMan, which can generate high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and detailed surfaces, and, notably, allows for refining the geometry in an interactive manner.

3D Generation · 3D Geometry

Taming Diffusion Probabilistic Models for Character Control

1 code implementation · 23 Apr 2024 · Rui Chen, Mingyi Shi, Shaoli Huang, Ping Tan, Taku Komura, Xuelin Chen

We present a novel character control framework that effectively utilizes motion diffusion probabilistic models to generate high-quality and diverse character animations, responding in real-time to a variety of dynamic user-supplied control signals.

Computational Efficiency · Diversity

SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D

1 code implementation · 4 Oct 2023 · Weiyu Li, Rui Chen, Xuelin Chen, Ping Tan

Therefore, we improve the consistency by aligning the 2D geometric priors in diffusion models with well-defined 3D shapes during the lifting, addressing the vast majority of the inconsistency problem.

3D Generation · Text to 3D

C$\cdot$ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters

no code implementations · 20 Sep 2023 · Zhiyang Dou, Xuelin Chen, Qingnan Fan, Taku Komura, Wenping Wang

We present C$\cdot$ASE, an efficient and effective framework that learns conditional Adversarial Skill Embeddings for physics-based characters.

Imitation Learning

LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation

1 code implementation · ICCV 2023 · YiHao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, Shenghua Gao

While previous methods are able to generate speech rhythm-synchronized gestures, the semantic context of the speech is generally lacking in the gesticulations.

Gesture Generation

Example-based Motion Synthesis via Generative Motion Matching

1 code implementation · 1 Jun 2023 · Weiyu Li, Xuelin Chen, Peizhuo Li, Olga Sorkine-Hornung, Baoquan Chen

At the heart of our generative framework lies the generative motion matching module, which utilizes the bidirectional visual similarity as a generative cost function to motion matching, and operates in a multi-stage framework to progressively refine a random guess using exemplar motion matches.

Motion Generation · Motion Synthesis

3D-Aware Object Goal Navigation via Simultaneous Exploration and Identification

no code implementations · CVPR 2023 · Jiazhao Zhang, Liu Dai, Fanpeng Meng, Qingnan Fan, Xuelin Chen, Kai Xu, He Wang

However, leveraging 3D scene representation can be prohibitively impractical for policy learning in this floor-level task, due to low sample efficiency and expensive computational cost.

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects

no code implementations · ICLR 2022 · Ruihai Wu, Yan Zhao, Kaichun Mo, Zizheng Guo, Yian Wang, Tianhao Wu, Qingnan Fan, Xuelin Chen, Leonidas Guibas, Hao Dong

In this paper, we propose object-centric actionable visual priors as a novel perception-interaction handshaking point: the perception system outputs more actionable guidance than kinematic structure estimation by predicting dense geometry-aware, interaction-aware, and task-aware visual action affordances and trajectory proposals.

MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras

no code implementations · 8 Jun 2021 · Xuelin Chen, Weiyu Li, Daniel Cohen-Or, Niloy J. Mitra, Baoquan Chen

In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models dynamic humans in stationary monocular cameras using a 4D continuous time-variant function.

Towards a Neural Graphics Pipeline for Controllable Image Generation

no code implementations · 18 Jun 2020 · Xuelin Chen, Daniel Cohen-Or, Baoquan Chen, Niloy J. Mitra

NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation.

Image Generation · Neural Rendering

Multimodal Shape Completion via Conditional Generative Adversarial Networks

1 code implementation · ECCV 2020 · Rundi Wu, Xuelin Chen, Yixin Zhuang, Baoquan Chen

Several deep learning methods have been proposed for completing partial data from shape acquisition setups, i.e., filling the regions that were missing in the shape.

Diversity

Unpaired Point Cloud Completion on Real Scans using Adversarial Training

2 code implementations · ICLR 2020 · Xuelin Chen, Baoquan Chen, Niloy J. Mitra

As 3D scanning solutions become increasingly popular, several deep learning setups have been developed geared towards the task of scan completion, i.e., plausibly filling in regions that were missed in the raw scans.

Point Cloud Completion
