no code implementations • 15 Aug 2024 • Ce Chen, Shaoli Huang, Xuelin Chen, Guangyi Chen, Xiaoguang Han, Kun Zhang, Mingming Gong
The primary challenges of our mesh-based framework are stably generating a detailed mesh that aligns with the text prompt while directly driving it and preserving surface continuity.
1 code implementation • 23 May 2024 • Weiyu Li, Jiarui Liu, Rui Chen, Yixun Liang, Xuelin Chen, Ping Tan, Xiaoxiao Long
We present a novel generative 3D modeling system, coined CraftsMan, which can generate high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and detailed surfaces, and, notably, allows for refining the geometry in an interactive manner.
1 code implementation • 23 Apr 2024 • Rui Chen, Mingyi Shi, Shaoli Huang, Ping Tan, Taku Komura, Xuelin Chen
We present a novel character control framework that effectively utilizes motion diffusion probabilistic models to generate high-quality and diverse character animations, responding in real time to a variety of dynamic, user-supplied control signals.
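The entry above describes sampling character motion from a conditional diffusion model. A toy reverse-diffusion loop conveys the idea; the denoiser below is a hypothetical stand-in for the paper's learned network, and all names and sizes are illustrative, not the authors' code:

```python
import numpy as np

def toy_denoiser(x, t, control):
    """Placeholder for a learned motion denoiser: here it simply pulls
    the noisy sample halfway toward the control target. (Hypothetical
    stand-in for the paper's trained network.)"""
    return x + 0.5 * (control - x)

def sample_motion(control, steps=50, dim=4, seed=0):
    """Toy reverse-diffusion sampling: start from Gaussian noise and
    alternately denoise and re-inject a decaying amount of noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    for t in range(steps, 0, -1):
        x = toy_denoiser(x, t, control)
        x += (t / steps) * 0.1 * rng.standard_normal(dim)  # decaying noise
    return x

# Condition the sample on a (made-up) 4-DoF control target.
pose = sample_motion(np.array([1.0, 0.0, -1.0, 0.5]))
```

With this placeholder denoiser the sample converges toward the control target, mirroring how conditioning signals steer the real model's output.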
1 code implementation • 4 Oct 2023 • Weiyu Li, Rui Chen, Xuelin Chen, Ping Tan
Therefore, we improve consistency by aligning the 2D geometric priors in diffusion models with well-defined 3D shapes during lifting, which addresses most of the inconsistency problem.
no code implementations • 20 Sep 2023 • Zhiyang Dou, Xuelin Chen, Qingnan Fan, Taku Komura, Wenping Wang
We present C$\cdot$ASE, an efficient and effective framework that learns conditional Adversarial Skill Embeddings for physics-based characters.
1 code implementation • ICCV 2023 • YiHao Zhi, Xiaodong Cun, Xuelin Chen, Xi Shen, Wen Guo, Shaoli Huang, Shenghua Gao
While previous methods can generate speech-rhythm-synchronized gestures, the resulting gesticulations generally lack the semantic context of the speech.
1 code implementation • 1 Jun 2023 • Weiyu Li, Xuelin Chen, Peizhuo Li, Olga Sorkine-Hornung, Baoquan Chen
At the heart of our generative framework lies the generative motion matching module, which utilizes the bidirectional visual similarity as a generative cost function to motion matching, and operates in a multi-stage framework to progressively refine a random guess using exemplar motion matches.
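The motion-matching idea above can be sketched in a few lines: progressively refine a random guess by snapping each window of it to the nearest exemplar window. This is a one-directional simplification of the paper's bidirectional similarity cost, and all names and sizes here are illustrative:

```python
import numpy as np

def extract_windows(motion, w):
    """All length-w windows of a (T, D) motion sequence."""
    return np.stack([motion[i:i + w] for i in range(len(motion) - w + 1)])

def motion_match(guess, exemplar, w=8, stages=3):
    """Toy generative motion matching: over several stages, replace each
    window of the current guess with its nearest exemplar window under a
    squared-error cost (a one-directional stand-in for the paper's
    bidirectional similarity)."""
    patches = extract_windows(exemplar, w)
    out = guess.copy()
    for _ in range(stages):
        for s in range(0, len(out) - w + 1, w):
            d = ((patches - out[s:s + w]) ** 2).sum(axis=(1, 2))
            out[s:s + w] = patches[np.argmin(d)]
    return out

# Usage: synthesize a 24-frame motion from a 48-frame, 2-DoF exemplar.
exemplar = np.stack(
    [np.sin(np.linspace(0, 4 * np.pi, 48) + p) for p in (0.0, 1.0)], axis=1)
synth = motion_match(np.random.default_rng(1).standard_normal((24, 2)), exemplar)
```

After refinement, every window of the synthesized motion is an exact exemplar window, while the window ordering can differ from the exemplar, which is what gives the variation.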
no code implementations • CVPR 2023 • Weiyu Li, Xuelin Chen, Jue Wang, Baoquan Chen
We target a 3D generative model for general natural scenes, which are typically unique and intricate.
no code implementations • CVPR 2023 • Jiazhao Zhang, Liu Dai, Fanpeng Meng, Qingnan Fan, Xuelin Chen, Kai Xu, He Wang
However, leveraging a 3D scene representation can be prohibitively impractical for policy learning in this floor-level task, due to low sample efficiency and high computational cost.
no code implementations • 3 Oct 2022 • Yujie Wang, Xuelin Chen, Baoquan Chen
We present a 3D generative model for general natural scenes.
no code implementations • ICLR 2022 • Ruihai Wu, Yan Zhao, Kaichun Mo, Zizheng Guo, Yian Wang, Tianhao Wu, Qingnan Fan, Xuelin Chen, Leonidas Guibas, Hao Dong
In this paper, we propose object-centric actionable visual priors as a novel perception-interaction handshaking point: rather than estimating kinematic structure, the perception system outputs more actionable guidance by predicting dense geometry-aware, interaction-aware, and task-aware visual action affordances and trajectory proposals.
no code implementations • 8 Jun 2021 • Xuelin Chen, Weiyu Li, Daniel Cohen-Or, Niloy J. Mitra, Baoquan Chen
In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models dynamic humans captured by stationary monocular cameras using a 4D continuous time-variant function.
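A "4D continuous time-variant function" of the kind mentioned above maps a spatial point and a time to a quantity such as a displacement. A minimal sketch with a tiny random network (untrained, for interface shape only; not the paper's architecture):

```python
import numpy as np

# Tiny random two-layer network over (x, y, z, t) inputs; weights are
# untrained and purely illustrative.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 32)) * 0.5, np.zeros(32)
W2 = rng.standard_normal((32, 3)) * 0.5

def motion_field(xyz, t):
    """Toy 4D continuous time-variant function: (x, y, z, t) -> a 3D
    displacement vector, the kind of continuous mapping MoCo-Flow
    represents with a learned network."""
    h = np.tanh(np.concatenate([xyz, [t]]) @ W1 + b1)
    return h @ W2

disp = motion_field(np.array([0.1, 0.2, 0.3]), 0.5)
```

Because the function is continuous in both space and time, it can be queried at arbitrary points and moments, unlike a per-frame discrete representation.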
no code implementations • 18 Jun 2020 • Xuelin Chen, Daniel Cohen-Or, Baoquan Chen, Niloy J. Mitra
NGP decomposes the image into a set of interpretable appearance feature maps, uncovering direct control handles for controllable image generation.
1 code implementation • ECCV 2020 • Rundi Wu, Xuelin Chen, Yixin Zhuang, Baoquan Chen
Several deep learning methods have been proposed for completing partial data from shape acquisition setups, i.e., filling in the regions missing from the shape.
2 code implementations • ICLR 2020 • Xuelin Chen, Baoquan Chen, Niloy J. Mitra
As 3D scanning solutions become increasingly popular, several deep learning setups have been developed for the task of scan completion, i.e., plausibly filling in regions that were missed in the raw scans.
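Scan-completion results in setups like the two entries above are commonly scored with the Chamfer distance between the completed and reference point clouds. A minimal NumPy version of this standard metric (my own sketch, not the papers' code):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and
    b (M, 3): mean squared nearest-neighbor distance in both
    directions, a standard metric for scoring shape completion."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

The distance is zero exactly when every point in each cloud coincides with a point in the other, and grows as the completed shape deviates from the reference.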