Search Results for author: Zhengzhe Liu

Found 19 papers, 12 papers with code

CNS-Edit: 3D Shape Editing via Coupled Neural Shape Optimization

no code implementations · 4 Feb 2024 · Jingyu Hu, Ka-Hei Hui, Zhengzhe Liu, Hao Zhang, Chi-Wing Fu

First, we design the coupled neural shape (CNS) representation to support 3D shape editing.

Make-A-Shape: a Ten-Million-scale 3D Shape Model

no code implementations · 20 Jan 2024 · Ka-Hei Hui, Aditya Sanghi, Arianna Rampini, Kamal Rahimi Malekshan, Zhengzhe Liu, Hooman Shayani, Chi-Wing Fu

We then make the representation generatable by a diffusion model by devising a subband-coefficient packing scheme that lays out the representation in a low-resolution grid.
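To make the packing idea concrete, here is a minimal sketch of the general pattern: a one-level 3D Haar transform splits a volume into eight half-resolution subbands, which are then stacked as channels of one low-resolution grid that a grid-based diffusion model could generate. This is an illustrative stand-in, not the paper's exact wavelet-tree scheme.

```python
import numpy as np

def haar_decompose_3d(vol):
    """One-level 3D Haar transform: split a volume into 8 half-resolution
    subbands (illustrative stand-in for the paper's wavelet representation)."""
    def split(a, axis):
        lo = (a.take(range(0, a.shape[axis], 2), axis) +
              a.take(range(1, a.shape[axis], 2), axis)) / np.sqrt(2)
        hi = (a.take(range(0, a.shape[axis], 2), axis) -
              a.take(range(1, a.shape[axis], 2), axis)) / np.sqrt(2)
        return lo, hi
    bands = [vol]
    for axis in range(3):
        bands = [b for band in bands for b in split(band, axis)]
    return bands  # 8 subbands, each of shape (R/2, R/2, R/2)

def pack_subbands(vol):
    """Stack the 8 subbands as channels of a single low-resolution grid."""
    return np.stack(haar_decompose_3d(vol), axis=0)

vol = np.random.rand(32, 32, 32).astype(np.float32)
packed = pack_subbands(vol)
print(packed.shape)  # (8, 16, 16, 16)
```

Because the Haar filters here are orthonormal, the packed grid preserves the volume's energy, so no information is lost in the layout step.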

CLIPXPlore: Coupled CLIP and Shape Spaces for 3D Shape Exploration

no code implementations · 14 Jun 2023 · Jingyu Hu, Ka-Hei Hui, Zhengzhe Liu, Hao Zhang, Chi-Wing Fu

This paper presents CLIPXPlore, a new framework that leverages a vision-language model to guide the exploration of the 3D shape space.

Tasks: Attribute Language Modelling

You Only Need One Thing One Click: Self-Training for Weakly Supervised 3D Scene Understanding

1 code implementation · 26 Mar 2023 · Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu

3D scene understanding, e.g., point cloud semantic and instance segmentation, often requires large-scale annotated training data, but point-wise labels are tedious to prepare.

Tasks: 3D Instance Segmentation, Pseudo Label, +4
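The core self-training move — propagating one labeled click per object to similar unlabeled points — can be sketched as follows. This is our toy version with synthetic features and a hypothetical cosine-similarity threshold, not the paper's super-voxel/relation-network pipeline.

```python
import numpy as np

def expand_pseudo_labels(features, click_idx, click_labels, thresh=0.9):
    """Toy self-training step (our sketch, not the paper's exact pipeline):
    propagate the sparse 'one click per object' labels to unlabeled points
    whose features are sufficiently similar to a labeled point."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    labels = np.full(len(features), -1)            # -1 = unlabeled
    labels[click_idx] = click_labels
    sims = feats @ feats[click_idx].T              # cosine similarity to clicks
    best = sims.argmax(axis=1)
    confident = sims.max(axis=1) >= thresh
    labels[confident] = np.asarray(click_labels)[best[confident]]
    return labels

rng = np.random.default_rng(0)
# two tight feature clusters standing in for two objects in a point cloud
pts = np.concatenate([rng.normal(0, 0.05, (50, 8)) + 1,
                      rng.normal(0, 0.05, (50, 8)) - 1])
labels = expand_pseudo_labels(pts, click_idx=[0, 50], click_labels=[3, 7])
print((labels == 3).sum(), (labels == 7).sum())  # 50 50
```

In the real method, the expanded pseudo-labels would then supervise the next training round, and the loop repeats until label coverage stabilizes.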

DreamStone: Image as Stepping Stone for Text-Guided 3D Shape Generation

2 code implementations · 24 Mar 2023 · Zhengzhe Liu, Peng Dai, Ruihui Li, Xiaojuan Qi, Chi-Wing Fu

The core of our approach is a two-stage feature-space alignment strategy that leverages a pre-trained single-view reconstruction (SVR) model to map CLIP features to shapes. First, we map the CLIP image feature to the detail-rich 3D shape space of the SVR model; then, we map the CLIP text feature to the same 3D shape space by encouraging CLIP-consistency between rendered images and the input text.

Tasks: 3D Shape Generation

Neural Wavelet-domain Diffusion for 3D Shape Generation, Inversion, and Manipulation

no code implementations · 1 Feb 2023 · Jingyu Hu, Ka-Hei Hui, Zhengzhe Liu, Ruihui Li, Chi-Wing Fu

This paper presents a new approach for 3D shape generation, inversion, and manipulation through direct generative modeling of a continuous implicit representation in the wavelet domain.

Tasks: 3D Shape Generation

Command-Driven Articulated Object Understanding and Manipulation

no code implementations · CVPR 2023 · Ruihang Chu, Zhengzhe Liu, Xiaoqing Ye, Xiao Tan, Xiaojuan Qi, Chi-Wing Fu, Jiaya Jia

The key of Cart is to use predicted object structures to connect visual observations with user commands for effective manipulation.

Tasks: Motion Prediction, Object, +1

Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection

1 code implementation · 23 Nov 2022 · Tianyu Wang, Xiaowei Hu, Zhengzhe Liu, Chi-Wing Fu

Importantly, we formulate a lightweight plug-in S2D module and a point cloud reconstruction module in SDet to densify its 3D features, training SDet to produce 3D features that follow the dense 3D features of DDet.

Tasks: 3D Object Detection, Domain Adaptation, +2
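The densification objective boils down to feature mimicking: the sparse-input student's features are pushed toward the dense-input teacher's features. A minimal sketch of that loss, with random arrays standing in for detector feature maps (our simplification, not the S2D architecture itself):

```python
import numpy as np

def densify_loss(student_feat, teacher_feat):
    """Feature-mimicking objective in the spirit of S2D (our sketch):
    an L2 penalty pulling the sparse-point detector's 3D features
    toward the dense-point teacher's 3D features."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

rng = np.random.default_rng(0)
dense = rng.normal(size=(64, 32))     # teacher features from dense points
mask = rng.random((64, 1)) > 0.5      # sparsity: many points missing
sparse = dense * mask                 # student sees degraded features
print(densify_loss(sparse, dense), densify_loss(dense, dense))
```

At inference time only the student runs, so the detector keeps its speed while benefiting from dense-feature supervision seen during training.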

ISS: Image as Stepping Stone for Text-Guided 3D Shape Generation

2 code implementations · 9 Sep 2022 · Zhengzhe Liu, Peng Dai, Ruihui Li, Xiaojuan Qi, Chi-Wing Fu

Text-guided 3D shape generation remains challenging due to the absence of large paired text-shape data, the substantial semantic gap between these two modalities, and the structural complexity of 3D shapes.

Tasks: 3D Shape Generation

One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation

2 code implementations · CVPR 2021 · Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu

Point cloud semantic segmentation often requires large-scale annotated training data, but point-wise labels are tedious to prepare.

Tasks: 3D Semantic Segmentation, Relation Network, +1

3D-to-2D Distillation for Indoor Scene Parsing

1 code implementation · CVPR 2021 · Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu

First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during training, so the 2D network can infer without requiring 3D data.

Tasks: Scene Parsing, Semantic Parsing, +1
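The training-time supervision can be sketched in two steps: project per-point 3D-network features onto the image plane, then penalize the 2D branch for deviating from them on covered pixels. This is our simplified illustration (nearest-pixel scatter, plain L2), not the paper's exact projection or loss.

```python
import numpy as np

def project_point_features(points_uv, feats, hw):
    """Scatter per-point 3D-network features onto a 2D feature map at the
    pixels the points project to (zeros where no point lands)."""
    h, w = hw
    out = np.zeros((h, w, feats.shape[1]), dtype=feats.dtype)
    out[points_uv[:, 0], points_uv[:, 1]] = feats
    return out

def distill_loss(feat2d, feat3d_map, valid):
    """L2 distillation on pixels covered by projected points (our sketch
    of the training-time supervision; inference needs no 3D data)."""
    diff = (feat2d - feat3d_map) ** 2
    return float(diff[valid].mean())

rng = np.random.default_rng(0)
uv = np.stack([rng.integers(0, 8, 20), rng.integers(0, 8, 20)], axis=1)
f3d = rng.normal(size=(20, 4)).astype(np.float32)
fmap = project_point_features(uv, f3d, (8, 8))
valid = fmap.any(axis=-1)            # pixels actually covered by points
loss = distill_loss(rng.normal(size=(8, 8, 4)).astype(np.float32), fmap, valid)
print(loss)
```

Restricting the loss to covered pixels matters: elsewhere the projected map is just zero padding, and supervising against it would corrupt the 2D features.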

GeoNet++: Iterative Geometric Neural Network with Edge-Aware Refinement for Joint Depth and Surface Normal Estimation

2 code implementations · 13 Dec 2020 · Xiaojuan Qi, Zhengzhe Liu, Renjie Liao, Philip H. S. Torr, Raquel Urtasun, Jiaya Jia

Note that GeoNet++ is generic and can be used in other depth/normal prediction frameworks to improve the quality of 3D reconstruction and pixel-wise accuracy of depth and surface normals.

Tasks: 3D Reconstruction, Depth Estimation, +2
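One half of the depth-normal coupling that such frameworks iterate is the depth-to-normal step. Here is a simplified sketch: estimate surface normals from depth gradients under a local planarity assumption (an orthographic-style approximation of ours, not GeoNet++'s full module).

```python
import numpy as np

def normals_from_depth(depth, fx=1.0, fy=1.0):
    """Depth-to-normal step of the kind GeoNet++ iterates (simplified
    sketch): surface normals from depth gradients, with assumed focal
    scales fx, fy and a local planarity approximation."""
    dzdy, dzdx = np.gradient(depth)          # gradients along rows, cols
    n = np.stack([-dzdx * fx, -dzdy * fy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# A planar ramp: depth increases along x, so the normal tilts against +x
# and is identical at every pixel.
x = np.tile(np.arange(16, dtype=np.float64), (16, 1))
n = normals_from_depth(0.5 * x)
print(n[0, 0])  # constant over the whole plane
```

The companion normal-to-depth step integrates normals back into depth; enforcing agreement between the two is what gives the edge-aware refinement its geometric consistency.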

Global Texture Enhancement for Fake Face Detection in the Wild

1 code implementation · CVPR 2020 · Zhengzhe Liu, Xiaojuan Qi, Philip Torr

In this paper, we conduct an empirical study on fake/real faces and make two important observations: first, the texture of fake faces is substantially different from that of real ones; second, global texture statistics are more robust to image editing and transfer better to fake faces from different GANs and datasets.

Tasks: Face Detection, Fake Image Detection
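A classic global texture statistic is the Gram matrix of a feature map: channel-to-channel correlations pooled over all spatial positions. The sketch below (our illustration of the general idea, not the paper's exact network) shows why such a statistic is robust to local edits: shuffling spatial positions leaves it unchanged.

```python
import numpy as np

def gram_matrix(feat):
    """Global texture statistic: channel correlations pooled over all
    spatial positions, which discards layout but keeps texture."""
    c = feat.shape[0]
    f = feat.reshape(c, -1)
    return f @ f.T / f.shape[1]

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 32, 32))        # stand-in for conv features
g = gram_matrix(feat)

# Permuting spatial positions changes the image layout entirely but
# leaves the global texture statistic identical.
perm = rng.permutation(32 * 32)
g_shuffled = gram_matrix(feat.reshape(8, -1)[:, perm].reshape(8, 32, 32))
print(np.allclose(g, g_shuffled))  # True
```

A detector built on such pooled statistics therefore keys on texture distribution rather than pixel layout, which is what transfers across GANs and survives common image edits.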
