Search Results for author: Zhenpei Yang

Found 11 papers, 7 papers with code

Extreme Relative Pose Estimation for RGB-D Scans via Scene Completion

1 code implementation · CVPR 2019 · Zhenpei Yang, Jeffrey Z. Pan, Linjie Luo, Xiaowei Zhou, Kristen Grauman, Qi-Xing Huang

In particular, instead of only performing scene completion from each individual scan, our approach alternates between relative pose estimation and scene completion.

Pose Estimation
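
A minimal sketch of the alternation described in the abstract above, assuming two scans given as corresponding Nx3 point arrays. The function names, the identity "completion" placeholder, and the closed-form Kabsch alignment are illustrative assumptions, not the paper's learned scene-completion or pose modules.

```python
import numpy as np

def complete_scan(points):
    # Placeholder "completion": the paper uses a learned scene-completion
    # network; here we simply return the scan unchanged.
    return points

def estimate_relative_pose(src, dst):
    # Closed-form rigid alignment (Kabsch) between corresponding point sets,
    # standing in for the paper's learned relative-pose module.
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def alternate(scan_a, scan_b, iters=3):
    # Alternate between completing each scan and re-estimating the pose.
    # With a real completion network the completed geometry (and hence the
    # pose) would improve across iterations; with the identity placeholder
    # the loop converges immediately.
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        comp_a = complete_scan(scan_a)
        comp_b = complete_scan(scan_b)
        R, t = estimate_relative_pose(comp_a, comp_b)
    return R, t
```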

HPNet: Deep Primitive Segmentation Using Hybrid Representations

1 code implementation · ICCV 2021 · Siming Yan, Zhenpei Yang, Chongyang Ma, Haibin Huang, Etienne Vouga, QiXing Huang

This paper introduces HPNet, a novel deep-learning approach for segmenting a 3D shape represented as a point cloud into primitive patches.

Clustering · Segmentation

FvOR: Robust Joint Shape and Pose Optimization for Few-view Object Reconstruction

1 code implementation · CVPR 2022 · Zhenpei Yang, Zhile Ren, Miguel Angel Bautista, Zaiwei Zhang, Qi Shan, QiXing Huang

In this paper, we present FvOR, a learning-based object reconstruction method that predicts accurate 3D models given a few images with noisy input poses.

Object Reconstruction · Pose Estimation

Extreme Relative Pose Network under Hybrid Representations

1 code implementation · CVPR 2020 · Zhenpei Yang, Siming Yan, Qi-Xing Huang

In this paper, we introduce a novel RGB-D based relative pose estimation approach that is suitable for scans with small or no overlap and can output multiple relative poses.

Pose Estimation · Translation

Deep Generative Modeling for Scene Synthesis via Hybrid Representations

no code implementations · 6 Aug 2018 · Zaiwei Zhang, Zhenpei Yang, Chongyang Ma, Linjie Luo, Alexander Huth, Etienne Vouga, Qi-Xing Huang

We show a principled way to train this model by combining discriminator losses for both a 3D object arrangement representation and a 2D image-based representation.

LSTM-based Whisper Detection

no code implementations · 20 Sep 2018 · Zeynab Raeesy, Kellen Gillespie, Zhenpei Yang, Chengyuan Ma, Thomas Drugman, Jiacheng Gu, Roland Maas, Ariya Rastrow, Björn Hoffmeister

We prove that, with enough data, the LSTM model is indeed as capable of learning whisper characteristics from LFBE features alone as a simpler MLP model that uses both LFBE features and features engineered for separating whisper and normal speech.

Benchmarking
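
A minimal sketch of an LSTM classifier over LFBE frames in the spirit of the abstract above, written in PyTorch. The feature dimension, hidden size, and last-frame pooling are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class WhisperLSTM(nn.Module):
    def __init__(self, n_lfbe=64, hidden=128):
        super().__init__()
        # Single-layer LSTM over per-frame LFBE vectors.
        self.lstm = nn.LSTM(n_lfbe, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # whisper vs. normal-speech logit

    def forward(self, lfbe):               # lfbe: (batch, frames, n_lfbe)
        out, _ = self.lstm(lfbe)
        return self.head(out[:, -1])       # pool by taking the last frame

# Usage on a random batch of 100-frame utterances (illustrative only).
model = WhisperLSTM()
logits = model(torch.randn(8, 100, 64))
probs = torch.sigmoid(logits)
```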
