no code implementations • 23 Mar 2023 • Chong Bao, Yinda Zhang, Bangbang Yang, Tianxing Fan, Zesong Yang, Hujun Bao, Guofeng Zhang, Zhaopeng Cui
Despite the great success of 2D editing with user-friendly tools such as Photoshop, semantic strokes, or even text prompts, similar capabilities in 3D are still limited, either relying on 3D modeling skills or allowing editing within only a few categories.
no code implementations • 14 Mar 2023 • Yijin Li, Zhaoyang Huang, Shuo Chen, Xiaoyu Shi, Hongsheng Li, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
BlinkSim consists of a configurable rendering engine and a flexible engine for event data simulation.
no code implementations • 14 Mar 2023 • Junjie Ni, Yijin Li, Zhaoyang Huang, Hongsheng Li, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
However, estimating the scale difference between these patches is non-trivial, since it is determined by both the relative camera pose and the scene structure, and thus varies spatially over an image pair.
no code implementations • 7 Feb 2023 • Zihan Zhu, Songyou Peng, Viktor Larsson, Zhaopeng Cui, Martin R. Oswald, Andreas Geiger, Marc Pollefeys
Neural implicit representations have recently become popular in simultaneous localization and mapping (SLAM), especially in dense visual SLAM.
1 code implementation • CVPR 2022 • Heng Li, Zhaopeng Cui, Shuaicheng Liu, Ping Tan
Our graph optimizer iteratively refines the global camera rotations by minimizing each node's single rotation objective function.
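At a high level, this style of rotation averaging sweeps over nodes, replacing each camera's global rotation with the chordal mean of the rotations its neighbors predict for it. A minimal NumPy sketch of that idea (illustrative only, not the paper's actual optimizer; the gauge is fixed by anchoring camera 0):

```python
import numpy as np

def project_to_so3(M):
    # Closest rotation matrix to M in the Frobenius norm, via SVD.
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # enforce det(R) = +1
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def refine_rotations(R_global, rel, n_iters=50):
    """Iteratively re-estimate each camera's global rotation from its
    neighbors and the measured relative rotations.
    rel: dict (i, j) -> R_ij with R_j ~= R_ij @ R_i.
    Camera 0 is held fixed to pin down the gauge freedom."""
    for _ in range(n_iters):
        for i in range(1, len(R_global)):
            acc = np.zeros((3, 3))
            for (a, b), R_ab in rel.items():
                if a == i:    # edge (i, b) predicts R_i = R_ab^T @ R_b
                    acc += R_ab.T @ R_global[b]
                elif b == i:  # edge (a, i) predicts R_i = R_ab @ R_a
                    acc += R_ab @ R_global[a]
            if np.any(acc):
                R_global[i] = project_to_so3(acc)  # chordal L2 mean
    return R_global
```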
1 code implementation • 3 Oct 2022 • Guanglin Li, Yifeng Li, Zhichao Ye, Qihang Zhang, Tao Kong, Zhaopeng Cui, Guofeng Zhang
Then, by using a SIM(3)-invariant shape descriptor, we gracefully decouple the shape and pose of an object, thus supporting latent shape optimization of target objects in arbitrary poses.
Ranked #1 on 6D Pose Estimation using RGBD on REAL275
1 code implementation • 2 Oct 2022 • Weicai Ye, Shuo Chen, Chong Bao, Hujun Bao, Marc Pollefeys, Zhaopeng Cui, Guofeng Zhang
Existing inverse rendering methods combined with neural rendering [zhang2021physg, zhang2022modeling] can only perform editable novel view synthesis on object-specific scenes. We present intrinsic neural radiance fields, dubbed IntrinsicNeRF, which introduce intrinsic decomposition into NeRF-based [mildenhall2020nerf] neural rendering and extend its application to room-scale scenes.
no code implementations • 27 Sep 2022 • Yijin Li, Xinyang Liu, Wenqi Dong, Han Zhou, Hujun Bao, Guofeng Zhang, Yinda Zhang, Zhaopeng Cui
Light-weight time-of-flight (ToF) depth sensors are small, cheap, and low-energy, and have been massively deployed on mobile devices for purposes such as autofocus and obstacle detection.
no code implementations • 25 Jul 2022 • Bangbang Yang, Chong Bao, Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, Guofeng Zhang
Very recently, neural implicit rendering techniques have rapidly evolved and shown great advantages in novel view synthesis and 3D scene reconstruction.
no code implementations • 23 Jul 2022 • Zuoyue Li, Tianxing Fan, Zhenqiang Li, Zhaopeng Cui, Yoichi Sato, Marc Pollefeys, Martin R. Oswald
We introduce a scalable framework for novel view synthesis from RGB-D images with largely incomplete scene coverage.
1 code implementation • 18 Jul 2022 • Weicai Ye, Xingyuan Yu, Xinyue Lan, Yuhang Ming, Jinyu Li, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
We present a novel dual-flow representation of scene motion that decomposes the optical flow into a static flow field caused by the camera motion and another dynamic flow field caused by the objects' movements in the scene.
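The static part of such a decomposition can be computed in closed form from depth and relative pose by rigidly warping each pixel; the dynamic flow is then the residual of the total flow. A rough sketch of the rigid-warp computation (a generic formulation, not the paper's network; `K`, `R`, `t` denote the intrinsics and relative pose):

```python
import numpy as np

def static_flow(depth, K, R, t):
    """Per-pixel flow induced purely by camera motion: back-project each
    pixel with its depth, apply the relative pose (R, t), re-project."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)  # back-project
    pts2 = R @ pts + t.reshape(3, 1)                     # move to frame 2
    proj = K @ pts2
    proj = proj[:2] / proj[2]                            # re-project
    return proj.T.reshape(h, w, 2) - np.stack([xs, ys], axis=-1)

# Dynamic flow is the residual: flow_dynamic = flow_total - flow_static
```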
no code implementations • 14 Jul 2022 • Boming Zhao, Bangbang Yang, Zhenyang Li, Zuoyue Li, Guofeng Zhang, Jiashu Zhao, Dawei Yin, Zhaopeng Cui, Hujun Bao
Expanding an existing tourist photo from a partially captured scene to a full scene is one of the desired experiences for photography applications.
1 code implementation • 4 Jul 2022 • Weicai Ye, Xinyue Lan, Shuo Chen, Yuhang Ming, Xingyuan Yu, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
We present PVO, a novel panoptic visual odometry framework to achieve more comprehensive modeling of the scene motion, geometry, and panoptic segmentation information.
no code implementations • 13 Jun 2022 • Lei Wang, Linlin Ge, Shan Luo, Zihan Yan, Zhaopeng Cui, Jieqing Feng
Specifically, a novel structure is proposed, namely track-community, in which each community consists of a group of tracks and represents a local segment in the scene.
no code implementations • 5 May 2022 • Bangbang Yang, Yinda Zhang, Yijin Li, Zhaopeng Cui, Sean Fanello, Hujun Bao, Guofeng Zhang
We, as human beings, can understand and picture a familiar scene from arbitrary viewpoints given a single image, whereas this is still a grand challenge for computers.
no code implementations • 25 Mar 2022 • Xingrui Yang, Yuhang Ming, Zhaopeng Cui, Andrew Calway
It is well known that visual SLAM systems based on dense matching are locally accurate but are also susceptible to long-term drift and map corruption.
no code implementations • 2 Mar 2022 • Weicai Ye, Xinyue Lan, Ge Su, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
Existing methods are mainly based on the trained instance embedding to maintain consistent panoptic segmentation.
no code implementations • CVPR 2022 • Luwei Yang, Rakesh Shrestha, Wenbo Li, Shuaicheng Liu, Guofeng Zhang, Zhaopeng Cui, Ping Tan
Standard visual localization methods build an a priori 3D model of a scene, which is used to establish correspondences against the 2D keypoints in a query image.
1 code implementation • CVPR 2022 • Zihan Zhu, Songyou Peng, Viktor Larsson, Weiwei Xu, Hujun Bao, Zhaopeng Cui, Martin R. Oswald, Marc Pollefeys
Neural implicit representations have recently shown encouraging results in various domains, including promising progress in simultaneous localization and mapping (SLAM).
no code implementations • 30 Nov 2021 • Sandro Lombardi, Bangbang Yang, Tianxing Fan, Hujun Bao, Guofeng Zhang, Marc Pollefeys, Zhaopeng Cui
In this work, we propose a novel neural implicit representation for the human body, which is fully differentiable and optimizable with disentangled shape and pose latent spaces.
no code implementations • 13 Oct 2021 • Qingshan Xu, Martin R. Oswald, Wenbing Tao, Marc Pollefeys, Zhaopeng Cui
However, existing recurrent methods only model the local dependencies in the depth domain, which greatly limits the capability of capturing the global scene context along the depth dimension.
no code implementations • ICCV 2021 • Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao, Guofeng Zhang, Zhaopeng Cui
In this paper, we present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering with editing capability for a clustered and real-world scene.
1 code implementation • ICCV 2021 • Cheng Zhang, Zhaopeng Cui, Cai Chen, Shuaicheng Liu, Bing Zeng, Hujun Bao, Yinda Zhang
Panorama images have a much larger field-of-view and thus naturally encode enriched scene context compared to standard perspective images; however, this is not well exploited by previous scene understanding methods.
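The extra context comes directly from the image parameterization: every equirectangular panorama pixel maps to a viewing direction on the full sphere. A small sketch of that mapping (the axis convention here is an assumption, not taken from the paper):

```python
import numpy as np

def pano_pixel_to_ray(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit ray.
    Longitude spans [-pi, pi) across the width; latitude spans
    [pi/2, -pi/2] from top to bottom. y is up, z is forward."""
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])
```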
1 code implementation • ICCV 2021 • Shuang Song, Zhaopeng Cui, Rongjun Qin
Then the visibility information from multiple views is aggregated to generate a 3D mesh model by solving a visibility-aware optimization problem, in which a novel adaptive visibility weighting for surface determination is introduced to suppress lines of sight with large incident angles.
no code implementations • ICCV 2021 • Xingkui Wei, Zhengqing Chen, Yanwei Fu, Zhaopeng Cui, Yinda Zhang
We present a deep learning pipeline that leverages network self-prior to recover a full 3D model consisting of both a triangular mesh and a texture map from the colored 3D point cloud.
1 code implementation • CVPR 2021 • Luwei Yang, Heng Li, Jamal Ahmed Rahim, Zhaopeng Cui, Ping Tan
These methods can suffer from bad initializations due to the noisy spanning tree or outliers in input relative rotations.
no code implementations • ICCV 2021 • Yawei Li, He Chen, Zhaopeng Cui, Radu Timofte, Marc Pollefeys, Gregory Chirikjian, Luc van Gool
In this paper, we aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
1 code implementation • CVPR 2021 • Ziqian Bai, Zhaopeng Cui, Xiaoming Liu, Ping Tan
This paper presents a method for riggable 3D face reconstruction from monocular images, which jointly estimates a personalized face rig and per-image parameters including expressions, poses, and illuminations.
1 code implementation • CVPR 2021 • Cheng Zhang, Zhaopeng Cui, Yinda Zhang, Bing Zeng, Marc Pollefeys, Shuaicheng Liu
We not only propose an image-based local structured implicit network to improve the object shape estimation, but also refine the 3D object pose and scene layout via a novel implicit scene graph neural network that exploits the implicit local object features.
Ranked #1 on 3D Shape Reconstruction on Pix3D
1 code implementation • ICCV 2021 • Bing Wang, Changhao Chen, Zhaopeng Cui, Jie Qin, Chris Xiaoxuan Lu, Zhengdi Yu, Peijun Zhao, Zhen Dong, Fan Zhu, Niki Trigoni, Andrew Markham
Accurately describing and detecting 2D and 3D keypoints is crucial to establishing correspondences across images and point clouds.
no code implementations • 1 Jan 2021 • Yawei Li, He Chen, Zhaopeng Cui, Radu Timofte, Marc Pollefeys, Gregory Chirikjian, Luc van Gool
State-of-the-art GCNs adopt $K$-nearest neighbor (KNN) searches for local feature aggregation and feature extraction operations from layer to layer.
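For context, the KNN-based local aggregation this work targets looks roughly like the following in its naive brute-force form (a sketch of the standard pattern, not the paper's accelerated variant):

```python
import numpy as np

def knn_indices(points, k):
    """Brute-force K-nearest-neighbor search over an (N, d) point cloud,
    as used to build the local graph at each GCN layer."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]  # column 0 is the point itself

def aggregate_local(features, idx):
    """Max-pool each point's feature over its K neighbors."""
    return features[idx].max(axis=1)  # (N, K, C) -> (N, C)
```

The quadratic pairwise-distance matrix is exactly the cost that motivates more efficient neighbor search and aggregation schemes.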
no code implementations • ICCV 2021 • Yijin Li, Han Zhou, Bangbang Yang, Ye Zhang, Zhaopeng Cui, Hujun Bao, Guofeng Zhang
Different from traditional video cameras, event cameras capture an asynchronous event stream, in which each event encodes the pixel location, trigger time, and polarity of a brightness change.
no code implementations • ICCV 2021 • Zuoyue Li, Zhenqiang Li, Zhaopeng Cui, Rongjun Qin, Marc Pollefeys, Martin R. Oswald
For geometrical and temporal consistency, our approach explicitly creates a 3D point cloud representation of the scene and maintains dense 3D-2D correspondences across frames that reflect the geometric scene configuration inferred from the satellite view.
no code implementations • 26 Nov 2020 • Miao Liu, Dexin Yang, Yan Zhang, Zhaopeng Cui, James M. Rehg, Siyu Tang
We introduce a novel task of reconstructing a time series of second-person 3D human body meshes from monocular egocentric videos.
1 code implementation • CVPR 2020 • Feitong Tan, Hao Zhu, Zhaopeng Cui, Siyu Zhu, Marc Pollefeys, Ping Tan
Previous methods for estimating detailed human depth often require supervised training with "ground truth" depth data.
no code implementations • 18 Mar 2020 • Changhee Won, Hochang Seok, Zhaopeng Cui, Marc Pollefeys, Jongwoo Lim
In this paper, we present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras, which has a 360 degrees coverage of stereo observations of the environment.
1 code implementation • NeurIPS 2019 • Youwei Lyu, Zhaopeng Cui, Si Li, Marc Pollefeys, Boxin Shi
When we take photos through glass windows or doors, the transmitted background scene is often blended with undesirable reflection.
1 code implementation • CVPR 2020 • Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, Zhaopeng Cui
We propose a differentiable sphere tracing algorithm to bridge the gap between inverse graphics methods and the recently proposed deep learning based implicit signed distance function.
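Classic sphere tracing, which the differentiable version builds on, is a short loop: step along the ray by the current SDF value, which by the signed-distance property cannot overshoot the surface. A minimal sketch with an analytic SDF (illustrative only; the paper's contribution is making this loop differentiable for neural SDFs):

```python
import numpy as np

def sphere_trace(sdf, origin, direction, n_steps=64, eps=1e-4):
    """March a ray toward the zero level set of an SDF.
    direction is assumed to be unit-length."""
    t = 0.0
    for _ in range(n_steps):
        p = origin + t * direction
        d = sdf(p)
        if abs(d) < eps:
            break
        t += d  # the SDF value is a safe step length
    return origin + t * direction

# Analytic example: the unit sphere centered at the origin.
sphere_sdf = lambda p: np.linalg.norm(p) - 1.0
```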
no code implementations • ICCV 2019 • Zhaopeng Cui, Viktor Larsson, Marc Pollefeys
In this paper we consider the problem of relative pose estimation from two images with per-pixel polarimetric information.
1 code implementation • CVPR 2019 • Jiaxiong Qiu, Zhaopeng Cui, Yinda Zhang, Xingdi Zhang, Shuaicheng Liu, Bing Zeng, Marc Pollefeys
In this paper, we propose a deep learning architecture that produces accurate dense depth for the outdoor scene from a single color image and a sparse depth.
no code implementations • 17 Sep 2018 • Marcel Geppert, Peidong Liu, Zhaopeng Cui, Marc Pollefeys, Torsten Sattler
This results in a system that provides reliable, drift-free pose estimates for high-speed autonomous driving.
no code implementations • CVPR 2018 • Luwei Yang, Feitong Tan, Ao Li, Zhaopeng Cui, Yasutaka Furukawa, Ping Tan
This paper presents a novel polarimetric dense monocular SLAM (PDMS) algorithm based on a polarization camera.
no code implementations • CVPR 2017 • Zhaopeng Cui, Jinwei Gu, Boxin Shi, Ping Tan, Jan Kautz
Multi-view stereo relies on feature correspondences for 3D reconstruction, and thus is fundamentally flawed in dealing with featureless scenes.
no code implementations • ICCV 2015 • Zhaopeng Cui, Ping Tan
Depth images help to upgrade an essential matrix to a similarity transformation, which can determine the scale of relative translation.
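Once correspondences are back-projected with depth in both views, the scale recovery is a one-line least-squares fit: the points satisfy X2 = R X1 + s t with only s unknown. A sketch of that step (variable names are ours, not the paper's):

```python
import numpy as np

def translation_scale(X1, X2, R, t_unit):
    """Recover the translation scale s left undetermined by the essential
    matrix, given depth-back-projected 3D points (one row per match)
    satisfying X2 ~= R @ X1 + s * t_unit."""
    residual = X2 - (R @ X1.T).T   # each row should equal s * t_unit
    # Least-squares s, averaged over all correspondences:
    return float(np.mean(residual @ t_unit) / (t_unit @ t_unit))
```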
no code implementations • 6 Mar 2015 • Zhaopeng Cui, Nianjuan Jiang, Chengzhou Tang, Ping Tan
This paper derives a novel linear position constraint for cameras seeing a common scene point, which leads to a direct linear method for global camera translation estimation.
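The flavor of such a linear formulation: if camera i at center c_i observes point X_j along a world-frame ray direction d, then [d]x (X_j - c_i) = 0, which is linear in both c_i and X_j; stacking these rows and pinning the gauge gives all positions in one least-squares solve. A toy sketch of this general idea (not the paper's specific constraint; rotations are assumed known and folded into d):

```python
import numpy as np

def skew(v):
    # Cross-product matrix: skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def solve_positions(obs, n_cams, n_pts, anchors):
    """Solve camera centers and points from ray constraints
    skew(d) @ (X_j - c_i) = 0, one per observation (i, j, d).
    anchors: (flat_index, value) rows that fix the gauge (global
    translation and scale). Unknowns are stacked as
    [c_0, ..., c_{n_cams-1}, X_0, ..., X_{n_pts-1}]."""
    n = 3 * (n_cams + n_pts)
    rows, rhs = [], []
    for i, j, d in obs:
        S = skew(np.asarray(d, float) / np.linalg.norm(d))
        A = np.zeros((3, n))
        A[:, 3 * i:3 * i + 3] = -S                 # -[d]x c_i
        c = 3 * (n_cams + j)
        A[:, c:c + 3] = S                          # +[d]x X_j
        rows.append(A)
        rhs.append(np.zeros(3))
    for idx, val in anchors:                       # gauge-fixing rows
        A = np.zeros((1, n))
        A[0, idx] = 1.0
        rows.append(A)
        rhs.append(np.array([val]))
    u = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)[0]
    return (u[:3 * n_cams].reshape(n_cams, 3),
            u[3 * n_cams:].reshape(n_pts, 3))
```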