Search Results for author: Qianqian Wang

Found 20 papers, 9 papers with code

DynIBaR: Neural Dynamic Image-Based Rendering

no code implementations 20 Nov 2022 Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely

Our system retains the advantages of prior methods in its ability to model complex scenes and view-dependent effects, but also enables synthesizing photo-realistic novel views from long videos featuring complex scene dynamics with unconstrained camera trajectories.

InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images

1 code implementation 22 Jul 2022 Zhengqi Li, Qianqian Wang, Noah Snavely, Angjoo Kanazawa

We present a method for learning to generate unbounded flythrough videos of natural scenes starting from a single view, where this capability is learned from a collection of single photographs, without requiring camera poses or even multiple views of each scene.

Perpetual View Generation

3D Moments from Near-Duplicate Photos

no code implementations CVPR 2022 Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless, Janne Kontkanen

As output, we produce a video that smoothly interpolates the scene motion from the first photo to the second, while also producing camera motion with parallax that gives a heightened sense of 3D.

Motion Interpolation

Neural 3D Scene Reconstruction with the Manhattan-world Assumption

1 code implementation CVPR 2022 Haoyu Guo, Sida Peng, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, Xiaowei Zhou

Based on the Manhattan-world assumption, planar constraints are employed to regularize the geometry in floor and wall regions predicted by a 2D semantic segmentation network.

2D Semantic Segmentation, 3D Reconstruction +2
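
As a rough illustration of the planar constraint described above (a minimal sketch, not the paper's implementation), floor normals can be pushed toward the up axis and wall normals toward the horizontal plane, using labels from the 2D semantic segmentation network; the function name and label convention below are hypothetical.

import numpy as np

def manhattan_normal_loss(normals, labels, up=np.array([0.0, 0.0, 1.0])):
    # Toy planar-constraint loss: floor normals should align with the up axis,
    # wall normals should lie in the horizontal plane.
    # normals: (N, 3) surface normals from the reconstruction model
    # labels:  (N,) semantic labels, assumed 1 = floor, 2 = wall, 0 = other
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_up = normals @ up                    # cosine between each normal and the up axis
    floor, wall = labels == 1, labels == 2
    floor_loss = np.mean(1.0 - np.abs(cos_up[floor])) if floor.any() else 0.0
    wall_loss = np.mean(np.abs(cos_up[wall])) if wall.any() else 0.0
    return floor_loss + wall_loss

normals = np.random.randn(100, 3)            # stand-in predicted normals
labels = np.random.randint(0, 3, size=100)   # stand-in semantic labels
loss = manhattan_normal_loss(normals, labels)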

Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos

1 code implementation 15 Mar 2022 Sida Peng, Zhen Xu, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Hujun Bao, Xiaowei Zhou

Some recent works have proposed to decompose a non-rigidly deforming scene into a canonical neural radiance field and a set of deformation fields that map observation-space points to the canonical space, thereby enabling them to learn the dynamic scene from images.
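
A minimal sketch of the canonical-plus-deformation decomposition described above, with hypothetical stand-in functions in place of the learned MLPs (not the authors' model):

import numpy as np

def deformation_field(x, t):
    # Hypothetical stand-in for a learned deformation MLP: maps an
    # observation-space point x at time t into the canonical space.
    return x + 0.05 * np.sin(t) * np.ones_like(x)

def canonical_radiance_field(x_canonical, view_dir):
    # Hypothetical stand-in for the canonical radiance field; returns (rgb, density).
    # The view direction is ignored in this toy version.
    density = float(np.exp(-np.sum(x_canonical ** 2)))
    rgb = 0.5 * (np.tanh(x_canonical) + 1.0)
    return rgb, density

def query_dynamic_scene(x, t, view_dir):
    # Observation-space point -> canonical space -> color and density.
    x_canonical = deformation_field(x, t)
    return canonical_radiance_field(x_canonical, view_dir)

rgb, sigma = query_dynamic_scene(np.array([0.1, 0.2, 0.3]), t=1.5,
                                 view_dir=np.array([0.0, 0.0, 1.0]))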

Ultrasonic Backscatter Communication for Implantable Medical Devices

no code implementations 14 Feb 2022 Qianqian Wang, Quansheng Guan, Julian Cheng, Yuankun Tang

The tag backscatters the pulses based on the piezoelectric effect of a piezo transducer.

TAG

Neural Rays for Occlusion-aware Image-based Rendering

1 code implementation CVPR 2022 Yuan Liu, Sida Peng, Lingjie Liu, Qianqian Wang, Peng Wang, Christian Theobalt, Xiaowei Zhou, Wenping Wang

For a 3D point that is occluded in some of the source views, these generalization methods will include inconsistent image features from the invisible views, which interfere with the construction of the radiance field.

Neural Rendering, Novel View Synthesis +1
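
A minimal sketch of the occlusion-aware idea, assuming each source view carries a visibility estimate for the queried 3D point; the function and the mean/variance pooling below are illustrative, not the paper's formulation.

import numpy as np

def aggregate_features(view_features, visibility):
    # Toy occlusion-aware aggregation for one 3D point.
    # view_features: (V, C) image features sampled from V source views
    # visibility:    (V,) estimated probability that the point is visible in each view
    # Occluded views receive little weight, so their inconsistent features do not
    # dominate the descriptor used to construct the radiance field.
    w = visibility / (visibility.sum() + 1e-8)
    mean = (w[:, None] * view_features).sum(axis=0)
    var = (w[:, None] * (view_features - mean) ** 2).sum(axis=0)
    return np.concatenate([mean, var])       # mean + variance as a simple consistency cue

feats = np.random.rand(8, 16)                # 8 source views, 16-dim features
vis = np.array([1.0, 1.0, 1.0, 0.1, 1.0, 0.05, 1.0, 1.0])
descriptor = aggregate_features(feats, vis)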

Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies

1 code implementation ICCV 2021 Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Xiaowei Zhou, Hujun Bao

Moreover, the learned blend weight fields can be combined with input skeletal motions to generate new deformation fields to animate the human model.
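
As a toy illustration of combining a blend weight field with new skeletal motions, here is a linear-blend-skinning sketch with made-up weights and bone transforms (not the paper's exact formulation):

import numpy as np

def animate_point(x_canonical, blend_weights, bone_transforms):
    # Toy linear-blend-skinning step: a blend weight field gives, for a canonical
    # point, weights over K bones; combining them with a new skeletal pose
    # (4x4 bone transforms) deforms the point to that pose.
    # blend_weights:   (K,) weights summing to 1
    # bone_transforms: (K, 4, 4) rigid transforms for the target pose
    x_h = np.append(x_canonical, 1.0)                        # homogeneous coordinates
    blended = np.einsum('k,kij->ij', blend_weights, bone_transforms)
    return (blended @ x_h)[:3]

K = 3
w = np.array([0.7, 0.2, 0.1])                                # made-up blend weights
T = np.tile(np.eye(4), (K, 1, 1))
T[:, :3, 3] = [[0.1, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.3]]   # per-bone translations
x_posed = animate_point(np.array([0.0, 0.5, 1.0]), w, T)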

PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting

no code implementations CVPR 2021 Kai Zhang, Fujun Luan, Qianqian Wang, Kavita Bala, Noah Snavely

We present PhySG, an end-to-end inverse rendering pipeline that includes a fully differentiable renderer and can reconstruct geometry, materials, and illumination from scratch from a set of RGB input images.
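
The spherical Gaussian building block can be sketched as follows; the lobe parameters are invented for illustration, and the full PhySG pipeline additionally models geometry and BRDFs.

import numpy as np

def spherical_gaussian(v, lobe_axis, sharpness, amplitude):
    # Evaluate G(v) = amplitude * exp(sharpness * (v . axis - 1)) for a unit direction v.
    v = v / np.linalg.norm(v)
    lobe_axis = lobe_axis / np.linalg.norm(lobe_axis)
    return amplitude * np.exp(sharpness * (v @ lobe_axis - 1.0))

def environment_radiance(v, lobes):
    # A toy environment map represented as a small mixture of lobes.
    return sum(spherical_gaussian(v, *lobe) for lobe in lobes)

lobes = [
    (np.array([0.0, 0.0, 1.0]), 30.0, np.array([1.0, 0.9, 0.8])),   # narrow warm light from above
    (np.array([1.0, 0.0, 0.0]), 5.0,  np.array([0.1, 0.2, 0.4])),   # broad blue fill
]
radiance = environment_radiance(np.array([0.2, 0.1, 0.97]), lobes)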

IBRNet: Learning Multi-View Image-Based Rendering

1 code implementation CVPR 2021 Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo Martin-Brualla, Noah Snavely, Thomas Funkhouser

Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes.

Neural Rendering, Novel View Synthesis
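
A minimal sketch of the generic view-interpolation idea, assuming per-view colors and image features have already been sampled for one point on a target ray; the consensus-based weighting and density heuristic below are illustrative stand-ins for IBRNet's learned aggregation network.

import numpy as np

def interpolate_view_sample(src_colors, src_features):
    # Toy view-interpolation step for one sample on a target ray.
    # src_colors:   (V, 3) colors observed in the source views
    # src_features: (V, C) image features sampled from the source views
    mean_feat = src_features.mean(axis=0)
    # Views whose features agree with the consensus receive larger blending weights.
    scores = -np.sum((src_features - mean_feat) ** 2, axis=1)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    color = weights @ src_colors
    # Low feature variance across views loosely suggests a surface (higher density).
    density = 1.0 / (1.0 + np.mean((src_features - mean_feat) ** 2))
    return color, density

colors, feats = np.random.rand(8, 3), np.random.rand(8, 32)
rgb, sigma = interpolate_view_sample(colors, feats)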

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans

3 code implementations CVPR 2021 Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou

To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated.

Novel View Synthesis, Representation Learning
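
As a toy illustration of latent codes anchored to a deformable mesh, the sketch below uses hypothetical shapes and a simple nearest-vertex gather rather than the paper's sparse-convolutional code diffusion:

import numpy as np

def gather_latent_code(query_point, posed_vertices, vertex_codes, k=4):
    # The same per-vertex codes are shared by all frames; each frame only re-poses
    # the mesh vertices. A query point is conditioned on the codes of its k nearest
    # posed vertices, which is how observations from different frames are integrated.
    # posed_vertices: (N, 3) mesh vertices for the current frame
    # vertex_codes:   (N, C) latent codes anchored to the mesh (frame-independent)
    d = np.linalg.norm(posed_vertices - query_point, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-6)
    w /= w.sum()
    return w @ vertex_codes[nn]              # distance-weighted blend of nearby codes

verts = np.random.rand(100, 3)               # stand-in for a posed body mesh
codes = np.random.rand(100, 16)              # latent codes shared across frames
z = gather_latent_code(np.array([0.5, 0.5, 0.5]), verts, codes)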

Designing Efficient Metal Contacts to Two-Dimensional Semiconductors MoSi$_2$N$_4$ and WSi$_2$N$_4$ Monolayers

no code implementations 14 Dec 2020 Qianqian Wang, Liemao Cao, Shi-Jun Liang, Weikang Wu, Guangzhao Wang, Ching Hua Lee, Wee Liat Ong, Hui Ying Yang, Lay Kee Ang, Shengyuan A. Yang, Yee Sin Ang

Our findings reveal the potential of MoSi$_2$N$_4$ and WSi$_2$N$_4$ monolayers as a novel 2D material platform for designing high-performance and energy-efficient 2D nanodevices.

Mesoscale and Nanoscale Physics, Materials Science, Applied Physics, Computational Physics

Hidden Footprints: Learning Contextual Walkability from 3D Human Trails

no code implementations ECCV 2020 Jin Sun, Hadar Averbuch-Elor, Qianqian Wang, Noah Snavely

Predicting where people can walk in a scene is important for many tasks, including autonomous driving systems and human behavior analysis.

Autonomous Driving

Learning Feature Descriptors using Camera Pose Supervision

1 code implementation ECCV 2020 Qianqian Wang, Xiaowei Zhou, Bharath Hariharan, Noah Snavely

Recent research on learned visual descriptors has shown promising improvements in correspondence estimation, a key component of many 3D vision tasks.

Generative Partial Multi-View Clustering

no code implementations 29 Mar 2020 Qianqian Wang, Zhengming Ding, Zhiqiang Tao, Quanxue Gao, Yun Fu

With the rapid development of data collection sources and feature extraction methods, multi-view data have become easy to obtain and have received increasing research attention in recent years. Among the resulting approaches, multi-view clustering (MVC) forms a mainstream research direction and is widely used in data analysis.

Imputation

Lifelong Spectral Clustering

no code implementations 27 Nov 2019 Gan Sun, Yang Cong, Qianqian Wang, Jun Li, Yun Fu

As a new spectral clustering task arrives, L2SC first transfers knowledge from both the basis library and the feature library to obtain an encoding matrix, and then redefines the library bases over time to maximize performance across all of the clustering tasks.
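
A rough sketch of the library idea under simplifying assumptions (ridge-regression encoding and a single gradient step for the library refresh; the actual L2SC updates differ):

import numpy as np

def encode_against_library(E_new, B, ridge=1e-3):
    # Encode a new task's spectral embedding E_new (n x d) over a shared basis
    # library B (k x d) by ridge-regularized least squares:
    #   min_A ||E_new - A @ B||^2 + ridge * ||A||^2
    return E_new @ B.T @ np.linalg.inv(B @ B.T + ridge * np.eye(B.shape[0]))

def refresh_library(B, E_new, A, lr=0.1):
    # Nudge the library toward also reconstructing the new task (a crude stand-in
    # for refining the library over time).
    grad = -A.T @ (E_new - A @ B)
    return B - lr * grad

B = np.random.rand(8, 16)          # basis library: 8 atoms of dimension 16
E = np.random.rand(100, 16)        # spectral embedding of a new clustering task
A = encode_against_library(E, B)   # encoding matrix reusing earlier knowledge
B = refresh_library(B, E, A)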

Visual Tactile Fusion Object Clustering

no code implementations 21 Nov 2019 Tao Zhang, Yang Cong, Gan Sun, Qianqian Wang, Zhengming Ding

To effectively exploit both the visual and tactile modalities for object clustering, in this paper we propose a deep Auto-Encoder-like Non-negative Matrix Factorization framework for visual-tactile fusion clustering.

Model Optimization
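
A minimal sketch of fusing two modalities through a shared factorization, using classic multiplicative NMF updates rather than the paper's auto-encoder-like deep variant; all shapes and data are made up.

import numpy as np

def fused_nmf(X_visual, X_tactile, k=10, iters=200, eps=1e-9):
    # Factor each modality as X_m ~= W_m @ H with a shared coefficient matrix H,
    # so the clustering structure in H fuses visual and tactile information.
    rng = np.random.default_rng(0)
    (d1, n), (d2, _) = X_visual.shape, X_tactile.shape
    W1, W2, H = rng.random((d1, k)), rng.random((d2, k)), rng.random((k, n))
    for _ in range(iters):
        W1 *= (X_visual @ H.T) / (W1 @ H @ H.T + eps)
        W2 *= (X_tactile @ H.T) / (W2 @ H @ H.T + eps)
        H *= (W1.T @ X_visual + W2.T @ X_tactile) / ((W1.T @ W1 + W2.T @ W2) @ H + eps)
    return W1, W2, H

Xv = np.abs(np.random.rand(64, 50))   # 50 objects, 64-dim visual features (made up)
Xt = np.abs(np.random.rand(32, 50))   # 32-dim tactile features (made up)
W1, W2, H = fused_nmf(Xv, Xt, k=5)
labels = H.argmax(axis=0)             # simple cluster assignment per object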

Representative Task Self-selection for Flexible Clustered Lifelong Learning

no code implementations 6 Mar 2019 Gan Sun, Yang Cong, Qianqian Wang, Bineng Zhong, Yun Fu

Consider the lifelong machine learning paradigm, whose objective is to learn a sequence of tasks depending on previous experience, e.g., a knowledge library or deep network weights.

Model Optimization, Multi-Task Learning

Multi-Image Semantic Matching by Mining Consistent Features

1 code implementation CVPR 2018 Qianqian Wang, Xiaowei Zhou, Kostas Daniilidis

This work proposes a multi-image matching method to estimate semantic correspondences across multiple images.

Graph Matching
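
The consistency idea behind joint multi-image matching can be sketched with pairwise permutation-style match matrices; the matrices below are hypothetical, and the paper enforces consistency by mining a consistent feature set across images rather than by checking triples directly.

import numpy as np

def cycle_consistency_error(P_ij, P_jk, P_ik):
    # Composing the match from image i to j with the match from j to k should
    # agree with the direct match from i to k.
    return np.abs(P_jk @ P_ij - P_ik).sum()

# Hypothetical permutation-style matches over 4 features, where P[a, b] = 1 means
# feature b in the source image matches feature a in the target image.
P_ij = np.eye(4)[[1, 0, 2, 3]]        # features 0 and 1 swapped between i and j
P_jk = np.eye(4)[[0, 1, 3, 2]]        # features 2 and 3 swapped between j and k
P_ik = P_jk @ P_ij                    # the consistent direct match
print(cycle_consistency_error(P_ij, P_jk, P_ik))   # 0.0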
