no code implementations • 23 May 2024 • Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, Carl Vondrick
Accurate reconstruction of complex dynamic scenes from just a single viewpoint continues to be a challenging task in computer vision.
no code implementations • 19 Apr 2024 • Tianyuan Zhang, Hong-Xing Yu, Rundi Wu, Brandon Y. Feng, Changxi Zheng, Noah Snavely, Jiajun Wu, William T. Freeman
Unlike unconditional or text-conditioned dynamics generation, action-conditioned dynamics requires perceiving the physical material properties of objects and grounding the 3D motion prediction on these properties, such as object stiffness.
no code implementations • CVPR 2024 • Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski
3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at rendering photorealistic novel views of complex scenes.
1 code implementation • 24 May 2023 • Rundi Wu, Ruoshi Liu, Carl Vondrick, Changxi Zheng
Specifically, we encode the input 3D textured shape into triplane feature maps that represent the signed distance and texture fields of the input.
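As a rough illustration (not the paper's implementation), a triplane representation is typically queried by projecting a 3D point onto three axis-aligned feature planes, sampling each bilinearly, summing the features, and decoding them into signed distance and texture values. The plane resolution, channel width, and linear decoder below are illustrative assumptions:

```python
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample a (C, H, W) feature plane at normalized coords u, v in [0, 1]."""
    C, H, W = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[:, y0, x0]
            + wx * (1 - wy) * plane[:, y0, x1]
            + (1 - wx) * wy * plane[:, y1, x0]
            + wx * wy * plane[:, y1, x1])

def query_triplane(planes, p, decoder):
    """Project point p in [0, 1]^3 onto the XY, XZ, YZ planes,
    sum the sampled features, and decode to (signed distance, RGB)."""
    f = (bilinear_sample(planes[0], p[0], p[1])     # XY plane
         + bilinear_sample(planes[1], p[0], p[2])   # XZ plane
         + bilinear_sample(planes[2], p[1], p[2]))  # YZ plane
    return decoder(f)

# Illustrative sizes: three 32-channel 64x64 planes, linear "decoder"
# with random weights producing 1 SDF value + 3 texture channels.
rng = np.random.default_rng(0)
planes = rng.standard_normal((3, 32, 64, 64))
W_dec = rng.standard_normal((4, 32)) * 0.1
decoder = lambda f: W_dec @ f

sdf_rgb = query_triplane(planes, np.array([0.5, 0.5, 0.5]), decoder)
print(sdf_rgb.shape)  # (4,)
```

In practice the planes are produced by a learned encoder and the decoder is a trained MLP; the sketch only shows how a single spatial query maps through the triplane structure.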
1 code implementation • ICCV 2023 • Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, Carl Vondrick
We introduce Zero-1-to-3, a framework for changing the camera viewpoint of an object given just a single RGB image.
no code implementations • 30 Sep 2022 • Honglin Chen, Rundi Wu, Eitan Grinspun, Changxi Zheng, Peter Yichen Chen
Whereas classical solvers can dynamically adapt their spatial representation only by resorting to complex remeshing algorithms, our INSR approach is intrinsically adaptive.
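The intrinsic adaptivity claim can be illustrated with a generic coordinate-network sketch (this is a stand-in, not the paper's solver): because the field is a continuous function of position, it can be evaluated at any set of query points, coarse or fine, without a remeshing step. The network size and random weights here are illustrative assumptions:

```python
import numpy as np

# A tiny coordinate MLP standing in for an implicit neural spatial
# representation of a 1D field. Weights are random here; in a solver
# they would be fit to the PDE state at each time step.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((64, 1)) * 2.0, rng.standard_normal(64)
W2, b2 = rng.standard_normal((1, 64)) * 0.1, rng.standard_normal(1)

def field(x):
    """Evaluate the implicit field at arbitrary coordinates x of shape (N, 1)."""
    h = np.tanh(x @ W1.T + b1)
    return h @ W2.T + b2

# Mesh-free adaptivity: the same representation answers coarse and
# fine queries alike -- denser evaluation points, no remeshing.
coarse = field(np.linspace(0, 1, 8)[:, None])
fine = field(np.linspace(0, 1, 1024)[:, None])
print(coarse.shape, fine.shape)  # (8, 1) (1024, 1)
```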
no code implementations • 5 Aug 2022 • Rundi Wu, Changxi Zheng
Existing generative models for 3D shapes are typically trained on a large 3D dataset, often of a specific object category.
1 code implementation • ICCV 2021 • Rundi Wu, Chang Xiao, Changxi Zheng
We present the first 3D generative model for a drastically different shape representation: describing a shape as a sequence of computer-aided design (CAD) operations.
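To make the representation concrete, here is a hedged sketch of a shape expressed as an ordered CAD program: a closed sketch profile built from curve commands, followed by an extrusion into a solid. The command vocabulary and parameterization below are illustrative assumptions, not the model's exact command set:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative CAD command types (assumed names, not the paper's vocabulary).
@dataclass
class Line:
    end: Tuple[float, float]

@dataclass
class Arc:
    end: Tuple[float, float]
    sweep_deg: float

@dataclass
class Extrude:
    distance: float

# A shape as a sequence of CAD operations: sketch a closed profile
# with line/arc commands, then extrude it into a 3D solid.
program: List[object] = [
    Line((1.0, 0.0)),
    Arc((1.0, 1.0), sweep_deg=90.0),
    Line((0.0, 1.0)),
    Line((0.0, 0.0)),
    Extrude(0.5),
]
print(len(program))  # 5
```

A generative model over this representation emits such command sequences token by token, which is what makes the output directly editable in CAD software.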
1 code implementation • NeurIPS 2020 • Ruilin Xu, Rundi Wu, Yuko Ishiwaka, Carl Vondrick, Changxi Zheng
We introduce a deep learning model for speech denoising, a long-standing challenge in audio analysis arising in numerous applications.
1 code implementation • ECCV 2020 • Rundi Wu, Xuelin Chen, Yixin Zhuang, Baoquan Chen
Several deep learning methods have been proposed for completing partial data from shape acquisition setups, i.e., filling in the regions that are missing from the captured shape.
3 code implementations • CVPR 2020 • Rundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, Baoquan Chen
We introduce PQ-NET, a deep neural network which represents and generates 3D shapes via sequential part assembly.
2 code implementations • 5 May 2019 • Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or
In order to achieve our goal, we learn to extract, directly from a video, a high-level latent motion representation, which is invariant to the skeleton geometry and the camera view.