1 code implementation • CVPR 2025 • Zhengqi Li, Richard Tucker, Forrester Cole, Qianqian Wang, Linyi Jin, Vickie Ye, Angjoo Kanazawa, Aleksander Holynski, Noah Snavely
We present a system that allows for accurate, fast, and robust estimation of camera parameters and depth maps from casual monocular videos of dynamic scenes.
no code implementations • CVPR 2025 • Brent Yi, Vickie Ye, Maya Zheng, Yunqi Li, Lea Müller, Georgios Pavlakos, Yi Ma, Jitendra Malik, Angjoo Kanazawa
We present EgoAllo, a system for human motion estimation from a head-mounted device.
1 code implementation • 10 Sep 2024 • Vickie Ye, RuiLong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, Angjoo Kanazawa
gsplat is an open-source library designed for training and developing Gaussian Splatting methods.
no code implementations • 18 Jul 2024 • Qianqian Wang, Vickie Ye, Hang Gao, Jake Austin, Zhengqi Li, Angjoo Kanazawa
Monocular dynamic reconstruction is a challenging and long-standing vision problem due to the highly ill-posed nature of the task.
1 code implementation • 4 Dec 2023 • Vickie Ye, Angjoo Kanazawa
This report provides the mathematical details of the gsplat library, a modular toolbox for efficient differentiable Gaussian splatting, as proposed by Kerbl et al.
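A central step in differentiable Gaussian splatting, as detailed in the gsplat report, is projecting each 3D Gaussian into a 2D image-space Gaussian via a first-order (EWA) approximation of the perspective projection. Below is a minimal NumPy sketch of that projection step, assuming a pinhole camera with the Gaussian mean already in camera coordinates; the function and variable names are illustrative and are not gsplat's actual API.

```python
import numpy as np

def project_gaussian(mean_cam, cov3d, fx, fy):
    """Project a 3D Gaussian (camera frame) to a 2D image-space Gaussian
    using the first-order (EWA) approximation of perspective projection."""
    x, y, z = mean_cam
    # Jacobian of the pinhole projection (u, v) = (fx*x/z, fy*y/z),
    # evaluated at the Gaussian mean.
    J = np.array([
        [fx / z, 0.0, -fx * x / z**2],
        [0.0, fy / z, -fy * y / z**2],
    ])
    mean2d = np.array([fx * x / z, fy * y / z])
    cov2d = J @ cov3d @ J.T  # 2x2 image-space covariance
    return mean2d, cov2d

# Toy example: an isotropic Gaussian 4 units in front of the camera.
mean2d, cov2d = project_gaussian(np.array([0.5, -0.2, 4.0]),
                                 0.01 * np.eye(3), fx=500.0, fy=500.0)
```

The resulting 2x2 covariance determines the elliptical footprint each Gaussian leaves on the image, which is then alpha-composited in depth order during rasterization.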
1 code implementation • CVPR 2024 • Lea Müller, Vickie Ye, Georgios Pavlakos, Michael Black, Angjoo Kanazawa
To address this, we present a novel approach that learns a prior over the 3D proxemics of two people in close social interaction, and demonstrate its use for single-view 3D reconstruction.
1 code implementation • CVPR 2023 • Vickie Ye, Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa
Our method robustly recovers the global 3D trajectories of people in challenging in-the-wild videos, such as PoseTrack.
1 code implementation • CVPR 2022 • Vickie Ye, Zhengqi Li, Richard Tucker, Angjoo Kanazawa, Noah Snavely
We describe a method to extract persistent elements of a dynamic scene from an input video.
2 code implementations • CVPR 2021 • Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa
This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
Ranked #2 on Generalizable Novel View Synthesis on NERDS 360
no code implementations • L4DC 2020 • Sarah Dean, Nikolai Matni, Benjamin Recht, Vickie Ye
Motivated by vision-based control of autonomous vehicles, we consider the problem of controlling a known linear dynamical system for which partial state information, such as vehicle position, is extracted from complex and nonlinear data, such as a camera image.
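The setting described above — a known linear system controlled from state estimates produced by a complex perception pipeline — can be illustrated with a toy closed-loop simulation. This is a hedged sketch of the problem setup only, not the paper's method: a double-integrator "vehicle" is stabilized by linear state feedback, while a stand-in perception map returns the position with bounded error.

```python
import numpy as np

# Double-integrator vehicle: state = [position, velocity], 0.1s timestep.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[2.0, 3.0]])   # illustrative stabilizing feedback gain

def perception(x, rng):
    """Stand-in for a vision module: returns the state with bounded
    position error; velocity is assumed directly measurable."""
    return x + np.array([rng.uniform(-0.05, 0.05), 0.0])

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
for _ in range(200):
    x_hat = perception(x, rng)   # control acts on the perceived state
    u = -(K @ x_hat)
    x = A @ x + (B @ u).ravel()
```

Because the perception error is bounded, the closed loop converges to a neighborhood of the origin rather than to the origin itself; characterizing such guarantees under perception errors is the flavor of question the paper studies.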
1 code implementation • CVPR 2018 • Manel Baradad, Vickie Ye, Adam B. Yedidia, Frédo Durand, William T. Freeman, Gregory W. Wornell, Antonio Torralba
We present a method for inferring a 4D light field of a hidden scene from 2D shadows cast by a known occluder on a diffuse wall.
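Shadow formation by a known occluder can be modeled as a linear measurement of the hidden scene: each wall pixel integrates light from the scene through a known visibility pattern, so recovery becomes a linear inverse problem. The toy sketch below is an illustrative simplification (a 1D scene, binary visibility, no noise), not the paper's 4D formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16   # unknown 1D hidden-scene intensities
m = 40   # observed shadow pixels on the wall
# Each wall pixel sees the scene through a known binary occlusion
# pattern, so the measurement matrix A is known a priori.
A = (rng.random((m, n)) > 0.5).astype(float)
x_true = rng.random(n)
y = A @ x_true               # observed shadow (noise-free toy case)
# Recover the hidden scene by least squares.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With enough independent measurements the system is overdetermined and the least-squares solve recovers the scene exactly in this noise-free case; the real problem adds noise, priors, and the full 4D light-field parameterization.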
no code implementations • ICCV 2017 • Katherine L. Bouman, Vickie Ye, Adam B. Yedidia, Fredo Durand, Gregory W. Wornell, Antonio Torralba, William T. Freeman
We show that walls and other obstructions with edges can be exploited as naturally-occurring "cameras" that reveal the hidden scenes beyond them.