Search Results for author: Vickie Ye

Found 13 papers, 8 papers with code

MegaSaM: Accurate, Fast, and Robust Structure and Motion from Casual Dynamic Videos

1 code implementation 5 Dec 2024 Zhengqi Li, Richard Tucker, Forrester Cole, Qianqian Wang, Linyi Jin, Vickie Ye, Angjoo Kanazawa, Aleksander Holynski, Noah Snavely

We present a system that allows for accurate, fast, and robust estimation of camera parameters and depth maps from casual monocular videos of dynamic scenes.

Depth Estimation

gsplat: An Open-Source Library for Gaussian Splatting

1 code implementation 10 Sep 2024 Vickie Ye, Ruilong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, Angjoo Kanazawa

gsplat is an open-source library designed for training and developing Gaussian Splatting methods.
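A minimal usage sketch is shown below, assuming the rasterization() entry point and tensor layout of recent gsplat releases (exact argument names may differ across versions); it renders a random batch of Gaussians from a single pinhole camera:

    import torch
    from gsplat import rasterization

    device = "cuda"
    N = 1000
    means = torch.randn(N, 3, device=device) * 0.3 + torch.tensor([0.0, 0.0, 3.0], device=device)
    quats = torch.randn(N, 4, device=device)
    quats = quats / quats.norm(dim=-1, keepdim=True)      # unit quaternions (orientations)
    scales = torch.rand(N, 3, device=device) * 0.05        # per-axis Gaussian extents
    opacities = torch.rand(N, device=device)
    colors = torch.rand(N, 3, device=device)

    viewmats = torch.eye(4, device=device)[None]           # [1, 4, 4] world-to-camera
    Ks = torch.tensor([[[300.0, 0.0, 150.0],
                        [0.0, 300.0, 100.0],
                        [0.0, 0.0, 1.0]]], device=device)  # [1, 3, 3] pinhole intrinsics

    renders, alphas, meta = rasterization(
        means, quats, scales, opacities, colors, viewmats, Ks, width=300, height=200
    )
    # renders: [1, 200, 300, 3] image; the call is differentiable w.r.t. all Gaussian parameters.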

Mathematical Supplement for the $\texttt{gsplat}$ Library

1 code implementation 4 Dec 2023 Vickie Ye, Angjoo Kanazawa

This report provides the mathematical details of the gsplat library, a modular toolbox for efficient differentiable Gaussian splatting, as proposed by Kerbl et al.
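For orientation, the quantities such a derivation covers in the 3D Gaussian Splatting formulation of Kerbl et al. are sketched below; the notation is a generic paraphrase, not necessarily the report's exact symbols. Each Gaussian's covariance is assembled from a rotation and scale, projected into screen space, and composited front to back:

$$\Sigma = R\,S\,S^\top R^\top, \qquad \Sigma' = J\,W\,\Sigma\,W^\top J^\top,$$

$$C(p) = \sum_{i=1}^{N} c_i\,\alpha_i \prod_{j<i} (1-\alpha_j), \qquad \alpha_i = o_i \exp\!\Big(-\tfrac{1}{2}(p-\mu_i')^\top {\Sigma_i'}^{-1}(p-\mu_i')\Big),$$

where $R$ is the rotation from a unit quaternion, $S$ the diagonal scale matrix, $W$ the camera rotation, $J$ the Jacobian of the perspective projection, and $\mu_i'$, $\Sigma_i'$ the projected 2D mean and covariance of Gaussian $i$.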

Generative Proxemics: A Prior for 3D Social Interaction from Images

1 code implementation CVPR 2024 Lea Müller, Vickie Ye, Georgios Pavlakos, Michael Black, Angjoo Kanazawa

To address this, we present a novel approach that learns a prior over the 3D proxemics of two people in close social interaction and demonstrate its use for single-view 3D reconstruction.

3D Reconstruction · Denoising · +1

Decoupling Human and Camera Motion from Videos in the Wild

1 code implementation CVPR 2023 Vickie Ye, Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa

Our method robustly recovers the global 3D trajectories of people in challenging in-the-wild videos, such as PoseTrack.

pixelNeRF: Neural Radiance Fields from One or Few Images

2 code implementations CVPR 2021 Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa

This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one).
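A minimal PyTorch sketch of the conditioning idea follows: pixel-aligned features are sampled at the projection of each 3D query point and fed to the field network together with the point and view direction. The two networks here are hypothetical stand-ins, not the paper's released architecture:

    import torch
    import torch.nn.functional as F

    # Hypothetical stand-ins for pixelNeRF's networks (illustration only):
    encoder = torch.nn.Conv2d(3, 64, 3, padding=1)   # image -> pixel-aligned feature map
    mlp = torch.nn.Linear(64 + 3 + 3, 4)             # (feature, point, direction) -> (rgb, sigma)

    def query_field(points_world, viewdirs, image, K, w2c):
        """Query a radiance field conditioned on pixel-aligned features from one input view."""
        feats = encoder(image)                                    # [1, 64, H, W]
        # Project each 3D query point into the input camera.
        pts_cam = (w2c[:3, :3] @ points_world.T + w2c[:3, 3:]).T  # [N, 3]
        uv = (K @ pts_cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]                               # pixel coordinates
        H, W = feats.shape[-2:]
        grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
        sampled = F.grid_sample(feats, grid[None, :, None], align_corners=True)[0, :, :, 0].T
        out = mlp(torch.cat([sampled, points_world, viewdirs], dim=-1))
        return out[:, :3].sigmoid(), out[:, 3:].relu()            # per-point color and density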

3D Reconstruction · Generalizable Novel View Synthesis · +2

Robust Guarantees for Perception-Based Control

no code implementations L4DC 2020 Sarah Dean, Nikolai Matni, Benjamin Recht, Vickie Ye

Motivated by vision-based control of autonomous vehicles, we consider the problem of controlling a known linear dynamical system for which partial state information, such as vehicle position, is extracted from complex and nonlinear data, such as a camera image.
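In the abstract's terms, the setup can be summarized as a known linear system observed through a complex nonlinear sensing map, with a learned perception component whose residual is treated as bounded noise for robust synthesis (the symbols below are illustrative, not the paper's exact notation):

$$x_{t+1} = A x_t + B u_t, \qquad z_t = q(x_t), \qquad p(z_t) = C x_t + e_t, \quad \|e_t\| \le \varepsilon.$$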

Autonomous Vehicles · Position

Inferring Light Fields From Shadows

1 code implementation CVPR 2018 Manel Baradad, Vickie Ye, Adam B. Yedidia, Frédo Durand, William T. Freeman, Gregory W. Wornell, Antonio Torralba

We present a method for inferring a 4D light field of a hidden scene from 2D shadows cast by a known occluder on a diffuse wall.
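Because the occluder is known, the shadow is a linear measurement of the hidden scene's light field, so the reconstruction can be posed as a regularized linear inverse problem; a generic formulation (not necessarily the paper's exact prior) is

$$y = A\,x + n, \qquad \hat{x} = \arg\min_{x} \|y - A x\|_2^2 + \lambda\, R(x),$$

where $y$ stacks the observed wall pixels, $x$ is the discretized 4D light field, and $A$ is the light-transport matrix determined by the occluder geometry.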

Turning Corners Into Cameras: Principles and Methods

no code implementations ICCV 2017 Katherine L. Bouman, Vickie Ye, Adam B. Yedidia, Frédo Durand, Gregory W. Wornell, Antonio Torralba, William T. Freeman

We show that walls and other obstructions with edges can be exploited as naturally-occurring "cameras" that reveal the hidden scenes beyond them.
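Under an idealized model (paraphrasing the principle rather than the paper's exact formulation), the ground intensity at angle $\theta$ around the corner integrates radiance from the hidden scene up to that angle, so an angular derivative recovers a 1-D video of the scene behind the wall:

$$I(\theta, t) \approx b(t) + \int_0^{\theta} s(\phi, t)\, d\phi \quad\Longrightarrow\quad s(\theta, t) \approx \frac{\partial I(\theta, t)}{\partial \theta}.$$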
