no code implementations • ECCV 2020 • Mark Nishimura, David B. Lindell, Christopher Metzler, Gordon Wetzstein
Monocular depth estimation algorithms successfully predict the relative depth order of objects in a scene.
no code implementations • 28 May 2024 • Kejia Yin, Varshanth R. Rao, Ruowei Jiang, Xudong Liu, Parham Aarabi, David B. Lindell
Self-supervised landmark estimation is a challenging task that demands the formation of locally distinct feature representations to identify sparse facial landmarks in the absence of annotated data.
no code implementations • 9 Apr 2024 • Anagh Malik, Noah Juravsky, Ryan Po, Gordon Wetzstein, Kiriakos N. Kutulakos, David B. Lindell
Combined with this dataset, we introduce an efficient neural volume rendering framework based on the transient field.
no code implementations • 26 Mar 2024 • Sherwin Bahmani, Xian Liu, Yifan Wang, Ivan Skorokhodov, Victor Rong, Ziwei Liu, Xihui Liu, Jeong Joon Park, Sergey Tulyakov, Gordon Wetzstein, Andrea Tagliasacchi, David B. Lindell
We learn local deformations that conform to the global trajectory using supervision from a text-to-video model.
1 code implementation • 29 Nov 2023 • Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, David B. Lindell
Recent breakthroughs in text-to-4D generation rely on pre-trained text-to-image and text-to-video models to generate dynamic 3D scenes.
no code implementations • 17 Oct 2023 • Esther Y. H. Lin, Zhecheng Wang, Rebecca Lin, Daniel Miau, Florian Kainz, Jiawen Chen, Xuaner Cecilia Zhang, David B. Lindell, Kiriakos N. Kutulakos
Optical blur is an inherent property of any lens system and is challenging to model in modern cameras because of their complex optical elements.
no code implementations • NeurIPS 2023 • Anagh Malik, Parsa Mirdehghan, Sotiris Nousias, Kiriakos N. Kutulakos, David B. Lindell
Here, we propose a novel method for rendering transient NeRFs that take as input the raw, time-resolved photon count histograms measured by a single-photon lidar system, and we seek to render such histograms from novel views.
1 code implementation • 2 Mar 2023 • Chengnan Shentu, Enxu Li, Chaojun Chen, Puspita Triana Dewi, David B. Lindell, Jessica Burgner-Kahrs
A two-segment tendon-driven continuum robot is used for data collection and testing, demonstrating accurate (mean shape error of 0.91 mm, or 0.36% of robot length) and real-time (70 fps) shape sensing on real-world data.
no code implementations • ICCV 2023 • Mian Wei, Sotiris Nousias, Rahul Gulve, David B. Lindell, Kiriakos N. Kutulakos
We consider the problem of imaging a dynamic scene over an extreme range of timescales simultaneously (seconds to picoseconds), and doing so passively, without much light, and without any timing signals from the light source(s) illuminating the scene.
no code implementations • CVPR 2023 • Samarth Sinha, Jason Y. Zhang, Andrea Tagliasacchi, Igor Gilitschenski, David B. Lindell
Camera pose estimation is a key step in standard 3D reconstruction pipelines that operate on a dense set of images of a single object or scene.
no code implementations • 28 Jun 2022 • Alexander W. Bergman, Petr Kellnhofer, Wang Yifan, Eric R. Chan, David B. Lindell, Gordon Wetzstein
Unsupervised learning of 3D-aware generative adversarial networks (GANs) using only collections of single-view 2D photographs has recently made significant progress.
1 code implementation • 1 Jun 2022 • Shayan Shekarforoush, David B. Lindell, David J. Fleet, Marcus A. Brubaker
Coordinate networks like Multiplicative Filter Networks (MFNs) and BACON offer some control over the frequency spectrum used to represent continuous signals such as images or 3D volumes.
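The frequency control described here can be illustrated with a toy stand-in: instead of the actual MFN or BACON architecture, the sketch below fits a signal with an explicit Fourier basis truncated at a chosen maximum frequency, so components outside the band simply cannot be represented. All names and the test signal are illustrative assumptions, not from the paper.

```python
import numpy as np

def fit_bandlimited(x, y, max_freq):
    """Least-squares fit of y(x) using sin/cos features up to max_freq.

    A toy analogue of the band-limited control MFNs/BACON provide:
    the model can only represent frequencies it is given features for.
    """
    feats = [np.ones_like(x)]
    for k in range(1, max_freq + 1):
        feats.append(np.sin(k * x))
        feats.append(np.cos(k * x))
    A = np.stack(feats, axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
y = np.sin(3 * x) + 0.5 * np.sin(10 * x)   # signal with a high-frequency term

low = fit_bandlimited(x, y, max_freq=5)    # cannot represent the 10-cycle term
full = fit_bandlimited(x, y, max_freq=12)  # band covers both components

err_low = np.sqrt(np.mean((low - y) ** 2))
err_full = np.sqrt(np.mean((full - y) ** 2))
```

With the band capped at 5, the residual is exactly the excluded 10-cycle component; widening the band to 12 recovers the signal essentially exactly.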
no code implementations • 1 Jun 2022 • Qingqing Zhao, David B. Lindell, Gordon Wetzstein
Given a sparse set of measurements, we are interested in recovering the initial condition or parameters of the PDE.
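The inverse-problem setup (recovering PDE parameters from sparse measurements) can be sketched with a deliberately simple stand-in for the paper's method: a 1D heat equation is simulated with finite differences, and the diffusion coefficient is recovered by matching a handful of sensor readings over a candidate grid. The solver, sensor layout, and ground-truth coefficient are all hypothetical choices for illustration.

```python
import numpy as np

def simulate_heat(u0, kappa, dt, dx, steps):
    """Explicit finite-difference solver for u_t = kappa * u_xx (Dirichlet ends)."""
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Hypothetical ground truth: diffuse a Gaussian bump with kappa = 0.1.
n, dt, steps = 64, 1e-4, 500
x = np.linspace(0, 1, n)
dx = x[1] - x[0]
u0 = np.exp(-100 * (x - 0.5) ** 2)
true_kappa = 0.1
obs_idx = np.arange(4, n, 8)                     # sparse sensor locations
obs = simulate_heat(u0, true_kappa, dt, dx, steps)[obs_idx]

# Recover kappa by matching only the sparse observations.
candidates = np.linspace(0.01, 0.3, 60)
losses = [np.sum((simulate_heat(u0, k, dt, dx, steps)[obs_idx] - obs) ** 2)
          for k in candidates]
kappa_hat = candidates[int(np.argmin(losses))]
```

Eight sensor readings suffice to pin down the coefficient here; the same match-the-measurements principle underlies the learned approach, with the grid search replaced by gradient-based optimization.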
no code implementations • 25 Mar 2022 • Connor Z. Lin, David B. Lindell, Eric R. Chan, Gordon Wetzstein
Portrait image animation enables the post-capture adjustment of these attributes from a single image while maintaining a photorealistic reconstruction of the subject's likeness or identity.
1 code implementation • CVPR 2022 • David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein
These networks are trained to map continuous input coordinates to the value of a signal at each point.
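The coordinate-to-value mapping can be made concrete with a minimal sketch: a fixed sinusoidal encoding of the input coordinate followed by a linear readout trained with plain gradient descent on mean-squared error. This is a deliberately stripped-down coordinate network, not the paper's architecture; the signal and encoding are illustrative assumptions.

```python
import numpy as np

# Continuous input coordinates in [-1, 1) and the signal value at each one.
x = np.linspace(-1, 1, 200, endpoint=False)
signal = np.sin(4 * np.pi * x) + 0.3 * np.cos(7 * np.pi * x)

# Fixed sinusoidal encoding of the coordinate plus a trained linear
# readout: a minimal form of a coordinate network.
ks = np.arange(1, 17)
feats = np.concatenate([np.sin(np.pi * np.outer(x, ks)),
                        np.cos(np.pi * np.outer(x, ks))], axis=1)

w = np.zeros(feats.shape[1])
for _ in range(200):                    # gradient descent on MSE
    residual = feats @ w - signal
    w -= 1.0 * feats.T @ residual / len(x)

mse = float(np.mean((feats @ w - signal) ** 2))
```

Because the encoded features are orthogonal on this uniform grid, the descent converges quickly and the network reproduces the signal value at every sampled coordinate.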
1 code implementation • 6 May 2021 • Julien N. P. Martel, David B. Lindell, Connor Z. Lin, Eric R. Chan, Marco Monteiro, Gordon Wetzstein
Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest.
1 code implementation • CVPR 2021 • David B. Lindell, Julien N. P. Martel, Gordon Wetzstein
For training, we instantiate the computational graph corresponding to the derivative of the network.
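The derivative-graph idea can be shown in miniature: take a tiny closed-form "network", write out its derivative by hand (standing in for instantiating the derivative's computational graph via autodiff), supervise the derivative on samples of an integrand, and then evaluate the original network to obtain the antiderivative directly. The two-parameter model and integrand are toy assumptions, not the paper's network.

```python
import numpy as np

# Tiny "network" f(x) = a*sin(x) + b*x and its hand-derived gradient
# network f'(x) = a*cos(x) + b (a stand-in for the derivative graph).
def f(x, a, b):
    return a * np.sin(x) + b * x

# Supervise the *derivative* on samples of the integrand...
x = np.linspace(0, np.pi, 100)
integrand = 2 * np.cos(x) + 0.5
A = np.stack([np.cos(x), np.ones_like(x)], axis=1)   # f'(x) = a*cos(x) + b*1
(a, b), *_ = np.linalg.lstsq(A, integrand, rcond=None)

# ...then the original network evaluates the antiderivative directly.
t = np.pi / 2
estimate = f(t, a, b) - f(0.0, a, b)   # integral of the integrand on [0, t]
exact = 2 * np.sin(t) + 0.5 * t
```

Training touches only the derivative; the definite integral then costs two evaluations of the original network, which is the efficiency argument behind automatic integration.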
24 code implementations • NeurIPS 2020 • Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein
However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations.
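Why periodic activations help with derivatives can be seen in one layer: the derivative of a sine layer is again a sine layer with shifted phase, so derivatives of the representation stay smooth and well-defined. The sketch below checks this analytically derived gradient against finite differences; the layer width and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# One sine layer y = sin(w*x + b). Its derivative in x,
# w*cos(w*x + b) = w*sin(w*x + b + pi/2), is itself a sine layer
# with a phase shift, so every derivative order stays smooth.
w = rng.normal(size=8)
b = rng.normal(size=8)

def siren_layer(x):
    return np.sin(w * x + b)

def siren_layer_grad(x):
    return w * np.sin(w * x + b + np.pi / 2)

x0 = 0.3
eps = 1e-6
numeric = (siren_layer(x0 + eps) - siren_layer(x0 - eps)) / (2 * eps)
analytic = siren_layer_grad(x0)
```

A ReLU network, by contrast, has piecewise-constant first derivatives and zero second derivatives almost everywhere, which is what makes it a poor fit for signals defined through differential constraints.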
no code implementations • 13 Dec 2019 • Christopher A. Metzler, David B. Lindell, Gordon Wetzstein
Non-line-of-sight (NLOS) imaging and tracking is an emerging technology that allows the shape or position of objects around corners or behind diffusers to be recovered from transient, time-of-flight measurements.
1 code implementation • CVPR 2019 • David B. Lindell, Gordon Wetzstein, Vladlen Koltun
Non-line-of-sight (NLOS) imaging enables unprecedented capabilities in a wide range of applications, including robotic and machine vision, remote sensing, autonomous vehicle navigation, and medical imaging.
no code implementations • CVPR 2017 • Matthew O'Toole, Felix Heide, David B. Lindell, Kai Zang, Steven Diamond, Gordon Wetzstein
Computer vision algorithms build on 2D images or 3D videos that capture dynamic events at the millisecond time scale.