Search Results for author: David B. Lindell

Found 20 papers, 7 papers with code

Flying with Photons: Rendering Novel Views of Propagating Light

no code implementations 9 Apr 2024 Anagh Malik, Noah Juravsky, Ryan Po, Gordon Wetzstein, Kiriakos N. Kutulakos, David B. Lindell

Alongside this dataset, we introduce an efficient neural volume rendering framework based on the transient field.

Neural Rendering
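
Read literally, a "transient field" extends a radiance field with a time dimension. The sketch below is not the paper's renderer; under that assumption, it only illustrates querying a hypothetical `TransientField` module over space and time and integrating over time to recover a conventional, steady-state measurement. The module, its inputs, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class TransientField(nn.Module):
    """Hypothetical time-resolved radiance field: (x, y, z, t) -> radiance.

    A stand-in for the neural transient field described in the abstract; the
    real model also conditions on viewing direction and predicts density.
    """
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # non-negative radiance
        )

    def forward(self, xyz, t):
        return self.net(torch.cat([xyz, t], dim=-1))

field = TransientField()
points = torch.rand(1024, 3)                 # sample points along rays
t_bins = torch.linspace(0.0, 1.0, 64)        # normalized time bins

# Query the field at every (point, time) pair: (1024, 64) transient samples.
xyz = points[:, None, :].expand(-1, t_bins.numel(), -1)
t = t_bins[None, :, None].expand(points.shape[0], -1, -1)
transient = field(xyz.reshape(-1, 3), t.reshape(-1, 1)).reshape(1024, 64)

# Integrating over the time axis collapses the transient back to a
# conventional, time-integrated measurement per sample.
steady_state = transient.sum(dim=-1)
```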

4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling

no code implementations 29 Nov 2023 Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, David B. Lindell

Recent breakthroughs in text-to-4D generation rely on pre-trained text-to-image and text-to-video models to generate dynamic 3D scenes.
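
For context, score distillation sampling (SDS) is the standard mechanism by which a pre-trained text-to-image or text-to-video diffusion model supervises a 3D/4D representation. The sketch below is a generic, simplified SDS step, not 4D-fy's hybrid variant; the `denoiser` callable, its signature, and the timestep and weighting choices are assumptions for illustration.

```python
import torch

def sds_step(rendered, text_embedding, denoiser, alphas_cumprod):
    """One simplified score-distillation step on a differentiably rendered image.

    rendered:        (B, C, H, W) image produced by the 3D/4D representation,
                     with gradients flowing back to the scene parameters.
    denoiser:        hypothetical pretrained noise-prediction model, called as
                     denoiser(noisy_image, timestep, text_embedding).
    alphas_cumprod:  (T,) cumulative products of the diffusion noise schedule.
    """
    b = rendered.shape[0]
    t = torch.randint(50, 950, (b,), device=rendered.device)  # random timestep
    alpha_bar = alphas_cumprod[t].view(b, 1, 1, 1)

    noise = torch.randn_like(rendered)
    noisy = alpha_bar.sqrt() * rendered + (1 - alpha_bar).sqrt() * noise

    with torch.no_grad():                                      # frozen diffusion model
        noise_pred = denoiser(noisy, t, text_embedding)

    # SDS gradient w(t) * (eps_pred - eps), pushed through the renderer via a
    # surrogate loss whose gradient w.r.t. `rendered` equals that quantity.
    w = 1.0 - alpha_bar
    grad = w * (noise_pred - noise)
    return (grad.detach() * rendered).sum()
```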

Learning Lens Blur Fields

no code implementations 17 Oct 2023 Esther Y. H. Lin, Zhecheng Wang, Rebecca Lin, Daniel Miau, Florian Kainz, Jiawen Chen, Xuaner Cecilia Zhang, David B. Lindell, Kiriakos N. Kutulakos

Optical blur is an inherent property of any lens system and is challenging to model in modern cameras because of their complex optical elements.

Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction

no code implementations NeurIPS 2023 Anagh Malik, Parsa Mirdehghan, Sotiris Nousias, Kiriakos N. Kutulakos, David B. Lindell

Here, we propose a novel method for rendering transient NeRFs that take as input the raw, time-resolved photon count histograms measured by a single-photon lidar system, and we seek to render such histograms from novel views.

3D Reconstruction, Autonomous Driving
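
A single-photon lidar histogram bins photon arrivals by time of flight, so rendering one from a radiance-field-style model amounts to scattering each ray sample's volume-rendering weight into the time bin given by its round-trip distance. The sketch below shows only that binning idea under simplified assumptions (known densities and radiances along a ray, speed of light normalized to 1); it is not the paper's renderer.

```python
import torch

def render_transient_histogram(depths, sigmas, radiances, n_bins, bin_width):
    """Bin volume-rendering contributions along one ray into a time-of-flight histogram.

    depths:    (N,) sample distances along the ray
    sigmas:    (N,) volume densities at the samples
    radiances: (N,) reflected radiance at the samples
    """
    deltas = torch.diff(depths, append=depths[-1:] + 1e10)
    alphas = 1.0 - torch.exp(-sigmas * deltas)                  # opacity per sample
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alphas[:-1]]), dim=0)
    weights = trans * alphas                                    # standard NeRF weights

    # Round-trip time of flight maps each sample to a histogram bin
    # (speed of light normalized to 1 here for simplicity).
    bins = torch.clamp((2.0 * depths / bin_width).long(), max=n_bins - 1)
    histogram = torch.zeros(n_bins)
    histogram.scatter_add_(0, bins, weights * radiances)
    return histogram
```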

MoSS: Monocular Shape Sensing for Continuum Robots

1 code implementation 2 Mar 2023 Chengnan Shentu, Enxu Li, Chaojun Chen, Puspita Triana Dewi, David B. Lindell, Jessica Burgner-Kahrs

A two-segment tendon-driven continuum robot is used for data collection and testing, demonstrating accurate (mean shape error of 0.91 mm, or 0.36% of robot length) and real-time (70 fps) shape sensing on real-world data.
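
The reported metric (mean shape error of 0.91 mm, or 0.36% of robot length) reads as an average point-to-point distance between predicted and ground-truth backbone shapes, normalized by robot length. The snippet below computes such a metric as one plausible reading; the paper's exact definition may differ, and the example robot length is illustrative.

```python
import numpy as np

def mean_shape_error(pred_backbone, gt_backbone, robot_length_mm):
    """Mean Euclidean distance between corresponding backbone points (mm),
    also expressed as a percentage of the robot's total length.

    pred_backbone, gt_backbone: (N, 3) arrays of 3D points along the backbone.
    """
    err_mm = np.linalg.norm(pred_backbone - gt_backbone, axis=1).mean()
    return err_mm, 100.0 * err_mm / robot_length_mm

# With a ~250 mm robot (length chosen purely for illustration), a mean error
# of 0.9 mm corresponds to roughly 0.36% of the robot length.
```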

Passive Ultra-Wideband Single-Photon Imaging

no code implementations ICCV 2023 Mian Wei, Sotiris Nousias, Rahul Gulve, David B. Lindell, Kiriakos N. Kutulakos

We consider the problem of imaging a dynamic scene over an extreme range of timescales simultaneously (seconds to picoseconds), and doing so passively, without much light, and without any timing signals from the light source(s) emitting it.
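
One way to read "an extreme range of timescales simultaneously" is that a single stream of passively detected photon timestamps can be binned at very different resolutions, from picosecond-scale transients up to second-scale video. The sketch below only illustrates that multi-timescale binning; it is not the paper's reconstruction method, and the function name and parameters are made up for illustration.

```python
import numpy as np

def multiscale_histograms(timestamps_s, duration_s, bin_widths_s):
    """Bin one stream of photon arrival timestamps at several timescales.

    timestamps_s: 1D array of photon detection times (seconds).
    bin_widths_s: iterable of bin widths, e.g. [1e-6, 1e-3, 1.0].
    """
    out = {}
    for width in bin_widths_s:
        n_bins = int(np.ceil(duration_s / width))
        counts, _ = np.histogram(timestamps_s, bins=n_bins, range=(0.0, duration_s))
        out[width] = counts
    return out

# The same photon stream viewed at microsecond and millisecond resolution:
rng = np.random.default_rng(0)
stream = np.sort(rng.uniform(0.0, 1.0, size=10_000))
hists = multiscale_histograms(stream, duration_s=1.0, bin_widths_s=[1e-6, 1e-3])
```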

SparsePose: Sparse-View Camera Pose Regression and Refinement

no code implementations CVPR 2023 Samarth Sinha, Jason Y. Zhang, Andrea Tagliasacchi, Igor Gilitschenski, David B. Lindell

Camera pose estimation is a key step in standard 3D reconstruction pipelines that operate on a dense set of images of a single object or scene.

3D Reconstruction, Pose Estimation +1

Generative Neural Articulated Radiance Fields

no code implementations 28 Jun 2022 Alexander W. Bergman, Petr Kellnhofer, Wang Yifan, Eric R. Chan, David B. Lindell, Gordon Wetzstein

Unsupervised learning of 3D-aware generative adversarial networks (GANs) using only collections of single-view 2D photographs has very recently made much progress.

Residual Multiplicative Filter Networks for Multiscale Reconstruction

1 code implementation 1 Jun 2022 Shayan Shekarforoush, David B. Lindell, David J. Fleet, Marcus A. Brubaker

Coordinate networks like Multiplicative Filter Networks (MFNs) and BACON offer some control over the frequency spectrum used to represent continuous signals such as images or 3D volumes.
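
For context, a multiplicative filter network forms its output by repeatedly multiplying linear transforms of hidden features with sinusoidal filters of the input, which is what gives control over the output frequency spectrum. The minimal MFN below follows that published recipe in spirit but omits this paper's residual connections; the layer sizes, initialization, and frequency scaling are illustrative choices.

```python
import torch
import torch.nn as nn

class MinimalMFN(nn.Module):
    """Minimal multiplicative filter network: z_{i+1} = (W_i z_i + b_i) * sin(omega_i x + phi_i)."""

    def __init__(self, in_dim=2, hidden=128, out_dim=1, n_layers=4, max_freq=64.0):
        super().__init__()
        self.filters = nn.ModuleList(nn.Linear(in_dim, hidden) for _ in range(n_layers))
        self.linears = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(n_layers - 1))
        self.out = nn.Linear(hidden, out_dim)
        # Scale filter frequencies so each layer contributes a limited band.
        for f in self.filters:
            nn.init.uniform_(f.weight, -max_freq / n_layers, max_freq / n_layers)

    def forward(self, x):
        z = torch.sin(self.filters[0](x))
        for lin, filt in zip(self.linears, self.filters[1:]):
            z = lin(z) * torch.sin(filt(x))
        return self.out(z)

# Evaluate on 1D coordinates in [-1, 1]:
model = MinimalMFN(in_dim=1)
coords = torch.linspace(-1, 1, 256).unsqueeze(-1)
values = model(coords)
```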

Learning to Solve PDE-constrained Inverse Problems with Graph Networks

no code implementations 1 Jun 2022 Qingqing Zhao, David B. Lindell, Gordon Wetzstein

Given a sparse set of measurements, we are interested in recovering the initial condition or parameters of the PDE.
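
The paper solves this with graph networks; as a much simpler stand-in for the same inverse-problem setup, the sketch below recovers a scalar PDE parameter (the diffusivity of a 1D heat equation) from sparse measurements by differentiating through an explicit finite-difference solver. The solver, grid resolution, and optimizer settings are illustrative assumptions, not the paper's method.

```python
import torch

def heat_solve(u0, kappa, dt=1e-4, dx=1e-2, steps=200):
    """Explicit finite-difference solve of u_t = kappa * u_xx with fixed boundaries."""
    u = u0.clone()
    for _ in range(steps):
        lap = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
        u = torch.cat([u[:1], u[1:-1] + dt * kappa * lap, u[-1:]])
    return u

x = torch.linspace(0, 1, 101)
u0 = torch.sin(torch.pi * x)                   # known initial condition
true_kappa = torch.tensor(0.05)
obs_idx = torch.tensor([10, 30, 50, 70, 90])   # sparse measurement locations
observations = heat_solve(u0, true_kappa)[obs_idx]

# Recover the unknown diffusivity by gradient descent through the solver.
kappa = torch.tensor(0.2, requires_grad=True)
opt = torch.optim.Adam([kappa], lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = ((heat_solve(u0, kappa)[obs_idx] - observations) ** 2).mean()
    loss.backward()
    opt.step()
```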

3D GAN Inversion for Controllable Portrait Image Animation

no code implementations 25 Mar 2022 Connor Z. Lin, David B. Lindell, Eric R. Chan, Gordon Wetzstein

Portrait image animation enables the post-capture adjustment of these attributes from a single image while maintaining a photorealistic reconstruction of the subject's likeness or identity.

Attribute, Generative Adversarial Network +2

BACON: Band-limited Coordinate Networks for Multiscale Scene Representation

1 code implementation CVPR 2022 David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein

These networks are trained to map continuous input coordinates to the value of a signal at each point.
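
That sentence describes the generic coordinate-network setup: sample input coordinates, evaluate the network, and regress the signal value at each point. The loop below shows that fitting procedure for a 2D signal using a plain MLP placeholder; BACON's band-limited architecture and multiscale outputs are not reproduced here.

```python
import torch
import torch.nn as nn

# Placeholder coordinate network: (x, y) in [-1, 1]^2 -> grayscale value.
net = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

# Target signal: any H x W image tensor; random data stands in here.
H, W = 64, 64
image = torch.rand(H, W)
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2)
targets = image.reshape(-1, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = ((net(coords) - targets) ** 2).mean()        # fit the signal value at each point
    loss.backward()
    opt.step()
```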

ACORN: Adaptive Coordinate Networks for Neural Scene Representation

1 code implementation 6 May 2021 Julien N. P. Martel, David B. Lindell, Connor Z. Lin, Eric R. Chan, Marco Monteiro, Gordon Wetzstein

Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest.

3D Shape Representation, Representation Learning
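
A simplified way to picture "adaptively allocating resources based on local complexity" is a quadtree over the signal domain that keeps subdividing blocks with complex content (here, high variance) and leaves smooth blocks coarse. The sketch below implements only that subdivision heuristic on a square image; it is not ACORN's hybrid implicit-explicit architecture or its optimization-based block allocation.

```python
import numpy as np

def adaptive_blocks(image, threshold=0.01, min_size=8):
    """Quadtree-style subdivision: split blocks whose variance exceeds a threshold.

    Assumes a square image whose side is a power-of-two multiple of min_size.
    Returns a list of (row, col, size) blocks; complex regions end up with many
    small blocks, smooth regions with a few large ones.
    """
    h, w = image.shape
    blocks, stack = [], [(0, 0, min(h, w))]
    while stack:
        r, c, s = stack.pop()
        patch = image[r:r + s, c:c + s]
        if s > min_size and patch.var() > threshold:
            half = s // 2
            stack += [(r, c, half), (r, c + half, half),
                      (r + half, c, half), (r + half, c + half, half)]
        else:
            blocks.append((r, c, s))
    return blocks

rng = np.random.default_rng(0)
img = rng.random((128, 128)) * np.linspace(0, 1, 128)   # more detail on one side
print(len(adaptive_blocks(img)), "blocks allocated")
```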

Implicit Neural Representations with Periodic Activation Functions

24 code implementations NeurIPS 2020 Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein

However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations.

Image Inpainting
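
The paper's remedy for both limitations is a sinusoidal activation with a carefully scaled initialization (the SIREN architecture), which keeps the network's derivatives well-behaved sinusoids too. The layer below follows the published construction; omega_0 = 30 and the uniform initialization bounds match the paper's recommendation, while the network size is an arbitrary choice.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(omega_0 * x), initialized as in the SIREN paper."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A small SIREN mapping 2D coordinates to one channel; because sin is smooth,
# spatial derivatives of the output remain well-defined and trainable.
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    nn.Linear(256, 1),
)
```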

Keyhole Imaging: Non-Line-of-Sight Imaging and Tracking of Moving Objects Along a Single Optical Path

no code implementations 13 Dec 2019 Christopher A. Metzler, David B. Lindell, Gordon Wetzstein

Non-line-of-sight (NLOS) imaging and tracking is an emerging technology that allows the shape or position of objects around corners or behind diffusers to be recovered from transient, time-of-flight measurements.

Autonomous Driving

Acoustic Non-Line-Of-Sight Imaging

1 code implementation CVPR 2019 David B. Lindell, Gordon Wetzstein, Vladlen Koltun

Non-line-of-sight (NLOS) imaging enables unprecedented capabilities in a wide range of applications, including robotic and machine vision, remote sensing, autonomous vehicle navigation, and medical imaging.

Seismic Imaging

Reconstructing Transient Images From Single-Photon Sensors

no code implementations CVPR 2017 Matthew O'Toole, Felix Heide, David B. Lindell, Kai Zang, Steven Diamond, Gordon Wetzstein

Computer vision algorithms build on 2D images or 3D videos that capture dynamic events at the millisecond time scale.
