Search Results for author: Dan Rosenbaum

Found 14 papers, 6 papers with code

Osmosis: RGBD Diffusion Prior for Underwater Image Restoration

no code implementations · 21 Mar 2024 · Opher Bar Nathan, Deborah Levy, Tali Treibitz, Dan Rosenbaum

Using this prior together with a novel guidance method based on the underwater image formation model, we generate posterior samples of clean images, removing the water effects.

Underwater Image Restoration

SeaThru-NeRF: Neural Radiance Fields in Scattering Media

1 code implementation · CVPR 2023 · Deborah Levy, Amit Peleg, Naama Pearl, Dan Rosenbaum, Derya Akkaynak, Simon Korman, Tali Treibitz

Even more excitingly, we can render clear views of these scenes, removing the medium between the camera and the scene and reconstructing the appearance and depth of far objects, which are severely occluded by the medium.

Spatial Functa: Scaling Functa to ImageNet Classification and Generation

no code implementations · 6 Feb 2023 · Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Richard Schwarz, Hyunjik Kim

Neural fields, also known as implicit neural representations, have emerged as a powerful means to represent complex signals of various modalities.

Classification · Image Generation

From data to functa: Your data point is a function and you can treat it like one

1 code implementation · 28 Jan 2022 · Emilien Dupont, Hyunjik Kim, S. M. Ali Eslami, Danilo Rezende, Dan Rosenbaum

A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location.

Imputation · Novel View Synthesis
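The functa entry above describes an implicit neural representation: a neural function trained to output the appropriate measurement value at any input spatial location. A minimal sketch of that idea on a 1-D signal, fitting a tiny tanh MLP to coordinate-value pairs by gradient descent (the signal, network size, and learning rate here are illustrative choices, not from the paper):

```python
import numpy as np

# Fit f_theta(x) ≈ signal(x) so the network itself becomes the representation.
rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 64)[:, None]   # input spatial locations
ys = np.sin(np.pi * xs)                    # the "signal" to represent

H = 32                                     # hidden width (arbitrary)
W1 = rng.normal(0, 2.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

lr, n = 0.1, len(xs)
for _ in range(3000):
    h = np.tanh(xs @ W1 + b1)              # hidden activations
    err = (h @ W2 + b2) - ys               # residual at every location
    # Backprop of mean squared error through the two layers.
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = xs.T @ dh / n; gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(xs @ W1 + b1) @ W2 + b2 - ys) ** 2))
```

After training, querying the network at any continuous coordinate reconstructs the signal, which is what lets such representations stand in for the data point itself.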

Attentive Neural Processes

7 code implementations · ICLR 2019 · Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, Yee Whye Teh

Neural Processes (NPs) (Garnelo et al., 2018a,b) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions.

Regression
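The abstract excerpt above summarises the NP pipeline: encode each context (x, y) pair, aggregate into a single permutation-invariant representation, and decode that representation together with a target input into a predictive distribution. A hypothetical forward-pass sketch of that pipeline with random, untrained weights (no attention and no training shown; all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
D_R = 16  # representation size (an arbitrary choice)

def mlp(params, x):
    """One-hidden-layer MLP used for both encoder and decoder."""
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

def init(d_in, d_hid, d_out):
    return (rng.normal(0, 0.5, (d_in, d_hid)), np.zeros(d_hid),
            rng.normal(0, 0.5, (d_hid, d_out)), np.zeros(d_out))

enc = init(2, 32, D_R)      # encoder: one (x, y) pair -> r_i
dec = init(D_R + 1, 32, 2)  # decoder: (r, x_target) -> (mu, log_sigma)

# Context set: 5 observed input-output pairs from some unknown function.
x_ctx = rng.uniform(-1, 1, (5, 1))
y_ctx = np.sin(np.pi * x_ctx)
r_i = mlp(enc, np.concatenate([x_ctx, y_ctx], axis=1))
r = r_i.mean(axis=0)        # mean aggregation -> permutation invariance

# Predict a distribution over outputs at 10 target inputs.
x_tgt = np.linspace(-1, 1, 10)[:, None]
out = mlp(dec, np.concatenate([np.broadcast_to(r, (10, D_R)), x_tgt], axis=1))
mu, sigma = out[:, 0], np.exp(out[:, 1])   # predictive mean and std per target
```

The mean over encoded pairs is what makes the predictive distribution depend on the context as a set rather than a sequence; the attention mechanism of this paper replaces that uniform mean with a learned, target-dependent weighting.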

Learning models for visual 3D localization with implicit mapping

no code implementations · 4 Jul 2018 · Dan Rosenbaum, Frederic Besse, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami

We consider learning based methods for visual localization that do not require the construction of explicit maps in the form of point clouds or voxels.

Visual Localization

Neural Processes

13 code implementations · 4 Jul 2018 · Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, Yee Whye Teh

A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision.
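The opening sentence above in miniature: a parameterised function tuned by gradient descent to fit a labelled collection of data. Here the function is simply f(x) = w·x + b and the labels come from a made-up rule, so the recovered parameters can be checked (everything in this toy is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x - 0.5                  # labels generated by a known rule

w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    err = (w * x + b) - y          # residual of the current fit
    w -= lr * (2 * err * x).mean() # gradient of mean squared error w.r.t. w
    b -= lr * (2 * err).mean()     # gradient of mean squared error w.r.t. b
```

Gradient descent drives (w, b) toward (2.0, -0.5); a neural network is the same loop with a much richer parameterised function.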

Beyond Brightness Constancy: Learning Noise Models for Optical Flow

no code implementations · 11 Apr 2016 · Dan Rosenbaum, Yair Weiss

Consistent with current practice, we find that robust versions of gradient constancy are better models than simple brightness constancy, but a learned GMM that models the density of patches of warp error gives a much better fit than any existing assumption of constancy.

Denoising · Optical Flow Estimation

Statistics of RGBD Images

no code implementations · 11 Apr 2016 · Dan Rosenbaum, Yair Weiss

We then use the generative models together with a degradation model and obtain a Bayes Least Squares (BLS) estimator of the D channel given the RGB channels.

The Return of the Gating Network: Combining Generative Models and Discriminative Training in Natural Image Priors

no code implementations · NeurIPS 2015 · Dan Rosenbaum, Yair Weiss

In this paper we show how to combine the strengths of both approaches by training a discriminative, feed-forward architecture to predict the state of latent variables in a generative model of natural images.

Image Restoration

Subspace Learning with Partial Information

no code implementations · 19 Feb 2014 · Alon Gonen, Dan Rosenbaum, Yonina Eldar, Shai Shalev-Shwartz

The goal of subspace learning is to find a $k$-dimensional subspace of $\mathbb{R}^d$, such that the expected squared distance between instance vectors and the subspace is as small as possible.
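For the empirical distribution with full information, the objective above — a $k$-dimensional subspace of $\mathbb{R}^d$ minimising the expected squared distance to the instance vectors — is solved by the span of the top-$k$ right singular vectors of the data matrix (standard PCA/SVD; the dimensions and noise level below are illustrative, and this sketch does not implement the paper's partial-information setting):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 5, 2, 500

# Synthetic data concentrated near a k-dimensional subspace, plus small noise.
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]          # orthonormal d x k
X = rng.normal(size=(n, k)) @ basis.T + 0.01 * rng.normal(size=(n, d))

# Top-k right singular vectors span the subspace minimising the
# average squared distance for this (uncentred) formulation.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U = Vt[:k].T                                              # d x k orthonormal basis

residual = X - X @ U @ U.T            # component of each point outside the subspace
avg_sq_dist = float((residual ** 2).sum(axis=1).mean())
```

The interesting question the paper addresses is how well this objective can still be optimised when only partial information about each instance vector is available, where the full SVD above is no longer computable.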

Learning the Local Statistics of Optical Flow

no code implementations · NeurIPS 2013 · Dan Rosenbaum, Daniel Zoran, Yair Weiss

Motivated by recent progress in natural image statistics, we use newly available datasets with ground truth optical flow to learn the local statistics of optical flow and rigorously compare the learned model to prior models assumed by computer vision optical flow algorithms.

Optical Flow Estimation
