Search Results for author: David Fleet

Found 9 papers, 4 papers with code

Denoising Diffusion Probabilistic Models for Robust Image Super-Resolution in the Wild

no code implementations • 15 Feb 2023 • Hshmat Sahak, Daniel Watson, Chitwan Saharia, David Fleet

Diffusion models have shown promising results on single-image super-resolution and other image-to-image translation tasks.

Blind Super-Resolution • Denoising • +2

Scalable Adaptive Computation for Iterative Generation

2 code implementations • 22 Dec 2022 • Allan Jabri, David Fleet, Ting Chen

We show how to leverage recurrence by conditioning the latent tokens at each forward pass of the reverse diffusion process on those from the prior computation, i.e., latent self-conditioning.

Image Generation • Video Generation • +1
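
The entry above describes latent self-conditioning: each pass of the reverse diffusion process is conditioned on the latent tokens produced by the previous pass, rather than recomputing them from scratch. A minimal, hypothetical sketch of that conditioning pattern follows; the stand-in network, dimensions, and toy update rule are assumptions for illustration, not the model from the paper.

```python
# Minimal sketch of latent self-conditioning in a reverse-diffusion loop.
# The "network" here is a single random linear map, purely a stand-in;
# only the conditioning pattern (reusing latents across steps) is shown.
import numpy as np

rng = np.random.default_rng(0)
D_X, D_Z = 64, 16                                        # data / latent dims (arbitrary)
W_READ = rng.normal(scale=0.1, size=(D_X + D_Z, D_Z))    # stand-in read network
W_WRITE = rng.normal(scale=0.1, size=(D_Z, D_X))         # stand-in write network

def denoise_step(x_t, z_prev):
    """One reverse step: latents are warm-started from the previous step's
    latents (self-conditioning) instead of being rebuilt from scratch."""
    z = np.tanh(np.concatenate([x_t, z_prev]) @ W_READ)   # read data + prior latents
    eps_hat = z @ W_WRITE                                  # write back to data space
    return eps_hat, z

x_t = rng.normal(size=D_X)        # start from noise
z = np.zeros(D_Z)                 # no prior latents at the first step
for t in range(10, 0, -1):
    eps_hat, z = denoise_step(x_t, z)   # z carries over between iterations
    x_t = x_t - 0.1 * eps_hat           # toy update standing in for a DDPM step
```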

Disentangling Architecture and Training for Optical Flow

no code implementations • 21 Mar 2022 • Deqing Sun, Charles Herrmann, Fitsum Reda, Michael Rubinstein, David Fleet, William T. Freeman

Our newly trained RAFT achieves an Fl-all score of 4.31% on KITTI 2015, more accurate than all published optical flow methods at the time of writing.

Optical Flow Estimation

Differentiable probabilistic models of scientific imaging with the Fourier slice theorem

1 code implementation • 18 Jun 2019 • Karen Ullrich, Rianne van den Berg, Marcus Brubaker, David Fleet, Max Welling

Finally, we demonstrate how the reconstruction algorithm can be extended with an amortized inference scheme on unknown attributes such as object pose.

3D Reconstruction • Computational Efficiency • +3
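
The reconstruction approach named in the title rests on the classical Fourier slice (projection-slice) theorem. For reference, the 2-D statement is given below in generic notation (not the paper's): the 1-D Fourier transform of a projection of f equals a central slice through the 2-D Fourier transform of f.

```latex
% Projection-slice theorem, 2-D case: P_theta projects f onto the line at
% angle theta; F_1 and F_2 are the 1-D and 2-D Fourier transforms.
\mathcal{F}_1\!\left[\,\mathcal{P}_\theta f\,\right](\omega)
  \;=\;
\mathcal{F}_2\!\left[f\right](\omega\cos\theta,\;\omega\sin\theta)
```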

TzK: Flow-Based Conditional Generative Model

no code implementations • 5 Feb 2019 • Micha Livne, David Fleet

We formulate a new class of conditional generative models based on probability flows.

Attribute
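
Flow-based conditional models of this kind rest on the change-of-variables identity for an invertible map. The generic conditional form is shown below; the notation is illustrative and not TzK's specific formulation.

```latex
% Conditional change of variables for an invertible map z = f_theta(x; c)
% with base density p_Z; c is the conditioning variable.
\log p_X(x \mid c)
  \;=\;
\log p_Z\!\bigl(f_\theta(x; c)\bigr)
  \;+\;
\log\left|\det \frac{\partial f_\theta(x; c)}{\partial x}\right|
```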

On the effectiveness of task granularity for transfer learning

1 code implementation • 24 Apr 2018 • Farzaneh Mahdisoltani, Guillaume Berger, Waseem Gharbieh, David Fleet, Roland Memisevic

We describe a DNN for video classification and captioning, trained end-to-end with shared features to solve tasks at different levels of granularity, and we explore the link between the granularity of a source task and the quality of the learned features for transfer learning.

Classification • General Classification • +2
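
A hedged sketch of the shared-feature, multi-granularity setup the abstract describes: one backbone feeding a coarse classification head and a finer-grained captioning head, trained jointly end to end. Module names, dimensions, and the toy loss below are illustrative assumptions, not the paper's actual video architecture.

```python
# Illustrative multi-task network: shared features, two heads at different
# levels of granularity, trained with a joint loss.
import torch
import torch.nn as nn

class MultiGranularityNet(nn.Module):
    def __init__(self, feat_dim=256, n_coarse=50, vocab_size=1000):
        super().__init__()
        # shared feature extractor (stand-in for a real video backbone)
        self.backbone = nn.Sequential(nn.Linear(1024, feat_dim), nn.ReLU())
        self.cls_head = nn.Linear(feat_dim, n_coarse)        # coarse labels
        self.caption_head = nn.Linear(feat_dim, vocab_size)  # fine-grained tokens

    def forward(self, clip_feats):
        h = self.backbone(clip_feats)
        return self.cls_head(h), self.caption_head(h)

model = MultiGranularityNet()
x = torch.randn(8, 1024)                      # a batch of pooled clip features
coarse_labels = torch.randint(0, 50, (8,))
token_targets = torch.randint(0, 1000, (8,))  # one target token per clip (toy)
logits_cls, logits_cap = model(x)
loss = nn.functional.cross_entropy(logits_cls, coarse_labels) \
     + nn.functional.cross_entropy(logits_cap, token_targets)
loss.backward()
```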

Subspace Selection to Suppress Confounding Source Domain Information in AAM Transfer Learning

no code implementations • 28 Aug 2017 • Azin Asgarian, Ahmed Bilal Ashraf, David Fleet, Babak Taati

We propose a subspace transfer learning method, in which we select a subspace from the source that best describes the target space.

Transfer Learning
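
A hedged sketch of the general idea of selecting a source subspace that best describes the target data follows. The ranking criterion used here (target variance captured per source principal direction) is an illustrative assumption, not necessarily the paper's selection procedure.

```python
# Select the source principal directions that best describe the target data.
import numpy as np

def select_source_subspace(X_src, X_tgt, k=10):
    # PCA basis of the source domain (rows = samples, columns = features)
    Xs = X_src - X_src.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)  # rows of Vt = directions
    # score each source direction by the target variance it captures
    Xt = X_tgt - X_tgt.mean(axis=0)
    scores = ((Xt @ Vt.T) ** 2).mean(axis=0)
    keep = np.argsort(scores)[::-1][:k]
    return Vt[keep]                                    # (k, n_features) subspace

rng = np.random.default_rng(0)
basis = select_source_subspace(rng.normal(size=(200, 64)),
                               rng.normal(size=(40, 64)), k=5)
print(basis.shape)   # (5, 64)
```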
