Search Results for author: Yair Weiss

Found 25 papers, 10 papers with code

Intriguing Properties of Modern GANs

no code implementations 21 Feb 2024 Roy Friedman, Yair Weiss

This has led many to believe that "GANs capture the training data manifold".

What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective

1 code implementation 6 Jun 2022 Rhea Chowers, Yair Weiss

It has previously been reported that the representation that is learned in the first layer of deep Convolutional Neural Networks (CNNs) is highly consistent across initializations and architectures.

Generating natural images with direct Patch Distributions Matching

2 code implementations 22 Mar 2022 Ariel Elnekave, Yair Weiss

On a number of image generation tasks we show that our results are often superior to single-image-GANs, require no training, and can generate high quality images in a few seconds.

Image Generation

When Is Unsupervised Disentanglement Possible?

no code implementations NeurIPS 2021 Daniella Horan, Eitan Richardson, Yair Weiss

In this paper, we show that the assumption of local isometry together with non-Gaussianity of the factors, is sufficient to provably recover disentangled representations from data.

Disentanglement

Understanding and Simplifying Perceptual Distances

no code implementations CVPR 2021 Dan Amir, Yair Weiss

Perceptual metrics based on features of deep Convolutional Neural Networks (CNNs) have shown remarkable success when used as loss functions in a range of computer vision problems and significantly outperform classical losses such as L1 or L2 in pixel space.

Perceptual Distance

Posterior Sampling for Image Restoration using Explicit Patch Priors

1 code implementation 20 Apr 2021 Roy Friedman, Yair Weiss

Almost all existing methods for image restoration are based on optimizing the mean squared error (MSE), even though the best estimate in terms of MSE may yield a highly atypical image, because there are many plausible restorations for a given noisy image.

Image Restoration
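The pitfall this abstract describes, that the MSE-optimal estimate can itself be implausible, can be seen with a toy bimodal posterior (a hypothetical 1-D sketch, not the paper's patch-prior method):

```python
import statistics

# Hypothetical illustration: suppose the posterior over a clean pixel
# value is bimodal, with equal mass near -1 and +1 (two equally
# plausible restorations of the same noisy observation).
posterior_samples = [-1.0] * 500 + [1.0] * 500

# The MSE-optimal estimate is the posterior mean...
mmse_estimate = statistics.mean(posterior_samples)

# ...which lands at 0.0, a value the posterior assigns no mass to:
# the "best" estimate in MSE terms is a highly atypical restoration.
print(mmse_estimate)                        # 0.0
print(mmse_estimate in posterior_samples)   # False
```

Sampling from the posterior instead always returns one of the plausible modes, which is the motivation for posterior sampling over MSE optimization.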

A Bayes-Optimal View on Adversarial Examples

no code implementations 20 Feb 2020 Eitan Richardson, Yair Weiss

Since the discovery of adversarial examples - the ability to fool modern CNN classifiers with tiny perturbations of the input - there has been much discussion of whether they are a "bug" that is specific to current neural architectures and training methods or an inevitable "feature" of high dimensional geometry.

Adversarial Attack

Weak lensing shear estimation beyond the shape-noise limit: a machine learning approach

1 code implementation 22 Aug 2018 Ofer M. Springer, Eran O. Ofek, Yair Weiss, Julian Merten

In this work we report on our initial attempt to reduce statistical errors in weak lensing shear estimation using a machine learning approach -- training a multi-layered convolutional neural network to directly estimate the shear given an observed background galaxy image.

Cosmology and Nongalactic Astrophysics

On GANs and GMMs

3 code implementations NeurIPS 2018 Eitan Richardson, Yair Weiss

While GMMs have previously been shown to be successful in modeling small patches of images, we show how to train them on full sized images despite the high dimensionality.

Image Generation

Why do deep convolutional networks generalize so poorly to small image transformations?

4 code implementations ICLR 2019 Aharon Azulay, Yair Weiss

Convolutional Neural Networks (CNNs) are commonly assumed to be invariant to small image transformations: either because of the convolutional architecture or because they were trained using data augmentation.

Data Augmentation Object Recognition
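The failure mode this abstract points to can be illustrated with plain stride-2 subsampling in 1-D (a toy sketch, not the paper's experiments): shifting the input by one sample produces an output that is not any shift of the original output, so the operation is not shift-equivariant.

```python
# Toy 1-D demonstration of how striding breaks shift equivariance.
signal  = [1, 2, 3, 4, 5, 6, 7, 8]
shifted = signal[1:] + signal[:1]   # same signal, shifted by one sample

def downsample(s):
    """Stride-2 subsampling, as used between CNN layers."""
    return s[::2]

out1 = downsample(signal)    # [1, 3, 5, 7]
out2 = downsample(shifted)   # [2, 4, 6, 8]: not a shift of out1
print(out1)
print(out2)
```

A shift-equivariant system would respond to a shifted input with a shifted copy of its original output; here no shift of `[1, 3, 5, 7]` yields `[2, 4, 6, 8]`, mirroring how small translations can change a strided CNN's response.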

Reflection Separation Using Guided Annotation

1 code implementation 20 Feb 2017 Ofer Springer, Yair Weiss

Photographs taken through a glass surface often contain an approximately linear superposition of reflected and transmitted layers.

A Tight Convex Upper Bound on the Likelihood of a Finite Mixture

no code implementations 18 Aug 2016 Elad Mezuman, Yair Weiss

The likelihood function of a finite mixture model is a non-convex function with multiple local maxima and commonly used iterative algorithms such as EM will converge to different solutions depending on initial conditions.
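The initialization sensitivity described here can be reproduced in a few lines (a minimal sketch under simplifying assumptions: a 1-D two-component mixture with fixed unit variances and equal weights, so EM only updates the means):

```python
import math

def em_two_gaussians(data, mu, iters=100):
    """EM for a two-component 1-D Gaussian mixture with fixed unit
    variances and equal weights; only the means are re-estimated."""
    mu1, mu2 = mu
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = math.exp(-0.5 * (x - mu1) ** 2)
            p2 = math.exp(-0.5 * (x - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / sum(1 - ri for ri in r)
    return mu1, mu2

def log_likelihood(data, mu):
    mu1, mu2 = mu
    return sum(
        math.log((0.5 * math.exp(-0.5 * (x - mu1) ** 2)
                  + 0.5 * math.exp(-0.5 * (x - mu2) ** 2))
                 / math.sqrt(2 * math.pi))
        for x in data)

data = [0.0, 0.5, 1.0, 9.0, 9.5, 10.0]   # two well-separated clusters

good = em_two_gaussians(data, (1.0, 9.0))  # init near the true clusters
bad = em_two_gaussians(data, (5.0, 5.0))   # identical-means init: EM is stuck

print(log_likelihood(data, good) > log_likelihood(data, bad))  # True
```

The first initialization converges to means near 0.5 and 9.5; the second is a stationary point of EM with both means at the overall data mean, and its converged log-likelihood is strictly worse, illustrating the non-convexity the abstract describes.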

Beyond Brightness Constancy: Learning Noise Models for Optical Flow

no code implementations 11 Apr 2016 Dan Rosenbaum, Yair Weiss

Consistent with current practice, we find that robust versions of gradient constancy are better models than simple brightness constancy but a learned GMM that models the density of patches of warp error gives a much better fit than any existing assumption of constancy.

Denoising Optical Flow Estimation

Statistics of RGBD Images

no code implementations 11 Apr 2016 Dan Rosenbaum, Yair Weiss

We then use the generative models together with a degradation model and obtain a Bayes Least Squares (BLS) estimator of the D channel given the RGB channels.
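A Bayes Least Squares estimate is the posterior mean of the unknown quantity given the observation. As a minimal sketch (assuming a conjugate Gaussian prior and Gaussian noise, where the BLS estimate has a closed form; the paper itself uses learned generative models of RGBD patches):

```python
# 1-D Bayes Least Squares estimate under a Gaussian prior on the
# unknown d and a Gaussian noise model y = d + noise.
def bls_gaussian(y, prior_mean, prior_var, noise_var):
    """Posterior mean of d given the noisy observation y."""
    w = prior_var / (prior_var + noise_var)   # how much to trust the data
    return w * y + (1 - w) * prior_mean

# A noisy depth reading is pulled toward the prior in proportion
# to the noise level:
print(bls_gaussian(y=4.0, prior_mean=2.0, prior_var=1.0, noise_var=1.0))  # 3.0
```

With very noisy observations the estimate collapses to the prior mean; with clean observations it follows the data. The same posterior-mean principle applies when the prior is a learned model rather than a single Gaussian.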

The Return of the Gating Network: Combining Generative Models and Discriminative Training in Natural Image Priors

no code implementations NeurIPS 2015 Dan Rosenbaum, Yair Weiss

In this paper we show how to combine the strengths of both approaches by training a discriminative, feed-forward architecture to predict the state of latent variables in a generative model of natural images.

Image Restoration

Learning the Local Statistics of Optical Flow

no code implementations NeurIPS 2013 Dan Rosenbaum, Daniel Zoran, Yair Weiss

Motivated by recent progress in natural image statistics, we use newly available datasets with ground truth optical flow to learn the local statistics of optical flow and rigorously compare the learned model to prior models assumed by computer vision optical flow algorithms.

Optical Flow Estimation

Tighter Linear Program Relaxations for High Order Graphical Models

no code implementations 26 Sep 2013 Elad Mezuman, Daniel Tarlow, Amir Globerson, Yair Weiss

In this work, we study the LP relaxations that result from enforcing additional consistency constraints between the HOP and the rest of the model.


Loopy Belief Propagation for Approximate Inference: An Empirical Study

1 code implementation 23 Jan 2013 Kevin Murphy, Yair Weiss, Michael I. Jordan

Recently, researchers have demonstrated that loopy belief propagation - the use of Pearl's polytree algorithm in a Bayesian network with loops - can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of Turbo Codes, codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network.

Learning about Canonical Views from Internet Image Collections

no code implementations NeurIPS 2012 Elad Mezuman, Yair Weiss

Our results clearly show that the most likely view in the search engine corresponds to the same view preferred by human subjects in experiments.

Object Recognition

Natural Images, Gaussian Mixtures and Dead Leaves

no code implementations NeurIPS 2012 Daniel Zoran, Yair Weiss

Simple Gaussian Mixture Models (GMMs) learned from pixels of natural image patches have been recently shown to be surprisingly strong performers in modeling the statistics of natural images.

Denoising

Semi-Supervised Learning in Gigantic Image Collections

no code implementations NeurIPS 2009 Rob Fergus, Yair Weiss, Antonio Torralba

With the advent of the Internet it is now possible to collect hundreds of millions of images.

Spectral Hashing

no code implementations NeurIPS 2008 Yair Weiss, Antonio Torralba, Rob Fergus

Semantic hashing seeks compact binary codes of datapoints so that the Hamming distance between codewords correlates with semantic similarity.

Graph Partitioning Semantic Similarity
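The retrieval idea behind semantic hashing can be sketched in a few lines (illustrative hand-picked codes, not Spectral Hashing itself): items are mapped to compact binary codes, and semantically similar items should end up a small Hamming distance apart.

```python
# Hamming distance between two equal-length binary codes stored as
# integers: XOR the codes, then count the differing bits.
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Hypothetical 8-bit codes for three items:
cat   = 0b10110010
tiger = 0b10110110   # semantically close to cat: 1 differing bit
car   = 0b01001101   # semantically far from cat

print(hamming(cat, tiger))  # 1
print(hamming(cat, car))    # 8
```

Because Hamming distance is computed with a XOR and a popcount, nearest-neighbor lookup over millions of such codes is extremely fast, which is what makes compact binary codes attractive for large-scale retrieval.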

Deriving intrinsic images from image sequences

1 code implementation International Conference on Computer Vision 2001 Yair Weiss

We focus on a slightly easier problem: given a sequence of T images where the reflectance is constant and the illumination changes, can we recover T illumination images and a single reflectance image?
