Search Results for author: Adam Oberman

Found 14 papers, 4 papers with code

Addressing Sample Inefficiency in Multi-View Representation Learning

no code implementations · 17 Dec 2023 · Kumar Krishna Agrawal, Arna Ghosh, Adam Oberman, Blake Richards

In this work, we provide theoretical insights into the implicit bias of the Barlow Twins and VICReg losses that can explain these heuristics and guide the development of more principled recommendations.

Representation Learning · Self-Supervised Learning
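
For context, a minimal sketch of the standard Barlow Twins loss that the paper analyzes; the implicit-bias analysis itself is in the paper, and `lam` here is the off-diagonal trade-off hyperparameter the heuristics in question concern:

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins loss: decorrelate embedding dimensions across two views.

    z1, z2: (batch, dim) embeddings of two augmented views.
    lam: weight on the off-diagonal (redundancy-reduction) term.
    """
    n, d = z1.shape
    # Standardize each embedding dimension over the batch.
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    # Cross-correlation matrix between the two views.
    c = (z1.T @ z2) / n                                          # (dim, dim)
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()               # invariance term
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()   # redundancy term
    return on_diag + lam * off_diag
```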

EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models

no code implementations · 22 Dec 2022 · Xinlin Li, Mariana Parazeres, Adam Oberman, Alireza Ghaffari, Masoud Asgharian, Vahid Partovi Nia

With the advent of deep learning applications on edge devices, researchers actively try to optimize their deployment on low-power, memory-constrained devices.

Quantization
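
A rough sketch of the kind of substitution the title refers to, under the assumption that the "alternative operation" accumulates a squared Euclidean distance in place of the products of a standard linear layer; the paper's exact operation and scaling may differ:

```python
import torch

def euclid_linear(x, w):
    """Hypothetical EuclidNets-style layer: accumulate -0.5 * (x - w)^2
    in place of the products x * w of a standard linear layer.
    Note -0.5 * ||x - w||^2 = x.w - 0.5 * (||x||^2 + ||w||^2), so the
    operation is an inner product up to norm corrections.

    x: (batch, in_features), w: (out_features, in_features).
    """
    diff = x.unsqueeze(1) - w.unsqueeze(0)   # (batch, out, in) differences
    return -0.5 * (diff ** 2).sum(-1)        # (batch, out)
```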

Score-based Denoising Diffusion with Non-Isotropic Gaussian Noise Models

no code implementations · 21 Oct 2022 · Vikram Voleti, Christopher Pal, Adam Oberman

Generative models based on denoising diffusion techniques have led to an unprecedented increase in the quality and diversity of imagery that can now be created with neural generative models.

Denoising · Diversity
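
A minimal sketch of what a non-isotropic forward noising step could look like: the usual DDPM-style process with N(0, I) replaced by N(0, Sigma). The parameterization via a covariance square root is an assumption for illustration; the paper's exact formulation may differ:

```python
import torch

def forward_diffuse(x0, alpha_bar_t, Sigma_sqrt):
    """Draw x_t under a non-isotropic Gaussian noise model:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * Sigma^{1/2} * eps.

    x0: (batch, d) clean samples; alpha_bar_t: scalar noise schedule value;
    Sigma_sqrt: (d, d) square root of the noise covariance (assumed form).
    """
    eps = torch.randn_like(x0)
    colored = eps @ Sigma_sqrt.T   # correlated (non-isotropic) noise
    return (alpha_bar_t ** 0.5) * x0 + ((1 - alpha_bar_t) ** 0.5) * colored
```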

A Reproducible and Realistic Evaluation of Partial Domain Adaptation Methods

no code implementations · 3 Oct 2022 · Tiago Salvador, Kilian Fatras, Ioannis Mitliagkas, Adam Oberman

In this work, we consider the Partial Domain Adaptation (PDA) variant, where we have extra source classes not present in the target domain.

Model Selection · Partial Domain Adaptation · +1

On the Generalization of Representations in Reinforcement Learning

1 code implementation · 1 Mar 2022 · Charline Le Lan, Stephen Tu, Adam Oberman, Rishabh Agarwal, Marc G. Bellemare

We complement our theoretical results with an empirical survey of classic representation learning methods from the literature, together with experiments on the Arcade Learning Environment, and find that the generalization behaviour of learned representations is well explained by their effective dimension.

Atari Games · reinforcement-learning · +3
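
The paper gives effective dimension a precise definition; as a rough, commonly used surrogate, one can compute the participation ratio of the spectrum of the feature matrix, sketched below (a proxy, not necessarily the paper's exact quantity):

```python
import numpy as np

def participation_ratio(features):
    """Proxy for the effective dimension of a representation:
    (sum of eigenvalues)^2 / sum of squared eigenvalues of the
    centered feature covariance.

    features: (n_states, d) matrix of learned state representations.
    """
    phi = features - features.mean(0)           # center each feature
    s = np.linalg.svd(phi, compute_uv=False)    # singular values
    ev = s ** 2                                 # covariance eigenvalues (up to 1/n)
    return ev.sum() ** 2 / (ev ** 2).sum()
```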

Multi-Resolution Continuous Normalizing Flows

1 code implementation · 15 Jun 2021 · Vikram Voleti, Chris Finlay, Adam Oberman, Christopher Pal

In this work, we introduce a Multi-Resolution variant of such models (MRCNF) by characterizing the conditional distribution over the additional information required to generate a fine image that is consistent with the coarse image.

Ranked #8 on Image Generation on ImageNet 64x64 (Bits per dim metric)

Density Estimation · Image Generation
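
To make the coarse/fine split concrete, here is an illustrative multi-resolution decomposition: a coarse image plus the residual detail needed to reconstruct the fine image exactly. This is a sketch of the general idea; MRCNF's actual transform may differ:

```python
import torch
import torch.nn.functional as F

def coarse_and_detail(img):
    """Split an image into a coarse version and the extra information
    ("detail") needed to recover the fine image consistently.

    img: (batch, channels, H, W) with H, W even.
    """
    coarse = F.avg_pool2d(img, 2)                      # (B, C, H/2, W/2)
    upsampled = F.interpolate(coarse, scale_factor=2)  # nearest-neighbor upsample
    detail = img - upsampled                           # additional information
    return coarse, detail  # img == upsampled + detail by construction
```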

Frustratingly Easy Uncertainty Estimation for Distribution Shift

no code implementations · 7 Jun 2021 · Tiago Salvador, Vikram Voleti, Alexander Iannantuono, Adam Oberman

While the primary goal is to improve accuracy under distribution shift, an important secondary goal is uncertainty estimation: evaluating the probability that a model's prediction is correct.

Image Classification · Unsupervised Domain Adaptation
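
Uncertainty estimation in this sense is usually scored with calibration metrics. A minimal expected-calibration-error sketch follows; this is a standard evaluation metric, not necessarily the paper's proposed method:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Expected Calibration Error: average |accuracy - confidence| over
    confidence bins, weighted by bin population.

    confidences: (n,) max predicted probabilities.
    correct: (n,) booleans, prediction == label.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```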

FairCal: Fairness Calibration for Face Verification

no code implementations · ICLR 2022 · Tiago Salvador, Stephanie Cairns, Vikram Voleti, Noah Marshall, Adam Oberman

However, they still have drawbacks: they either reduce accuracy (AGENDA, PASS, FTC) or require retuning for different false positive rates (FSN).

Attribute · Face Recognition · +2

A principled approach for generating adversarial images under non-smooth dissimilarity metrics

2 code implementations · 5 Aug 2019 · Aram-Alexandre Pooladian, Chris Finlay, Tim Hoheisel, Adam Oberman

This includes, but is not limited to, $\ell_1, \ell_2$, and $\ell_\infty$ perturbations; the $\ell_0$ counting "norm" (i.e., true sparseness); and the total variation seminorm, which is a (non-$\ell_p$) convolutional dissimilarity measuring local pixel changes.

Adversarial Attack
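
The dissimilarity measures listed in the abstract are all easy to compute for a given perturbation; a small illustration for a grayscale image pair:

```python
import numpy as np

def dissimilarities(x, x_adv):
    """Perturbation sizes under the metrics named in the abstract.

    x, x_adv: (H, W) grayscale images (illustration only).
    """
    p = x_adv - x
    d = p.ravel()
    return {
        "l1": np.abs(d).sum(),
        "l2": np.sqrt((d ** 2).sum()),
        "linf": np.abs(d).max(),
        "l0": (d != 0).sum(),   # counting "norm": true sparseness
        # Total variation seminorm of the perturbation: local pixel changes.
        "tv": np.abs(np.diff(p, axis=0)).sum() + np.abs(np.diff(p, axis=1)).sum(),
    }
```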

Improved robustness to adversarial examples using Lipschitz regularization of the loss

1 code implementation · ICLR 2019 · Chris Finlay, Adam Oberman, Bilal Abbasi

We augment adversarial training (AT) with worst-case adversarial training (WCAT), which improves adversarial robustness by 11% over the current state-of-the-art result in the $\ell_2$ norm on CIFAR-10.

Adversarial Robustness
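
As I read the abstract, the worst-case term can be approximated to first order: the worst $\ell_2$ perturbation of size $\epsilon$ raises the loss by roughly $\epsilon\,\lVert\nabla_x \ell\rVert$. A sketch of a loss penalized that way (an approximation of the idea, not the paper's exact formulation):

```python
import torch

def worst_case_penalized_loss(model, loss_fn, x, y, eps=0.1):
    """Loss plus a first-order worst-case term: eps * ||grad_x loss||_2,
    averaged over the batch. Hypothetical sketch of a WCAT-style objective.
    """
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)                              # scalar (mean) loss
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad.flatten(1).norm(dim=1).mean()             # per-example l2 norm
    return loss + eps * penalty
```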

Lipschitz regularized Deep Neural Networks generalize and are adversarially robust

no code implementations · 28 Aug 2018 · Chris Finlay, Jeff Calder, Bilal Abbasi, Adam Oberman

In this work we study input gradient regularization of deep neural networks, and demonstrate that such regularization leads to generalization proofs and improved adversarial robustness.

Adversarial Robustness
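
Schematically, input gradient regularization penalizes the norm of the loss gradient with respect to the input; the weight $\lambda$ and the choice of norm below are illustrative, not the paper's exact objective:

```latex
\min_{\theta}\ \mathbb{E}_{(x,y)}\Big[\,\ell\big(f_\theta(x),\,y\big)
  \;+\; \lambda\,\big\lVert \nabla_x\,\ell\big(f_\theta(x),\,y\big)\big\rVert\,\Big]
```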

Stochastic Backward Euler: An Implicit Gradient Descent Algorithm for $k$-means Clustering

no code implementations · 21 Oct 2017 · Penghang Yin, Minh Pham, Adam Oberman, Stanley Osher

In this paper, we propose an implicit gradient descent algorithm for the classic $k$-means problem.

Clustering
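
An implicit (backward Euler) gradient step evaluates the gradient at the *new* iterate, $C^{+} = C - \gamma \nabla f(C^{+})$, which can be solved by a fixed-point iteration. A schematic for k-means centroids follows; the paper's stochastic variant and step-size schedule add more structure:

```python
import numpy as np

def backward_euler_kmeans_step(X, C, gamma=0.5, inner_iters=20):
    """One implicit gradient step on the k-means objective, solved by
    fixed-point iteration: C_new = C - gamma * grad f(C_new).

    X: (n, d) data; C: (k, d) current centroids.
    """
    C_new = C.copy()
    for _ in range(inner_iters):
        # Assignments and gradient are evaluated at the NEW centroids.
        assign = ((X[:, None] - C_new[None]) ** 2).sum(-1).argmin(1)
        grad = np.zeros_like(C_new)
        for j in range(len(C_new)):
            pts = X[assign == j]
            if len(pts):
                grad[j] = C_new[j] - pts.mean(0)  # mean-normalized cluster gradient
        C_new = C - gamma * grad                  # fixed-point update
    return C_new
```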

Parle: parallelizing stochastic gradient descent

no code implementations · 3 Jul 2017 · Pratik Chaudhari, Carlo Baldassi, Riccardo Zecchina, Stefano Soatto, Ameet Talwalkar, Adam Oberman

We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters.
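
My reading of the elastic-averaging idea behind this kind of parallel training, sketched as one communication round: each replica descends its own loss plus a proximal pull toward a master copy, and the master drifts toward the replica average. The coupling constant `rho` and update form are illustrative; the paper's algorithm differs in detail:

```python
import numpy as np

def parle_style_round(workers, master, grads, lr=0.1, rho=0.01):
    """One round of proximally coupled replica updates (schematic).

    workers: list of (d,) parameter vectors, one per replica.
    grads: matching list of stochastic gradients.
    master: (d,) shared reference parameters.
    """
    for i, g in enumerate(grads):
        # Replica step: own gradient plus pull toward the master.
        workers[i] = workers[i] - lr * (g + rho * (workers[i] - master))
    # Master step: move toward the average of the replicas.
    master = master + rho * (np.mean(workers, axis=0) - master)
    return workers, master
```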

Deep Relaxation: partial differential equations for optimizing deep neural networks

no code implementations · 17 Apr 2017 · Pratik Chaudhari, Adam Oberman, Stanley Osher, Stefano Soatto, Guillaume Carlier

In this paper we establish a connection between non-convex optimization methods for training deep neural networks and nonlinear partial differential equations (PDEs).
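
One concrete instance of the connection, as I recall this line of work: the local-entropy smoothing $u(x,t)$ of the loss $f$ solves a viscous Hamilton-Jacobi equation, with the loss as initial data (stated here from memory; consult the paper for the precise form):

```latex
u_t = -\tfrac{1}{2}\,\lvert \nabla u \rvert^{2} + \tfrac{\beta^{-1}}{2}\,\Delta u,
\qquad u(x,0) = f(x)
```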
