Search Results for author: Michael Arbel

Found 25 papers, 15 papers with code

Efficient and principled score estimation with Nyström kernel exponential families

1 code implementation 23 May 2017 Danica J. Sutherland, Heiko Strathmann, Michael Arbel, Arthur Gretton

We propose a fast method with statistical guarantees for learning an exponential family density model where the natural parameter is in a reproducing kernel Hilbert space, and may be infinite-dimensional.

Computational Efficiency · Denoising +1
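
For context, the density model in question is the standard kernel exponential family (a schematic recap; the paper's specific contribution is a Nyström approximation that makes the fit fast):

$p_f(x) \propto q_0(x)\,\exp\big(f(x)\big), \qquad f \in \mathcal{H},$

where $q_0$ is a fixed base density and $\mathcal{H}$ is a reproducing kernel Hilbert space. The natural parameter $f$ is fit by score matching, i.e. by minimizing $\mathbb{E}_{p}\big[\tfrac{1}{2}\lVert \nabla_x \log p_f(x) - \nabla_x \log p(x)\rVert^2\big]$, which integration by parts reduces to an objective that does not involve the unknown data score $\nabla_x \log p$.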

Kernel Conditional Exponential Family

1 code implementation 15 Nov 2017 Michael Arbel, Arthur Gretton

A nonparametric family of conditional distributions is introduced, which generalizes conditional exponential families using functional parameters in a suitable RKHS.

Demystifying MMD GANs

7 code implementations ICLR 2018 Mikołaj Bińkowski, Danica J. Sutherland, Michael Arbel, Arthur Gretton

We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs.
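
As a minimal illustration of the critic statistic involved (a sketch only: a fixed Gaussian kernel with a hand-picked bandwidth, whereas MMD GANs learn the kernel adversarially on deep features), the unbiased estimator of the squared MMD between two samples can be computed as follows:

import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of a and b.
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_unbiased(x, y, sigma=1.0):
    # Unbiased estimate of MMD^2 between the samples x (n, d) and y (m, d).
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    n, m = len(x), len(y)
    np.fill_diagonal(kxx, 0.0)  # drop diagonal terms for unbiasedness
    np.fill_diagonal(kyy, 0.0)
    return kxx.sum() / (n * (n - 1)) + kyy.sum() / (m * (m - 1)) - 2 * kxy.mean()

# Example: samples from two Gaussians that differ in mean.
x, y = np.random.randn(500, 2), np.random.randn(500, 2) + 1.0
print(mmd2_unbiased(x, y))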

On gradient regularizers for MMD GANs

1 code implementation NeurIPS 2018 Michael Arbel, Danica J. Sutherland, Mikołaj Bińkowski, Arthur Gretton

We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD).

Image Generation

Maximum Mean Discrepancy Gradient Flow

1 code implementation NeurIPS 2019 Michael Arbel, Anna Korba, Adil Salim, Arthur Gretton

We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and study its convergence properties.
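
A rough particle discretization of this flow is sketched below (illustrative assumptions: a fixed Gaussian kernel, plain explicit Euler steps, and none of the noise-injection scheme analysed in the paper). Source particles descend the witness function between the current particle measure and a fixed target sample, which is the Wasserstein gradient direction of $\tfrac{1}{2}\mathrm{MMD}^2$:

import numpy as np

def witness_gradient(x, y, sigma=1.0):
    # Gradient at each particle x_i of the witness function
    # f(z) = mean_j k(z, x_j) - mean_j k(z, y_j), for a Gaussian kernel.
    def grad_k_mean(z, pts):
        diff = z[:, None, :] - pts[None, :, :]                # (n, m, d)
        k = np.exp(-np.sum(diff**2, -1) / (2 * sigma**2))     # (n, m)
        return -(k[..., None] * diff).mean(1) / sigma**2      # mean_j grad_z k(z, pts_j)
    return grad_k_mean(x, x) - grad_k_mean(x, y)

# Explicit Euler steps of the flow: x <- x - step * grad_witness(x).
x = np.random.randn(200, 2) + 3.0   # source particles
y = np.random.randn(200, 2)         # fixed target sample
for _ in range(1000):
    x -= 0.5 * witness_gradient(x, y, sigma=2.0)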

Kernelized Wasserstein Natural Gradient

1 code implementation ICLR 2020 Michael Arbel, Arthur Gretton, Wuchen Li, Guido Montufar

Many machine learning problems can be expressed as the optimization of some cost functional over a parametric family of probability distributions.

Generalized Energy Based Models

1 code implementation ICLR 2021 Michael Arbel, Liang Zhou, Arthur Gretton

We show that both training stages are well-defined: the energy is learned by maximising a generalized likelihood, and the resulting energy-based loss provides informative gradients for learning the base.

Image Generation

Synchronizing Probability Measures on Rotations via Optimal Transport

no code implementations CVPR 2020 Tolga Birdal, Michael Arbel, Umut Şimşekli, Leonidas Guibas

We introduce a new paradigm, $\textit{measure synchronization}$, for synchronizing graphs with measure-valued edges.

Pose Estimation

A Non-Asymptotic Analysis for Stein Variational Gradient Descent

no code implementations NeurIPS 2020 Anna Korba, Adil Salim, Michael Arbel, Giulia Luise, Arthur Gretton

We study the Stein Variational Gradient Descent (SVGD) algorithm, which optimises a set of particles to approximate a target probability distribution $\pi\propto e^{-V}$ on $\mathbb{R}^d$.

LEMMA
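
As a reminder of the algorithm under analysis (a minimal sketch assuming an RBF kernel with a fixed bandwidth and a closed-form score $\nabla \log \pi$; the paper contributes a non-asymptotic convergence analysis, not an implementation):

import numpy as np

def svgd_step(x, score, step=0.1, sigma=1.0):
    # One Stein Variational Gradient Descent update of the particle set x (n, d).
    # score(z) must return the rows of grad log pi evaluated at z, shape (n, d).
    n = len(x)
    diff = x[:, None, :] - x[None, :, :]                 # (n, n, d)
    k = np.exp(-np.sum(diff**2, -1) / (2 * sigma**2))    # (n, n) RBF kernel matrix
    repulsion = (k[..., None] * diff).sum(1) / sigma**2  # sum_j grad_{x_j} k(x_j, x_i)
    phi = (k @ score(x) + repulsion) / n                 # Stein variational direction
    return x + step * phi

# Example: particles driven towards a standard Gaussian target, pi ∝ exp(-||z||^2 / 2).
x = 3.0 * np.random.randn(200, 2)
for _ in range(300):
    x = svgd_step(x, score=lambda z: -z)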

Estimating Barycenters of Measures in High Dimensions

no code implementations 14 Jul 2020 Samuel Cohen, Michael Arbel, Marc Peter Deisenroth

Barycentric averaging is a principled way of summarizing populations of measures.

Vocal Bursts Intensity Prediction

Efficient Wasserstein Natural Gradients for Reinforcement Learning

1 code implementation ICLR 2021 Ted Moskovitz, Michael Arbel, Ferenc Huszar, Arthur Gretton

A novel optimization approach is proposed for application to policy gradient methods and evolution strategies for reinforcement learning (RL).

Policy Gradient Methods · reinforcement-learning +1

The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods

1 code implementation ICLR 2021 Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon

A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while being more amenable to theoretical analysis.

Object Recognition · Representation Learning

Annealed Flow Transport Monte Carlo

3 code implementations 15 Feb 2021 Michael Arbel, Alexander G. D. G. Matthews, Arnaud Doucet

Annealed Importance Sampling (AIS) and its Sequential Monte Carlo (SMC) extensions are state-of-the-art methods for estimating normalizing constants of probability distributions.
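
For readers less familiar with the AIS baseline, a bare-bones version is sketched below (assumptions: a normalized Gaussian prior, a geometric annealing path, and plain random-walk Metropolis transitions; the paper's point is to combine such SMC steps with learned normalizing-flow transports, which is not shown here):

import numpy as np

def ais_log_z(log_prior, sample_prior, log_target, n_particles=2000, n_temps=100, mh_step=0.5):
    # Annealed Importance Sampling estimate of log Z for a target density
    # proportional to exp(log_target(x)), starting from a normalized prior.
    betas = np.linspace(0.0, 1.0, n_temps)
    x = sample_prior(n_particles)                 # (n, d) particles drawn from the prior
    log_w = np.zeros(n_particles)                 # accumulated log importance weights
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Weight update for moving from temperature b_prev to b.
        log_w += (b - b_prev) * (log_target(x) - log_prior(x))
        # One random-walk Metropolis step leaving the annealed density at b invariant.
        log_anneal = lambda z: (1.0 - b) * log_prior(z) + b * log_target(z)
        prop = x + mh_step * np.random.randn(*x.shape)
        accept = np.log(np.random.rand(n_particles)) < log_anneal(prop) - log_anneal(x)
        x = np.where(accept[:, None], prop, x)
    # log Z is estimated by the log of the average importance weight.
    return np.logaddexp.reduce(log_w) - np.log(n_particles)

# Example: 2-d standard Gaussian prior, unnormalized Gaussian target with variance 4,
# whose true log Z is log(8 * pi) ≈ 3.22.
d = 2
log_prior = lambda x: -0.5 * np.sum(x**2, -1) - 0.5 * d * np.log(2 * np.pi)
log_target = lambda x: -0.5 * np.sum(x**2, -1) / 4.0
print(ais_log_z(log_prior, lambda n: np.random.randn(n, d), log_target))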

KALE Flow: A Relaxed KL Gradient Flow for Probabilities with Disjoint Support

1 code implementation NeurIPS 2021 Pierre Glaser, Michael Arbel, Arthur Gretton

We study the gradient flow for a relaxed approximation to the Kullback-Leibler (KL) divergence between a moving source and a fixed target distribution.

Towards an Understanding of Default Policies in Multitask Policy Optimization

no code implementations 4 Nov 2021 Ted Moskovitz, Michael Arbel, Jack Parker-Holder, Aldo Pacchiano

Much of the recent success of deep reinforcement learning has been driven by regularized policy optimization (RPO) algorithms with strong performance across multiple domains.

Amortized Implicit Differentiation for Stochastic Bilevel Optimization

no code implementations ICLR 2022 Michael Arbel, Julien Mairal

We study a class of algorithms for solving bilevel optimization problems in both stochastic and deterministic settings when the inner-level objective is strongly convex.

Bilevel Optimization
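
For reference, the generic problem class and the implicit-differentiation identity such algorithms build on (standard background, not the paper's specific amortized scheme):

$\min_{x} F(x) := f\big(x, y^\star(x)\big), \qquad y^\star(x) = \arg\min_{y} g(x, y),$

with $g(x, \cdot)$ strongly convex, so that by the implicit function theorem

$\nabla F(x) = \nabla_x f(x, y^\star) - \nabla^2_{xy} g(x, y^\star)\,\big[\nabla^2_{yy} g(x, y^\star)\big]^{-1} \nabla_y f(x, y^\star).$

Stochastic bilevel methods then differ mainly in how they approximate $y^\star$ and the inverse-Hessian-vector product from noisy samples.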

Continual Repeated Annealed Flow Transport Monte Carlo

2 code implementations 31 Jan 2022 Alexander G. D. G. Matthews, Michael Arbel, Danilo J. Rezende, Arnaud Doucet

We propose Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), a method that combines a sequential Monte Carlo (SMC) sampler (itself a generalization of Annealed Importance Sampling) with variational inference using normalizing flows.

Variational Inference

Maximum Likelihood Learning of Unnormalized Models for Simulation-Based Inference

1 code implementation 26 Oct 2022 Pierre Glaser, Michael Arbel, Samo Hromadka, Arnaud Doucet, Arthur Gretton

We introduce two synthetic likelihood methods for Simulation-Based Inference (SBI), to conduct either amortized or targeted inference from experimental observations when a high-fidelity simulator is available.

Rethinking Gauss-Newton for learning over-parameterized models

no code implementations NeurIPS 2023 Michael Arbel, Romain Menegaux, Pierre Wolinski

This work studies the global convergence and implicit bias of the Gauss-Newton (GN) method when optimizing over-parameterized one-hidden-layer networks in the mean-field regime.
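
As background, the generic Gauss-Newton step for a least-squares loss $L(\theta) = \tfrac{1}{2}\lVert r(\theta)\rVert^2$ with residual Jacobian $J = \partial r / \partial \theta$ is

$\theta_{t+1} = \theta_t - \eta\,\big(J^\top J + \lambda I\big)^{-1} J^\top r(\theta_t),$

i.e. a Newton step with the Hessian replaced by its positive semi-definite approximation $J^\top J$; in the over-parameterized setting $J^\top J$ is rank-deficient, so a damping term $\lambda I$ or a pseudo-inverse is used in practice.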

SLACK: Stable Learning of Augmentations with Cold-start and KL regularization

no code implementations CVPR 2023 Juliette Marrie, Michael Arbel, Diane Larlus, Julien Mairal

Data augmentation is known to improve the generalization capabilities of neural networks, provided that the set of transformations is chosen with care, a selection often performed manually.

Bilevel Optimization · Data Augmentation

MLXP: A framework for conducting replicable Machine Learning eXperiments in Python

1 code implementation 21 Feb 2024 Michael Arbel, Alexandre Zouaoui

Replicability in machine learning (ML) research is increasingly concerning due to the utilization of complex non-deterministic algorithms and the dependence on numerous hyper-parameter choices, such as model architecture and training datasets.

Management

Functional Bilevel Optimization for Machine Learning

no code implementations 29 Mar 2024 Ieva Petrulionyte, Julien Mairal, Michael Arbel

In this paper, we introduce a new functional point of view on bilevel optimization problems for machine learning, where the inner objective is minimized over a function space.

Bilevel Optimization
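
Schematically, and in the terms of the snippet above, the functional formulation replaces the parametric inner problem by a minimization over a function space $\mathcal{H}$:

$\min_{x}\; f\big(x, h^\star_x\big) \quad \text{subject to} \quad h^\star_x \in \arg\min_{h \in \mathcal{H}} g(x, h),$

so that implicit differentiation is taken with respect to the prediction function $h$ itself rather than the weights of whichever network parameterizes it.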
