Search Results for author: Martin Arjovsky

Found 15 papers, 9 papers with code

Why does Throwing Away Data Improve Worst-Group Error?

no code implementations • 23 May 2022 • Kamalika Chaudhuri, Kartik Ahuja, Martin Arjovsky, David Lopez-Paz

When facing data with imbalanced classes or groups, practitioners follow an intriguing strategy to achieve the best results: they throw away data from the larger classes or groups.

Fairness • Imbalanced Classification +1

Out of Distribution Generalization in Machine Learning

1 code implementation • 3 Mar 2021 • Martin Arjovsky

A central topic in the thesis is the strong link between discovering the causal structure of the data, finding features that are reliable (when using them to predict) regardless of their context, and out of distribution generalization.

BIG-bench Machine Learning • Out-of-Distribution Generalization

Linear unit-tests for invariance discovery

2 code implementations • 22 Feb 2021 • Benjamin Aubin, Agnieszka Słowik, Martin Arjovsky, Leon Bottou, David Lopez-Paz

There is increasing interest in algorithms that learn invariant correlations across training environments.

Out-of-Distribution Generalization

Low Distortion Block-Resampling with Spatially Stochastic Networks

no code implementations • NeurIPS 2020 • Sarah Jane Hong, Martin Arjovsky, Darryl Barnhart, Ian Thompson

We formalize and attack the problem of generating new images from old ones that are as diverse as possible, allowing unrestricted changes only in certain parts of the image while keeping the result globally consistent.

Symplectic Recurrent Neural Networks

1 code implementation • ICLR 2020 • Zhengdao Chen, Jianyu Zhang, Martin Arjovsky, Léon Bottou

We propose Symplectic Recurrent Neural Networks (SRNNs) as learning algorithms that capture the dynamics of physical systems from observed trajectories.
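As a rough illustration (not the paper's implementation), a symplectic integrator such as leapfrog is the kind of structure-preserving update an SRNN can backpropagate through; the function names and the separable Hamiltonian H = p²/2m + V(q) are assumptions for this sketch:

```python
import numpy as np

def leapfrog_step(q, p, grad_potential, dt, mass=1.0):
    # One leapfrog step for a separable Hamiltonian H = p^2 / (2m) + V(q).
    # Symplectic updates like this approximately conserve energy over long
    # rollouts, which is why they suit learning physical dynamics.
    p_half = p - 0.5 * dt * grad_potential(q)
    q_new = q + dt * p_half / mass
    p_new = p_half - 0.5 * dt * grad_potential(q_new)
    return q_new, p_new
```

For a harmonic oscillator (grad V(q) = q), iterating this step keeps the total energy close to its initial value, unlike a naive Euler integrator.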

Invariant Risk Minimization

14 code implementations • 5 Jul 2019 • Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, David Lopez-Paz

We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions.

Domain Generalization • Image Classification +1
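As a minimal sketch of the idea (assuming a squared loss and the practical IRMv1 variant, which penalizes the gradient of each environment's risk with respect to a dummy scale w = 1.0), the objective could look like:

```python
import numpy as np

def irm_penalty(preds, targets):
    # Gradient of the squared-error risk of (w * preds) w.r.t. the dummy
    # scale w, evaluated at w = 1.0; the penalty is its square.
    grad = np.mean(2.0 * preds * (preds - targets))
    return grad ** 2

def irm_objective(envs, lam=1.0):
    # envs: list of (preds, targets) pairs, one per training environment.
    # Average risk plus a penalty that is zero only when the classifier
    # is simultaneously optimal in every environment.
    risk = np.mean([np.mean((p - t) ** 2) for p, t in envs])
    penalty = np.mean([irm_penalty(p, t) for p, t in envs])
    return risk + lam * penalty
```

A large `lam` pushes the solution toward predictors whose optimal readout is the same across environments, which is the invariance the paper seeks.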

Geometrical Insights for Implicit Generative Modeling

no code implementations • 21 Dec 2017 • Leon Bottou, Martin Arjovsky, David Lopez-Paz, Maxime Oquab

Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean Discrepancy criterion.
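Of the criteria mentioned, the 1-Wasserstein distance has a particularly simple closed form in one dimension. As an illustrative sketch (not from the paper), for two empirical distributions with equal sample counts it reduces to comparing sorted samples:

```python
import numpy as np

def wasserstein_1d(samples_a, samples_b):
    # For 1-D empirical distributions with the same number of samples,
    # the optimal transport plan matches sorted samples pairwise, so the
    # 1-Wasserstein distance is the mean absolute difference after sorting.
    return np.mean(np.abs(np.sort(samples_a) - np.sort(samples_b)))
```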

Wasserstein Generative Adversarial Networks

no code implementations • ICML 2017 • Martin Arjovsky, Soumith Chintala, Léon Bottou

We introduce a new algorithm named WGAN, an alternative to traditional GAN training.

Wasserstein GAN

120 code implementations • 26 Jan 2017 • Martin Arjovsky, Soumith Chintala, Léon Bottou

We introduce a new algorithm named WGAN, an alternative to traditional GAN training.

Image Generation • Synthetic Data Generation
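As a bare-bones sketch of the two pieces that distinguish WGAN training (the function names are illustrative): the critic maximizes the gap between its scores on real and generated samples, and the original paper enforces the required Lipschitz constraint by clipping the critic's weights:

```python
import numpy as np

def critic_loss(critic_real, critic_fake):
    # The WGAN critic maximizes E[D(x)] - E[D(g(z))]; written as a loss to
    # minimize, that objective is negated.
    return -(np.mean(critic_real) - np.mean(critic_fake))

def clip_weights(weights, c=0.01):
    # The original WGAN keeps the critic approximately Lipschitz by clipping
    # every weight into [-c, c] after each optimizer step.
    return [np.clip(w, -c, c) for w in weights]
```

Later work replaced weight clipping with a gradient penalty, but the clipped critic above is what this paper proposes.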

Towards Principled Methods for Training Generative Adversarial Networks

no code implementations • 17 Jan 2017 • Martin Arjovsky, Léon Bottou

The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks.

Adversarially Learned Inference

9 code implementations • 2 Jun 2016 • Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, Aaron Courville

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process.

Image-to-Image Translation

Unitary Evolution Recurrent Neural Networks

2 code implementations • 20 Nov 2015 • Martin Arjovsky, Amar Shah, Yoshua Bengio

When the eigenvalues of the hidden-to-hidden weight matrix deviate from absolute value 1, optimization becomes difficult due to the well-studied problem of vanishing and exploding gradients, especially when trying to learn long-term dependencies.

Sequential Image Classification
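The effect of those eigenvalues can be illustrated with a one-line model (a simplification, not the paper's construction): through T recurrent steps, the gradient norm scales roughly like |λ|^T for an eigenvalue λ of the recurrent matrix:

```python
def gradient_norm_after(steps, eigenvalue):
    # Backpropagating through `steps` applications of the recurrent weight
    # matrix multiplies the gradient by roughly |eigenvalue| per step.
    return abs(eigenvalue) ** steps

vanish = gradient_norm_after(100, 0.9)   # shrinks toward 0
explode = gradient_norm_after(100, 1.1)  # blows up
unitary = gradient_norm_after(100, 1.0)  # stays at 1, the unitary case
```

Constraining the recurrent matrix to be unitary pins every eigenvalue to absolute value 1, which is why these networks can carry gradients over long horizons.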

Saddle-free Hessian-free Optimization

no code implementations • 30 May 2015 • Martin Arjovsky

Nonconvex optimization problems such as the ones in training deep neural networks suffer from a phenomenon called saddle point proliferation.

Second-order methods
