Search Results for author: Gauthier Gidel

Found 50 papers, 21 papers with code

Synaptic Weight Distributions Depend on the Geometry of Plasticity

no code implementations 30 May 2023 Roman Pogodin, Jonathan Cornford, Arna Ghosh, Gauthier Gidel, Guillaume Lajoie, Blake Richards

Overall, this work shows that the current paradigm in theoretical work on synaptic plasticity, which assumes Euclidean synaptic geometry, may be misguided, and that it should be possible to experimentally determine the true geometry of synaptic plasticity in the brain.

Raising the Bar for Certified Adversarial Robustness with Diffusion Models

no code implementations 17 May 2023 Thomas Altstidl, David Dobre, Björn Eskofier, Gauthier Gidel, Leo Schwinn

In this work, we demonstrate that a similar approach can substantially improve deterministic certified defenses.

Adversarial Robustness

Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features

no code implementations 23 Apr 2023 Aleksandr Beznosikov, David Dobre, Gauthier Gidel

Moreover, our second approach does not require either large batches or full deterministic gradients, which is a typical weakness of many techniques for finite-sum problems.

Performative Prediction with Neural Networks

1 code implementation 14 Apr 2023 Mehrnaz Mofakhami, Ioannis Mitliagkas, Gauthier Gidel

In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems.

Feature Likelihood Score: Evaluating Generalization of Generative Models Using Samples

1 code implementation 9 Feb 2023 Marco Jiralerspong, Avishek Joey Bose, Ian Gemp, Chongli Qin, Yoram Bachrach, Gauthier Gidel

The past few years have seen impressive progress in the development of deep generative models capable of producing high-dimensional, complex, and photo-realistic data.

Density Estimation

When is Momentum Extragradient Optimal? A Polynomial-Based Analysis

no code implementations 9 Nov 2022 Junhyung Lyle Kim, Gauthier Gidel, Anastasios Kyrillidis, Fabian Pedregosa

The extragradient method has recently gained increasing attention due to its convergence behavior on smooth games.
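
For readers unfamiliar with the method, here is a minimal sketch of the plain extragradient step on a toy bilinear game (the momentum variant analyzed in the paper is not shown; all names and parameter values are illustrative):

```python
def extragradient(grad_x, grad_y, x, y, step=0.1, iters=1500):
    """Plain extragradient for min_x max_y f(x, y): take an extrapolation
    (lookahead) step, then update using gradients at the extrapolated point."""
    for _ in range(iters):
        x_half = x - step * grad_x(x, y)       # extrapolation step
        y_half = y + step * grad_y(x, y)
        x = x - step * grad_x(x_half, y_half)  # update at the lookahead point
        y = y + step * grad_y(x_half, y_half)
    return x, y

# Toy bilinear game min_x max_y x*y; its unique equilibrium is (0, 0),
# where simultaneous gradient descent-ascent would spiral outward.
x, y = extragradient(lambda x, y: y, lambda x, y: x, x=1.0, y=1.0)
```

The lookahead gradient is what damps the rotation that makes plain gradient descent-ascent diverge on such games.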

Nesterov Meets Optimism: Rate-Optimal Optimistic-Gradient-Based Method for Stochastic Bilinearly-Coupled Minimax Optimization

no code implementations 31 Oct 2022 Chris Junchi Li, Angela Yuan, Gauthier Gidel, Michael I. Jordan

We provide a novel first-order optimization algorithm for bilinearly-coupled strongly-convex-concave minimax optimization called Accelerated Gradient-Optimistic Gradient (AG-OG).

Dissecting adaptive methods in GANs

no code implementations 9 Oct 2022 Samy Jelassi, David Dobre, Arthur Mensch, Yuanzhi Li, Gauthier Gidel

By considering an update rule with the magnitude of the Adam update and the normalized direction of SGD, we empirically show that the adaptive magnitude of Adam is key for GAN training.
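The update rule the authors describe (Adam's step magnitude combined with the normalized SGD direction) can be sketched roughly as follows; this is a simplified single-tensor version without bias correction, and all names are illustrative rather than the authors' code:

```python
import numpy as np

def hybrid_update(param, grad, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Take a step with the *magnitude* of the Adam update but the
    *direction* of the normalized stochastic gradient."""
    m = b1 * m + (1 - b1) * grad        # Adam first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2   # Adam second-moment estimate
    adam_step = lr * m / (np.sqrt(v) + eps)
    magnitude = np.linalg.norm(adam_step)            # Adam's adaptive step size
    direction = grad / (np.linalg.norm(grad) + eps)  # SGD's normalized direction
    return param - magnitude * direction, m, v
```

Decoupling the two factors like this is what lets one test which of them (magnitude or direction) drives GAN training performance.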

The Curse of Unrolling: Rate of Differentiating Through Optimization

no code implementations 27 Sep 2022 Damien Scieur, Quentin Bertrand, Gauthier Gidel, Fabian Pedregosa

Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few.

Hyperparameter Optimization Meta-Learning +1

Generating Diverse Vocal Bursts with StyleGAN2 and MEL-Spectrograms

1 code implementation 25 Jun 2022 Marco Jiralerspong, Gauthier Gidel

We describe our approach for the generative emotional vocal burst task (ExVo Generate) of the ICML Expressive Vocalizations Competition.


On the Limitations of Elo: Real-World Games are Transitive, not Additive

1 code implementation 21 Jun 2022 Quentin Bertrand, Wojciech Marian Czarnecki, Gauthier Gidel

In this study, we investigate the challenge of identifying the strength of the transitive component in games.
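For context, Elo is a one-dimensional (purely transitive) rating model: the predicted win probability depends only on the rating difference. A standard sketch of its expected-score formula:

```python
def elo_expected_score(r_a, r_b):
    """Elo's predicted probability that player A beats player B.
    The prediction depends only on the difference r_a - r_b, which is
    exactly the transitivity assumption the paper examines."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Equal ratings predict a 50% win rate; a 400-point edge predicts ~91%.
```

Any cyclic (non-transitive) interaction between strategies, such as rock-paper-scissors dynamics, is invisible to a model of this form.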

Starcraft Starcraft II

Only Tails Matter: Average-Case Universality and Robustness in the Convex Regime

no code implementations 20 Jun 2022 Leonardo Cunha, Gauthier Gidel, Fabian Pedregosa, Damien Scieur, Courtney Paquette

The recently developed average-case analysis of optimization methods allows a more fine-grained and representative convergence analysis than usual worst-case results.

Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization

no code implementations 17 Jun 2022 Simon S. Du, Gauthier Gidel, Michael I. Jordan, Chris Junchi Li

We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}}~F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to stochastic first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.

A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis

no code implementations 9 Jun 2022 Damien Ferbach, Christos Tsirigotis, Gauthier Gidel, Avishek Joey Bose

In this paper, we generalize the SLTH to functions that preserve the action of the group $G$ -- i.e., $G$-equivariant networks -- and prove, with high probability, that one can approximate any $G$-equivariant network of fixed width and depth by pruning a randomly initialized overparametrized $G$-equivariant network to a $G$-equivariant subnetwork.


Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise

1 code implementation 2 Jun 2022 Eduard Gorbunov, Marina Danilova, David Dobre, Pavel Dvurechensky, Alexander Gasnikov, Gauthier Gidel

In this work, we prove the first high-probability complexity results with logarithmic dependence on the confidence level for stochastic methods for solving monotone and structured non-monotone VIPs with non-sub-Gaussian (heavy-tailed) noise and unbounded domains.

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top

1 code implementation 1 Jun 2022 Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel

However, many fruitful directions, such as the usage of variance reduction for achieving robustness and communication compression for reducing communication costs, remain weakly explored in the field.

Federated Learning

Beyond L1: Faster and Better Sparse Models with skglm

2 code implementations 16 Apr 2022 Quentin Bertrand, Quentin Klopfenstein, Pierre-Antoine Bannier, Gauthier Gidel, Mathurin Massias

We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties.

Stochastic Extragradient: General Analysis and Improved Rates

1 code implementation 16 Nov 2021 Eduard Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou

The Stochastic Extragradient (SEG) method is one of the most popular algorithms for solving min-max optimization and variational inequalities problems (VIP) appearing in various machine learning tasks.

Generating Diverse Realistic Laughter for Interactive Art

no code implementations 4 Nov 2021 M. Mehdi Afsar, Eric Park, Étienne Paquette, Gauthier Gidel, Kory W. Mathewson, Eilif Muller

We propose an interactive art project to make those rendered invisible by the COVID-19 crisis and its concomitant solitude reappear through the welcome melody of laughter, and connections created and explored through advanced laughter synthesis approaches.

Convergence Analysis and Implicit Regularization of Feedback Alignment for Deep Linear Networks

no code implementations 20 Oct 2021 Manuela Girotti, Ioannis Mitliagkas, Gauthier Gidel

We theoretically analyze the Feedback Alignment (FA) algorithm, an efficient alternative to backpropagation for training neural networks.

Incremental Learning

Extragradient Method: $O(1/K)$ Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity

1 code implementation 8 Oct 2021 Eduard Gorbunov, Nicolas Loizou, Gauthier Gidel

In this paper, we resolve one such question and derive the first last-iterate $O(1/K)$ convergence rate for EG for monotone and Lipschitz VIP without any additional assumptions on the operator, unlike the only known result of this type (Golowich et al., 2020), which relies on the Lipschitzness of the Jacobian of the operator.

Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity

no code implementations 8 Oct 2021 Marta Garnelo, Wojciech Marian Czarnecki, SiQi Liu, Dhruva Tirumala, Junhyuk Oh, Gauthier Gidel, Hado van Hasselt, David Balduzzi

Strategic diversity is often essential in games: in multi-player games, for example, evaluating a player against a diverse set of strategies will yield a more accurate estimate of its performance.

A Distributional Robustness Perspective on Adversarial Training with the $\infty$-Wasserstein Distance

no code implementations 29 Sep 2021 Chiara Regniez, Gauthier Gidel, Hugo Berard

We show a formal connection between our formulation and optimal transport by relaxing AT into a DRO problem with an $\infty$-Wasserstein constraint.

Adam is no better than normalized SGD: Dissecting how adaptivity improves GAN performance

no code implementations 29 Sep 2021 Samy Jelassi, Arthur Mensch, Gauthier Gidel, Yuanzhi Li

We empirically show that SGDA with the same vector norm as Adam reaches similar or even better performance than the latter.

Generalized Natural Gradient Flows in Hidden Convex-Concave Games and GANs

no code implementations ICLR 2022 Andjela Mladenovic, Iosif Sakos, Gauthier Gidel, Georgios Piliouras

In the case of Fisher information geometry, we provide a complete picture of the dynamics in an interesting special setting of team competition via invariant function analysis.

Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity

1 code implementation NeurIPS 2021 Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, Simon Lacoste-Julien

Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017].

On the Convergence of Stochastic Extragradient for Bilinear Games using Restarted Iteration Averaging

no code implementations 30 Jun 2021 Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, Michael I. Jordan

We study the stochastic bilinear minimax optimization problem, presenting an analysis of the same-sample Stochastic ExtraGradient (SEG) method with constant step size, along with variations of the method that yield favorable convergence.

A single gradient step finds adversarial examples on random two-layers neural networks

no code implementations NeurIPS 2021 Sébastien Bubeck, Yeshwanth Cherapanamjeri, Gauthier Gidel, Rémi Tachet des Combes

Daniely and Schacham recently showed that gradient descent finds adversarial examples on random undercomplete two-layers ReLU neural networks.

Online Adversarial Attacks

1 code implementation ICLR 2022 Andjela Mladenovic, Avishek Joey Bose, Hugo Berard, William L. Hamilton, Simon Lacoste-Julien, Pascal Vincent, Gauthier Gidel

Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream.

Adversarial Attack

Adversarial Example Games

1 code implementation NeurIPS 2020 Avishek Joey Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton

We introduce Adversarial Example Games (AEG), a framework that models the crafting of adversarial examples as a min-max game between a generator of attacks and a classifier.

A Limited-Capacity Minimax Theorem for Non-Convex Games or: How I Learned to Stop Worrying about Mixed-Nash and Love Neural Nets

no code implementations 14 Feb 2020 Gauthier Gidel, David Balduzzi, Wojciech Marian Czarnecki, Marta Garnelo, Yoram Bachrach

Adversarial training, a special case of multi-objective optimization, is an increasingly prevalent machine learning technique: some of its most notable applications include GAN-based generative modeling and self-play techniques in reinforcement learning which have been applied to complex games such as Go or Poker.

Starcraft Starcraft II

Finite Regret and Cycles with Fixed Step-Size via Alternating Gradient Descent-Ascent

no code implementations 9 Jul 2019 James P. Bailey, Gauthier Gidel, Georgios Piliouras

Gradient descent is arguably one of the most popular online optimization methods with a wide array of applications.

Computer Science and Game Theory Dynamical Systems Optimization and Control

Linear Lower Bounds and Conditioning of Differentiable Games

no code implementations ICML 2020 Adam Ibrahim, Waïss Azizian, Gauthier Gidel, Ioannis Mitliagkas

In this work, we approach the question of fundamental iteration complexity by providing lower bounds to complement the linear (i.e., geometric) upper bounds observed in the literature on a wide class of problems.

A Tight and Unified Analysis of Gradient-Based Methods for a Whole Spectrum of Games

no code implementations 13 Jun 2019 Waïss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, Gauthier Gidel

We provide new analyses of the EG's local and global convergence properties and use them to get a tighter global convergence rate for OG and CO. Our analysis covers the whole range of settings between bilinear and strongly monotone games.

A Closer Look at the Optimization Landscapes of Generative Adversarial Networks

1 code implementation ICLR 2020 Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, Simon Lacoste-Julien

Generative adversarial networks have been very successful in generative modeling; however, they remain relatively challenging to train compared to standard deep neural networks.

Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics

1 code implementation NeurIPS 2019 Giancarlo Kerg, Kyle Goyette, Maximilian Puelma Touzel, Gauthier Gidel, Eugene Vorontsov, Yoshua Bengio, Guillaume Lajoie

A recent strategy to circumvent the exploding and vanishing gradient problem in RNNs, and to allow the stable propagation of signals over long time scales, is to constrain recurrent connectivity matrices to be orthogonal or unitary.

Implicit Regularization of Discrete Gradient Dynamics in Linear Neural Networks

1 code implementation NeurIPS 2019 Gauthier Gidel, Francis Bach, Simon Lacoste-Julien

When optimizing over-parameterized models, such as deep neural networks, a large set of parameters can achieve zero training error.

Reducing Noise in GAN Training with Variance Reduced Extragradient

no code implementations NeurIPS 2019 Tatjana Chavdarova, Gauthier Gidel, François Fleuret, Simon Lacoste-Julien

We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges.

Negative Momentum for Improved Game Dynamics

1 code implementation 12 Jul 2018 Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Remi Lepriol, Gabriel Huang, Simon Lacoste-Julien, Ioannis Mitliagkas

Games generalize the single-objective optimization paradigm by introducing different objective functions for different players.
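As a rough illustration of the paper's idea, here is alternating gradient descent-ascent with a *negative* momentum coefficient on a toy bilinear game (a hedged sketch with illustrative parameter values, not the authors' implementation; the paper analyzes beta = -1/2 for alternating updates):

```python
def negative_momentum_gda(x, y, step=0.1, beta=-0.5, iters=5000):
    """Alternating gradient descent-ascent on min_x max_y x*y with
    heavy-ball momentum. With beta = 0 the iterates cycle; a negative
    beta damps the rotation so they approach the equilibrium (0, 0)."""
    vx = vy = 0.0
    for _ in range(iters):
        vx = beta * vx - step * y   # d/dx (x*y) = y
        x = x + vx
        vy = beta * vy + step * x   # alternating: uses the updated x
        y = y + vy
    return x, y

x, y = negative_momentum_gda(1.0, 1.0)
```

The sign flip on the usual momentum coefficient is the entire intervention: positive momentum amplifies the rotational component of the game vector field, while negative momentum damps it.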

Frank-Wolfe Splitting via Augmented Lagrangian Method

no code implementations 9 Apr 2018 Gauthier Gidel, Fabian Pedregosa, Simon Lacoste-Julien

In this work, we develop and analyze the Frank-Wolfe Augmented Lagrangian (FW-AL) algorithm, a method for minimizing a smooth function over convex compact sets related by a "linear consistency" constraint that only requires access to a linear minimization oracle over the individual constraints.

Adaptive Three Operator Splitting

no code implementations ICML 2018 Fabian Pedregosa, Gauthier Gidel

We propose and analyze an adaptive step-size variant of the Davis-Yin three operator splitting.
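The Davis-Yin three operator splitting that this step-size rule targets solves min_x f(x) + g(x) + h(x) using two proximal operators and one gradient. Below is a fixed-step sketch on a toy problem; the paper's adaptive step-size rule itself is not reproduced, and all names are illustrative:

```python
import numpy as np

def davis_yin(prox_f, prox_g, grad_h, z, step=0.5, iters=500):
    """Davis-Yin splitting for min f(x) + g(x) + h(x), where f and g have
    cheap proximal operators and h is smooth (step < 2/L for L-smooth h)."""
    for _ in range(iters):
        x_g = prox_g(z, step)
        x_f = prox_f(2 * x_g - z - step * grad_h(x_g), step)
        z = z + x_f - x_g
    return prox_g(z, step)

# Toy problem: min 0.5*(x - 1)^2 + 0.3*|x| over x >= 0 (solution: x = 0.7).
lam = 0.3
soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - lam * t, 0.0)  # prox of lam*|.|
nonneg = lambda u, t: np.maximum(u, 0.0)  # prox of the indicator of x >= 0
x = davis_yin(soft, nonneg, lambda u: u - 1.0, z=np.array([0.0]))
```

The appeal of the scheme is that each structural piece of the objective is handled by the cheapest oracle available for it: a prox for each nonsmooth term and a plain gradient for the smooth one.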

A Variational Inequality Perspective on Generative Adversarial Networks

1 code implementation ICLR 2019 Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, Simon Lacoste-Julien

Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train.


Parametric Adversarial Divergences are Good Losses for Generative Modeling

no code implementations ICLR 2018 Gabriel Huang, Hugo Berard, Ahmed Touati, Gauthier Gidel, Pascal Vincent, Simon Lacoste-Julien

Parametric adversarial divergences, which are a generalization of the losses used to train generative adversarial networks (GANs), have often been described as being approximations of their nonparametric counterparts, such as the Jensen-Shannon divergence, which can be derived under the so-called optimal discriminator assumption.

Structured Prediction

Frank-Wolfe Algorithms for Saddle Point Problems

1 code implementation 25 Oct 2016 Gauthier Gidel, Tony Jebara, Simon Lacoste-Julien

We extend the Frank-Wolfe (FW) optimization algorithm to solve constrained smooth convex-concave saddle point (SP) problems.
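For reference, the classical Frank-Wolfe step that the saddle-point extension builds on: each iteration calls a linear minimization oracle (LMO) over the constraint set and averages, so iterates stay feasible without projections. The SP variant (not shown) applies the oracle jointly over both players' sets. A hedged toy sketch:

```python
import numpy as np

def frank_wolfe(grad, lmo, x, iters=5000):
    """Classical Frank-Wolfe: linearize the objective, ask the LMO for the
    best vertex, and move toward it with the open-loop step size 2/(k+2)."""
    for k in range(iters):
        s = lmo(grad(x))
        gamma = 2.0 / (k + 2.0)
        x = (1 - gamma) * x + gamma * s
    return x

# Toy problem: min ||x - b||^2 over the probability simplex (solution: x = b).
b = np.array([0.2, 0.5, 0.3])
lmo = lambda g: np.eye(len(g))[np.argmin(g)]  # simplex LMO returns a vertex
x = frank_wolfe(lambda x: 2 * (x - b), lmo, np.ones(3) / 3)
```

On the simplex the LMO is a single argmin, which is why FW-type methods are attractive when projections onto the feasible set are expensive.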

Structured Prediction
