Search Results for author: Aurelien Lucchi

Found 65 papers, 25 papers with code

SDEs for Minimax Optimization

1 code implementation • 19 Feb 2024 • Enea Monzio Compagnoni, Antonio Orvieto, Hans Kersting, Frank Norbert Proske, Aurelien Lucchi

Minimax optimization problems have attracted a lot of attention over the past few years, with applications ranging from economics to machine learning.

Characterizing Overfitting in Kernel Ridgeless Regression Through the Eigenspectrum

no code implementations • 2 Feb 2024 • Tin Sum Cheng, Aurelien Lucchi, Anastasis Kratsios, David Belius

We derive new bounds for the condition number of kernel matrices, which we then use to enhance existing non-asymptotic test error bounds for kernel ridgeless regression in the over-parameterized regime for a fixed input dimension.

regression
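
Since this entry turns on the conditioning of kernel matrices, a minimal numpy sketch of the central quantity may help; the RBF kernel, sample size, and bandwidth below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5                        # fixed input dimension
X = rng.normal(size=(n, d))

# RBF kernel matrix (illustrative kernel choice).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / (2.0 * d))

# Ridgeless regression interpolates by solving K @ alpha = y exactly, so its
# test-error behavior hinges on the eigenspectrum and condition number of K.
eigs = np.linalg.eigvalsh(K)
print("condition number of K:", eigs[-1] / eigs[0])
```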

Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing

1 code implementation • 8 Sep 2023 • Xuwei Yang, Anastasis Kratsios, Florian Krach, Matheus Grasselli, Aurelien Lucchi

We propose an optimal iterative scheme for federated transfer learning, where a central planner has access to datasets ${\cal D}_1,\dots,{\cal D}_N$ for the same learning model $f_{\theta}$.

Adversarial Robustness, regression, +1

Initial Guessing Bias: How Untrained Networks Favor Some Classes

no code implementations • 1 Jun 2023 • Emanuele Francazi, Aurelien Lucchi, Marco Baity-Jesi

Understanding and controlling biasing effects in neural networks is crucial for ensuring accurate and fair model performance.

An SDE for Modeling SAM: Theory and Insights

no code implementations • 19 Jan 2023 • Enea Monzio Compagnoni, Luca Biggio, Antonio Orvieto, Frank Norbert Proske, Hans Kersting, Aurelien Lucchi

We study the SAM (Sharpness-Aware Minimization) optimizer which has recently attracted a lot of interest due to its increased performance over more classical variants of stochastic gradient descent.
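
For context, the base SAM update that the paper models is short enough to sketch; the toy objective and hyper-parameters below are illustrative assumptions (this is the discrete optimizer, not the paper's SDE).

```python
import numpy as np

def loss_grad(w):
    # Toy non-convex objective and its gradient (illustrative only).
    return np.sin(w).sum() + 0.5 * (w ** 2).sum(), np.cos(w) + w

def sam_step(w, lr=0.1, rho=0.05):
    # Ascend to the (approximately) worst-case point within an L2 ball of
    # radius rho, then descend using the gradient taken there.
    _, g = loss_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    _, g_adv = loss_grad(w + eps)
    return w - lr * g_adv

w = np.array([2.0, -1.0])
for _ in range(100):
    w = sam_step(w)
print(w)
```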

Mastering Spatial Graph Prediction of Road Networks

no code implementations • ICCV 2023 • Sotiris Anagnostidis, Aurelien Lucchi, Thomas Hofmann

Accurately predicting road networks from satellite images requires a global understanding of the network topology.

Reinforcement Learning (RL)

A Theoretical Analysis of the Learning Dynamics under Class Imbalance

1 code implementation • 1 Jul 2022 • Emanuele Francazi, Marco Baity-Jesi, Aurelien Lucchi

We find that GD is not guaranteed to decrease the loss for each class but that this problem can be addressed by performing a per-class normalization of the gradient.
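
The per-class normalization mentioned here is simple to state; below is a minimal numpy sketch on an imbalanced logistic-regression problem (data, model, and step size are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
# Imbalanced binary dataset: 95 examples of class 0, 5 of class 1.
X = np.vstack([rng.normal(-1, 1, (95, 2)), rng.normal(+1, 1, (5, 2))])
y = np.array([0] * 95 + [1] * 5)

def class_grad(w, c):
    # Mean logistic-loss gradient over the examples of class c only.
    Xc, yc = X[y == c], y[y == c]
    p = 1 / (1 + np.exp(-Xc @ w))
    return Xc.T @ (p - yc) / len(yc)

w = np.zeros(2)
for _ in range(200):
    g = np.zeros(2)
    for c in (0, 1):
        g_c = class_grad(w, c)
        g += g_c / (np.linalg.norm(g_c) + 1e-12)  # unit-norm contribution per class
    w -= 0.1 * g  # the majority class can no longer dominate the update
print(w)
```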

Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse

no code implementations • 7 Jun 2022 • Lorenzo Noci, Sotiris Anagnostidis, Luca Biggio, Antonio Orvieto, Sidak Pal Singh, Aurelien Lucchi

First, we show that rank collapse of the tokens' representations hinders training by causing the gradients of the queries and keys to vanish at initialization.

Phenomenology of Double Descent in Finite-Width Neural Networks

no code implementations • ICLR 2022 • Sidak Pal Singh, Aurelien Lucchi, Thomas Hofmann, Bernhard Schölkopf

'Double descent' delineates the generalization behaviour of models depending on the regime they belong to: under- or over-parameterized.

A Globally Convergent Evolutionary Strategy for Stochastic Constrained Optimization with Applications to Reinforcement Learning

no code implementations • 21 Feb 2022 • Youssef Diouane, Aurelien Lucchi, Vihang Patil

Evolutionary strategies have recently been shown to achieve competitive levels of performance for complex optimization problems in reinforcement learning.

Anticorrelated Noise Injection for Improved Generalization

no code implementations • 6 Feb 2022 • Antonio Orvieto, Hans Kersting, Frank Proske, Francis Bach, Aurelien Lucchi

Injecting artificial noise into gradient descent (GD) is commonly employed to improve the performance of machine learning models.

BIG-bench Machine Learning
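
The scheme in the title is easy to sketch: instead of i.i.d. perturbations, inject the increments of a noise sequence, which are anticorrelated across steps. The toy loss and noise scales below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w):
    # Gradient of a toy non-convex loss (illustrative only).
    return np.cos(w) + 0.1 * w

w, xi_prev = np.array([3.0]), np.zeros(1)
for _ in range(1000):
    xi = rng.normal(0.0, 0.1, size=w.shape)
    # Anticorrelated injection: consecutive perturbations xi - xi_prev have
    # negative correlation, unlike the i.i.d. noise of standard perturbed GD.
    w = w - 0.01 * grad(w) + (xi - xi_prev)
    xi_prev = xi
print(w)
```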

Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity

1 code implementation • 10 Dec 2021 • Junchi Yang, Antonio Orvieto, Aurelien Lucchi, Niao He

Gradient descent ascent (GDA), the simplest single-loop algorithm for nonconvex minimax optimization, is widely used in practical applications such as generative adversarial networks (GANs) and adversarial training.
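
For readers new to GDA, the update is a single simultaneous descent/ascent step; here is a minimal sketch on a toy convex-concave quadratic (the paper's setting is harder, without strong concavity).

```python
# Toy saddle problem f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (illustrative).
x, y, eta = 1.0, 1.0, 0.1
for _ in range(200):
    gx, gy = x + y, x - y              # gradients w.r.t. x and y
    x, y = x - eta * gx, y + eta * gy  # descend in x, ascend in y
print(x, y)                            # approaches the saddle point (0, 0)
```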

On the Second-order Convergence Properties of Random Search Methods

1 code implementation • NeurIPS 2021 • Aurelien Lucchi, Antonio Orvieto, Adamos Solomou

We prove that this approach converges to a second-order stationary point at a much faster rate than vanilla methods: namely, the complexity in terms of the number of function evaluations is only linear in the problem dimension.
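
As a rough illustration of the derivative-free setting, here is vanilla two-point random search in numpy; the paper's analysis adds the ingredients needed for the second-order guarantees claimed above (the objective and step sizes are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def f(w):
    # Black-box objective: only function evaluations are available.
    return np.sin(3 * w).sum() + (w ** 2).sum()

w, lr, mu = np.array([1.5, -0.7]), 0.05, 1e-4
for _ in range(3000):
    u = rng.normal(size=w.shape)
    u /= np.linalg.norm(u)                          # random unit direction
    d = (f(w + mu * u) - f(w - mu * u)) / (2 * mu)  # directional-derivative estimate
    w -= lr * d * u                                 # move along the sampled direction
print(w, f(w))
```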

Neural Symbolic Regression that Scales

2 code implementations • 11 Jun 2021 • Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, Giambattista Parascandolo

We procedurally generate an unbounded set of equations, and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output pairs.

regression, Symbolic Regression
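
The pre-training data described in the abstract can be mimicked in a few lines: sample a random equation, then evaluate it to produce the input-output pairs the Transformer conditions on. The tiny expression family below is an illustrative assumption; the paper's generator covers a far richer, unbounded space.

```python
import numpy as np

rng = np.random.default_rng(0)
UNARY = {"sin": np.sin, "exp": np.exp, "id": lambda z: z}

def sample_equation():
    # Sample a tiny expression of the form a*op(b*x) + c.
    op = str(rng.choice(list(UNARY)))
    a, b, c = np.round(rng.normal(size=3), 2)
    return f"{a}*{op}({b}*x) + {c}", lambda x: a * UNARY[op](b * x) + c

expr, fn = sample_equation()
x = rng.uniform(-1.0, 1.0, size=16)
print(expr)              # symbolic target the model learns to decode
print(x[:3], fn(x)[:3])  # numeric input-output pairs it conditions on
```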

Vanishing Curvature and the Power of Adaptive Methods in Randomly Initialized Deep Networks

no code implementations • 7 Jun 2021 • Antonio Orvieto, Jonas Kohler, Dario Pavllo, Thomas Hofmann, Aurelien Lucchi

This paper revisits the so-called vanishing gradient phenomenon, which commonly occurs in deep randomly initialized neural networks.

Learning Generative Models of Textured 3D Meshes from Real-World Images

1 code implementation • ICCV 2021 • Dario Pavllo, Jonas Kohler, Thomas Hofmann, Aurelien Lucchi

Recent advances in differentiable rendering have sparked an interest in learning generative models of textured 3D meshes from image collections.

Pose Estimation

Generative Minimization Networks: Training GANs Without Competition

no code implementations • 23 Mar 2021 • Paulina Grnarova, Yannic Kilcher, Kfir Y. Levy, Aurelien Lucchi, Thomas Hofmann

Known problems experienced by practitioners include the lack of convergence guarantees and convergence to a non-optimal cycle.

Direct-Search for a Class of Stochastic Min-Max Problems

no code implementations • 22 Feb 2021 • Sotiris Anagnostidis, Aurelien Lucchi, Youssef Diouane

Recent applications in machine learning have renewed the interest of the community in min-max optimization problems.

Batch normalization provably avoids ranks collapse for randomly initialised deep networks

no code implementations • NeurIPS 2020 • Hadi Daneshmand, Jonas Kohler, Francis Bach, Thomas Hofmann, Aurelien Lucchi

Randomly initialized neural networks are known to become harder to train with increasing depth, unless architectural enhancements like residual connections and batch normalization are used.

Scalable Graph Networks for Particle Simulations

1 code implementation • 14 Oct 2020 • Karolis Martinkus, Aurelien Lucchi, Nathanaël Perraudin

However, the dynamics of many real-world systems are challenging to learn due to the presence of nonlinear potentials and a number of interactions that scales quadratically with the number of particles $N$, as in the case of the N-body problem.

An Accelerated DFO Algorithm for Finite-sum Convex Functions

no code implementations • ICML 2020 • Yu-Wen Chen, Antonio Orvieto, Aurelien Lucchi

Derivative-free optimization (DFO) has recently gained a lot of momentum in machine learning, spawning interest in the community to design faster methods for problems where gradients are not accessible.

Randomized Block-Diagonal Preconditioning for Parallel Learning

no code implementations • ICML 2020 • Celestine Mendler-Dünner, Aurelien Lucchi

We study preconditioned gradient-based optimization methods where the preconditioning matrix has block-diagonal form.
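
A minimal sketch of the idea on least squares: invert only the diagonal blocks of the Hessian, so each block can be handled independently (e.g., by a separate worker). Problem sizes and block sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 6)), rng.normal(size=100)
H = A.T @ A / len(b)                       # least-squares Hessian

def grad(w):
    return A.T @ (A @ w - b) / len(b)

# Block-diagonal preconditioner: keep and invert only 2x2 diagonal blocks.
P = np.zeros_like(H)
for i in range(0, H.shape[0], 2):
    P[i:i+2, i:i+2] = np.linalg.inv(H[i:i+2, i:i+2])

w = np.zeros(6)
for _ in range(200):
    w -= P @ grad(w)                       # preconditioned gradient step
print(np.linalg.norm(grad(w)))
```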

Convolutional Generation of Textured 3D Meshes

1 code implementation • NeurIPS 2020 • Dario Pavllo, Graham Spinks, Thomas Hofmann, Marie-Francine Moens, Aurelien Lucchi

A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN.

Emulation of cosmological mass maps with conditional generative adversarial networks

no code implementations • 17 Apr 2020 • Nathanaël Perraudin, Sandro Marcon, Aurelien Lucchi, Tomasz Kacprzak

Weak gravitational lensing mass maps play a crucial role in understanding the evolution of structures in the universe and our ability to constrain cosmological models.

MS-SSIM, SSIM

Batch Normalization Provably Avoids Rank Collapse for Randomly Initialised Deep Networks

no code implementations • 3 Mar 2020 • Hadi Daneshmand, Jonas Kohler, Francis Bach, Thomas Hofmann, Aurelien Lucchi

Randomly initialized neural networks are known to become harder to train with increasing depth, unless architectural enhancements like residual connections and batch normalization are used.

Practical Accelerated Optimization on Riemannian Manifolds

no code implementations • 11 Feb 2020 • Foivos Alimisis, Antonio Orvieto, Gary Bécigneul, Aurelien Lucchi

We develop a new Riemannian descent algorithm with an accelerated rate of convergence.

Optimization and Control

Controlling Style and Semantics in Weakly-Supervised Image Generation

1 code implementation • ECCV 2020 • Dario Pavllo, Aurelien Lucchi, Thomas Hofmann

We propose a weakly-supervised approach for conditional image generation of complex scenes where a user has fine control over objects appearing in the scene.

Conditional Image Generation

A Sub-sampled Tensor Method for Non-convex Optimization

no code implementations • 23 Nov 2019 • Aurelien Lucchi, Jonas Kohler

We present a stochastic optimization method that uses a fourth-order regularized model to find local minima of smooth and potentially non-convex objective functions with a finite-sum structure.

Stochastic Optimization

Shadowing Properties of Optimization Algorithms

1 code implementation • NeurIPS 2019 • Antonio Orvieto, Aurelien Lucchi

Ordinary differential equation (ODE) models of gradient-based optimization methods can provide insights into the dynamics of learning and inspire the design of new algorithms.

A Continuous-time Perspective for Modeling Acceleration in Riemannian Optimization

1 code implementation • 23 Oct 2019 • Foivos Alimisis, Antonio Orvieto, Gary Bécigneul, Aurelien Lucchi

We propose a novel second-order ODE as the continuous-time limit of a Riemannian accelerated gradient-based method on a manifold with curvature bounded from below.

Optimization and Control

Ellipsoidal Trust Region Methods for Neural Network Training

no code implementations • 25 Sep 2019 • Leonard Adolphs, Jonas Kohler, Aurelien Lucchi

We investigate the use of ellipsoidal trust region constraints for second-order optimization of neural networks.

Cosmological N-body simulations: a challenge for scalable generative models

1 code implementation • 15 Aug 2019 • Nathanaël Perraudin, Ankit Srivastava, Aurelien Lucchi, Tomasz Kacprzak, Thomas Hofmann, Alexandre Réfrégier

Our results show that the proposed model produces samples of high visual quality, although the statistical analysis reveals that capturing rare features in the data poses significant problems for the generative models.

The Role of Memory in Stochastic Optimization

no code implementations • 2 Jul 2019 • Antonio Orvieto, Jonas Kohler, Aurelien Lucchi

We first derive a general continuous-time model that can incorporate arbitrary types of memory, for both deterministic and stochastic settings.

Stochastic Optimization

Cosmological constraints with deep learning from KiDS-450 weak lensing maps

no code implementations • 7 Jun 2019 • Janis Fluri, Tomasz Kacprzak, Aurelien Lucchi, Alexandre Refregier, Adam Amara, Thomas Hofmann, Aurel Schneider

We present the cosmological results with a CNN from the KiDS-450 tomographic weak lensing dataset, constraining the total matter density $\Omega_m$, the fluctuation amplitude $\sigma_8$, and the intrinsic alignment amplitude $A_{\rm{IA}}$.

Cosmology and Nongalactic Astrophysics

Adaptive norms for deep learning with regularized Newton methods

no code implementations • 22 May 2019 • Jonas Kohler, Leonard Adolphs, Aurelien Lucchi

We investigate the use of regularized Newton methods with adaptive norms for optimizing neural networks.

Evaluating GANs via Duality

no code implementations • ICLR 2019 • Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Thomas Hofmann, Andreas Krause

Generative Adversarial Networks (GANs) have shown great results in accurately modeling complex distributions, but their training is known to be difficult due to instabilities caused by a challenging minimax optimization problem.

A domain agnostic measure for monitoring and evaluating GANs

1 code implementation • NeurIPS 2019 • Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Ian Goodfellow, Thomas Hofmann, Andreas Krause

Evaluations are essential for: (i) relative assessment of different models and (ii) monitoring the progress of a single model throughout training.

Continuous-time Models for Stochastic Optimization Algorithms

1 code implementation • NeurIPS 2019 • Antonio Orvieto, Aurelien Lucchi

We propose new continuous-time formulations for first-order stochastic optimization algorithms such as mini-batch gradient descent and variance-reduced methods.

Stochastic Optimization
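
The standard recipe behind such continuous-time models can be sketched directly: match one SGD iteration to one Euler-Maruyama step of an SDE whose drift is the negative gradient and whose diffusion is scaled by the step size. The toy loss and noise model below are illustrative assumptions, simpler than the paper's formulations.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, sigma = 0.1, 0.5

def grad(x):
    return x  # gradient of the toy loss f(x) = 0.5 * x**2

x_sgd = x_sde = 2.0
for _ in range(500):
    # Discrete algorithm: SGD with additive gradient noise.
    x_sgd -= eta * (grad(x_sgd) + sigma * rng.normal())
    # Continuous-time surrogate dX = -grad f(X) dt + sqrt(eta)*sigma dW,
    # discretized with dt = eta so one SDE step matches one SGD step.
    dt = eta
    x_sde += -grad(x_sde) * dt + np.sqrt(eta) * sigma * np.sqrt(dt) * rng.normal()
print(x_sgd, x_sde)  # both fluctuate around the minimum with similar spread
```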

Cosmological constraints from noisy convergence maps through deep learning

no code implementations • 23 Jul 2018 • Janis Fluri, Tomasz Kacprzak, Aurelien Lucchi, Alexandre Refregier, Adam Amara, Thomas Hofmann

We find that, for a shape noise level corresponding to 8.53 galaxies/arcmin$^2$ and the smoothing scale of $\sigma_s = 2.34$ arcmin, the network is able to generate 45% tighter constraints.

Cosmology and Nongalactic Astrophysics

A Distributed Second-Order Algorithm You Can Trust

no code implementations • ICML 2018 • Celestine Dünner, Aurelien Lucchi, Matilde Gargiani, An Bian, Thomas Hofmann, Martin Jaggi

Due to the rapid growth of data and computational resources, distributed optimization has become an active research area in recent years.

Distributed Optimization, Second-order methods

Adversarially Robust Training through Structured Gradient Regularization

no code implementations • 22 May 2018 • Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann

We propose a novel data-dependent structured gradient regularizer to increase the robustness of neural networks vis-a-vis adversarial perturbations.
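
To give the flavor of gradient regularization (in a simplified, unstructured form, not the paper's exact regularizer): penalize the norm of the loss gradient with respect to the inputs, which for logistic regression has a closed form. Data, model, and the finite-difference optimizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1.0, -1.0)

def objective(w, lam=0.1):
    z = y * (X @ w)
    s = 1 / (1 + np.exp(z))                  # sigma(-z)
    loss = np.mean(np.log1p(np.exp(-z)))     # logistic loss
    penalty = np.mean(s ** 2) * (w @ w)      # mean squared input-gradient norm
    return loss + lam * penalty

def num_grad(f, w, h=1e-6):
    # Finite-difference gradient, to keep the sketch dependency-free.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = h
        g[i] = (f(w + e) - f(w - e)) / (2 * h)
    return g

w = np.zeros(2)
for _ in range(300):
    w -= 0.5 * num_grad(objective, w)
print(w)
```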

Local Saddle Point Optimization: A Curvature Exploitation Approach

1 code implementation • 15 May 2018 • Leonard Adolphs, Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann

Gradient-based optimization methods are the most popular choice for finding local optima for classical minimization and saddle point problems.

Escaping Saddles with Stochastic Gradients

no code implementations • ICML 2018 • Hadi Daneshmand, Jonas Kohler, Aurelien Lucchi, Thomas Hofmann

We analyze the variance of stochastic gradients along negative curvature directions in certain non-convex machine learning models and show that stochastic gradients exhibit a strong component along these directions.

Fast cosmic web simulations with generative adversarial networks

no code implementations • 27 Jan 2018 • Andres C. Rodriguez, Tomasz Kacprzak, Aurelien Lucchi, Adam Amara, Raphael Sgier, Janis Fluri, Thomas Hofmann, Alexandre Réfrégier

Computational models of the underlying physical processes, such as classical N-body simulations, are extremely resource intensive, as they track the action of gravity in an expanding universe using billions of particles as tracers of the cosmic matter distribution.

Fast Point Spread Function Modeling with Deep Learning

no code implementations • 23 Jan 2018 • Jörg Herbel, Tomasz Kacprzak, Adam Amara, Alexandre Refregier, Aurelien Lucchi

We find that our approach is able to accurately reproduce the SDSS PSF at the pixel level, which, due to the speed of both the model evaluation and the parameter estimation, offers good prospects for incorporating our method into the $MCCL$ framework.

Semantic Interpolation in Implicit Models

no code implementations • ICLR 2018 • Yannic Kilcher, Aurelien Lucchi, Thomas Hofmann

In implicit models, one often interpolates between sampled points in latent space.
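
A concrete instance of the issue: under a Gaussian prior, the straight line between two latent codes passes through a region of atypically small norm, which is why spherical interpolation is a common remedy. The sketch below contrasts the two (a generic illustration, not necessarily the paper's specific proposal).

```python
import numpy as np

def lerp(z0, z1, t):
    return (1 - t) * z0 + t * z1

def slerp(z0, z1, t):
    # Spherical interpolation: stays near the shell where a high-dimensional
    # Gaussian concentrates, unlike the straight line's low-norm midpoint.
    cos = z0 @ z1 / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z0, z1 = rng.normal(size=128), rng.normal(size=128)
print(np.linalg.norm(lerp(z0, z1, 0.5)))   # noticeably smaller norm
print(np.linalg.norm(slerp(z0, z1, 0.5)))  # close to the endpoints' norms
```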

Flexible Prior Distributions for Deep Generative Models

no code implementations • ICLR 2018 • Yannic Kilcher, Aurelien Lucchi, Thomas Hofmann

We consider the problem of training generative models with deep neural networks as generators, i.e., to map latent codes to data points.

Learning Aerial Image Segmentation from Online Maps

2 code implementations • 21 Jul 2017 • Pascal Kaiser, Jan Dirk Wegner, Aurelien Lucchi, Martin Jaggi, Thomas Hofmann, Konrad Schindler

We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images, and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations.

General Classification, Image Segmentation, +2

Cosmological model discrimination with Deep Learning

no code implementations • 17 Jul 2017 • Jorit Schmelzle, Aurelien Lucchi, Tomasz Kacprzak, Adam Amara, Raphael Sgier, Alexandre Réfrégier, Thomas Hofmann

We find that our implementation of DCNN outperforms the skewness and kurtosis statistics, especially for high noise levels.

Stabilizing Training of Generative Adversarial Networks through Regularization

1 code implementation • NeurIPS 2017 • Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann

Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters.

Image Generation

Sub-sampled Cubic Regularization for Non-convex Optimization

1 code implementation • ICML 2017 • Jonas Moritz Kohler, Aurelien Lucchi

This approach is particularly attractive because it escapes strict saddle points and provides stronger convergence guarantees than first-order, second-order, and classical trust-region methods.
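
The core loop is: sub-sample a gradient and Hessian, then (approximately) minimize the cubic-regularized model m(s) = g.s + 0.5*s.H.s + (sigma/3)*||s||^3. Below is a minimal numpy sketch on a toy finite-sum least-squares problem; the problem, sub-sample sizes, and the inner solver are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(500, 5)), rng.normal(size=500)

def sub_grad_hess(w, idx):
    # Sub-sampled gradient and Hessian of the finite-sum objective.
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx), Ai.T @ Ai / len(idx)

def cubic_step(g, H, sigma=1.0, iters=100, lr=0.1):
    # Approximately minimize the cubic model by gradient descent on it.
    s = np.zeros_like(g)
    for _ in range(iters):
        s -= lr * (g + H @ s + sigma * np.linalg.norm(s) * s)
    return s

w = np.zeros(5)
for _ in range(30):
    idx = rng.choice(len(b), size=100, replace=False)
    g, H = sub_grad_hess(w, idx)
    w += cubic_step(g, H)
print(np.linalg.norm(A.T @ (A @ w - b) / len(b)))  # full-gradient norm
```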

A Semi-supervised Framework for Image Captioning

1 code implementation • 16 Nov 2016 • Wenhu Chen, Aurelien Lucchi, Thomas Hofmann

We propose a novel way of using such textual data by artificially generating missing visual information.

Image Captioning, Word Embeddings

Radio frequency interference mitigation using deep convolutional neural networks

3 code implementations • 28 Sep 2016 • Joel Akeret, Chihway Chang, Aurelien Lucchi, Alexandre Refregier

We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope.

Instrumentation and Methods for Astrophysics

DynaNewton - Accelerating Newton's Method for Machine Learning

no code implementations • 20 May 2016 • Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann

Solutions on this path are tracked such that the minimizer of the previous objective is guaranteed to be within the quadratic convergence region of the next objective to be optimized.

BIG-bench Machine Learning

Starting Small -- Learning with Adaptive Sample Sizes

no code implementations • 9 Mar 2016 • Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann

For many machine learning problems, data is abundant and it may be prohibitive to make multiple passes through the full training set.

BIG-bench Machine Learning

Probabilistic Bag-Of-Hyperlinks Model for Entity Linking

1 code implementation • 8 Sep 2015 • Octavian-Eugen Ganea, Marina Ganea, Aurelien Lucchi, Carsten Eickhoff, Thomas Hofmann

We demonstrate the accuracy of our approach on a wide range of benchmark datasets, showing that it matches, and in many cases outperforms, existing state-of-the-art methods.

Entity Disambiguation, Entity Linking, +3

Variance Reduced Stochastic Gradient Descent with Neighbors

no code implementations • NeurIPS 2015 • Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, Brian McWilliams

As a side-product we provide a unified convergence analysis for a family of variance reduction algorithms, which we call memorization algorithms.

Memorization
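
As one member of this memorization family, a SAGA-style update keeps one stored gradient per data point and uses it to cancel variance; the sketch below is that generic scheme on toy least squares, not the paper's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 4))
b = A @ rng.normal(size=4) + 0.1 * rng.normal(size=200)
n = len(b)

def grad_i(w, i):
    # Per-example least-squares gradient.
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(4)
memory = np.array([grad_i(w, i) for i in range(n)])  # one stored gradient per point
avg = memory.mean(axis=0)
for _ in range(5000):
    i = rng.integers(n)
    g = grad_i(w, i)
    w -= 0.01 * (g - memory[i] + avg)  # variance-reduced step
    avg += (g - memory[i]) / n         # maintain the running average
    memory[i] = g                      # memorize the fresh gradient
print(np.linalg.norm(A.T @ (A @ w - b) / n))
```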

A Variance Reduced Stochastic Newton Method

no code implementations • 28 Mar 2015 • Aurelien Lucchi, Brian McWilliams, Thomas Hofmann

Quasi-Newton methods are widely used in practice for convex loss minimization problems.

Learning for Structured Prediction Using Approximate Subgradient Descent with Working Sets

no code implementations • CVPR 2013 • Aurelien Lucchi, Yunpeng Li, Pascal Fua

We propose a working set based approximate subgradient descent algorithm to minimize the margin-sensitive hinge loss arising from the soft constraints in max-margin learning frameworks, such as the structured SVM.

Image Segmentation, Semantic Segmentation, +1
