Search Results for author: Luca Saglietti

Found 15 papers, 1 paper with code

The twin peaks of learning neural networks

no code implementations • 23 Jan 2024 • Elizaveta Demyanenko, Christoph Feinauer, Enrico M. Malatesta, Luca Saglietti

Recent works demonstrated the existence of a double-descent phenomenon for the generalization error of neural networks, where highly overparameterized models escape overfitting and achieve good test performance, at odds with the standard bias-variance trade-off described by statistical learning theory (a minimal numerical illustration is sketched after this entry).

Learning Theory

The star-shaped space of solutions of the spherical negative perceptron

no code implementations • 18 May 2023 • Brandon Livio Annesi, Clarissa Lauditi, Carlo Lucibello, Enrico M. Malatesta, Gabriele Perugini, Fabrizio Pittorino, Luca Saglietti

Empirical studies on the landscape of neural networks have shown that low-energy configurations are often found in complex connected structures, where zero-energy paths between pairs of distant solutions can be constructed.

Optimal transfer protocol by incremental layer defrosting

no code implementations • 2 Mar 2023 • Federica Gerace, Diego Doimo, Stefano Sarao Mannelli, Luca Saglietti, Alessandro Laio

The simplest transfer learning protocol is based on "freezing" the feature-extractor layers of a network pre-trained on a data-rich source task, and then adapting only the last layers to a data-poor target task (a minimal sketch follows this entry).

Transfer Learning

Bias-inducing geometries: an exactly solvable data model with fairness implications

no code implementations • 31 May 2022 • Stefano Sarao Mannelli, Federica Gerace, Negar Rostamzadeh, Luca Saglietti

Then, we consider a novel mitigation strategy based on a matched inference approach, in which coupled learning models are introduced.

Fairness

An Analytical Theory of Curriculum Learning in Teacher-Student Networks

no code implementations • 15 Jun 2021 • Luca Saglietti, Stefano Sarao Mannelli, Andrew Saxe

To study the former, we provide an exact description of the online learning setting, confirming the long-standing experimental observation that curricula can modestly speed up learning.

Probing transfer learning with a model of synthetic correlated datasets

no code implementations • 9 Jun 2021 • Federica Gerace, Luca Saglietti, Stefano Sarao Mannelli, Andrew Saxe, Lenka Zdeborová

Transfer learning can significantly improve the sample efficiency of neural networks by exploiting the relatedness between a data-scarce target task and a data-abundant source task.

Binary Classification, Transfer Learning

Solvable Model for Inheriting the Regularization through Knowledge Distillation

no code implementations • 1 Dec 2020 • Luca Saglietti, Lenka Zdeborová

In recent years the empirical success of transfer learning with neural networks has stimulated an increasing interest in obtaining a theoretical understanding of its core properties.

Knowledge Distillation, Transfer Learning

Large deviations for the perceptron model and consequences for active learning

no code implementations • 9 Dec 2019 • Hugo Cui, Luca Saglietti, Lenka Zdeborová

These large deviations then provide optimal achievable performance boundaries for any active learning algorithm.

Active Learning

Generalized Approximate Survey Propagation for High-Dimensional Estimation

no code implementations • 13 May 2019 • Luca Saglietti, Yue M. Lu, Carlo Lucibello

In Generalized Linear Estimation (GLE) problems, we seek to estimate a signal that is observed through a linear transform followed by a component-wise, possibly nonlinear and noisy, channel (a toy instance generator follows this entry).

Retrieval, Vocal Bursts Intensity Prediction

Gaussian Process Prior Variational Autoencoders

2 code implementations • NeurIPS 2018 • Francesco Paolo Casale, Adrian V. Dalca, Luca Saglietti, Jennifer Listgarten, Nicolo Fusi

In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue.

Time Series, Time Series Analysis

On the role of synaptic stochasticity in training low-precision neural networks

no code implementations • 26 Oct 2017 • Carlo Baldassi, Federica Gerace, Hilbert J. Kappen, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, Riccardo Zecchina

Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes.

Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes

no code implementations • 20 May 2016 • Carlo Baldassi, Christian Borgs, Jennifer Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina

We define a novel measure, which we call the "robust ensemble" (RE), that suppresses trapping by isolated configurations and amplifies the role of these dense regions (a sketch of its form follows below).

Learning may need only a few bits of synaptic precision

no code implementations • 12 Feb 2016 • Carlo Baldassi, Federica Gerace, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina

Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states.

Local entropy as a measure for sampling solutions in Constraint Satisfaction Problems

no code implementations • 18 Nov 2015 • Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina

We introduce a novel Entropy-driven Monte Carlo (EdMC) strategy to efficiently sample solutions of random Constraint Satisfaction Problems (CSPs); the local-entropy objective it targets is sketched below.

Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses

no code implementations • 18 Sep 2015 • Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina

We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions.
