Search Results for author: Riccardo Zecchina

Found 22 papers, 3 papers with code

Impact of dendritic non-linearities on the computational capabilities of neurons

no code implementations • 10 Jul 2024 • Clarissa Lauditi, Enrico M. Malatesta, Fabrizio Pittorino, Carlo Baldassi, Nicolas Brunel, Riccardo Zecchina

Multiple neurophysiological experiments have shown that dendritic non-linearities can have a strong influence on synaptic input integration.
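
The flavor of such models can be illustrated with a minimal sketch (an illustration, not the paper's actual model): a two-layer "tree" neuron in which each dendritic branch applies a non-linearity to its own synaptic sum before the soma combines the branches. The choice of non-linearity and the sizes below are assumptions.

```python
import numpy as np

def dendritic_neuron(x, w, g=np.tanh):
    """Toy two-layer neuron (illustrative, not the paper's model).

    x, w: shape (n_branches, n_synapses_per_branch); each dendritic branch
    integrates its synapses and applies the non-linearity g before the soma
    sums the branch outputs.
    """
    branch_out = g((w * x).sum(axis=1))   # non-linear dendritic integration
    return np.sign(branch_out.sum())      # somatic decision

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 40))         # 10 branches, 40 synapses each
w = rng.standard_normal((10, 40))
print(dendritic_neuron(x, w))
```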

Typical and atypical solutions in non-convex neural networks with discrete and continuous weights

no code implementations • 26 Apr 2023 • Carlo Baldassi, Enrico M. Malatesta, Gabriele Perugini, Riccardo Zecchina

We analyze the geometry of the landscape of solutions in both models and find important similarities and differences.

Quantum Approximate Optimization Algorithm applied to the binary perceptron

no code implementations • 19 Dec 2021 • Pietro Torta, Glen B. Mbeng, Carlo Baldassi, Riccardo Zecchina, Giuseppe E. Santoro

We apply digitized Quantum Annealing (QA) and Quantum Approximate Optimization Algorithm (QAOA) to a paradigmatic task of supervised learning in artificial neural networks: the optimization of synaptic weights for the binary perceptron.
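
The classical cost being minimized is easy to state: for binary weights w ∈ {−1,+1}^N, patterns ξ^μ and labels σ^μ, count the misclassified patterns. A minimal sketch of that energy, the target of the QA/QAOA optimization (instance sizes are illustrative):

```python
import numpy as np

def perceptron_energy(w, xi, sigma):
    """Number of patterns the binary perceptron misclassifies.

    w: binary weights in {-1, +1}, shape (N,)
    xi: patterns, shape (P, N); sigma: labels in {-1, +1}, shape (P,)
    """
    return int(np.sum(sigma * (xi @ w) <= 0))

rng = np.random.default_rng(1)
N, P = 101, 40                            # illustrative sizes
xi = rng.choice([-1.0, 1.0], size=(P, N))
sigma = rng.choice([-1.0, 1.0], size=P)
w = rng.choice([-1, 1], size=N)
print(perceptron_energy(w, xi, sigma))    # energy 0 <=> all patterns classified
```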

Native state of natural proteins optimises local entropy

1 code implementation • 25 Nov 2021 • Matteo Negri, Guido Tiana, Riccardo Zecchina

The differing ability of polypeptide conformations to act as the native state of proteins has long been rationalized in terms of differing kinetic accessibility or thermodynamic stability.

Deep learning via message passing algorithms based on belief propagation

no code implementations • 27 Oct 2021 • Carlo Lucibello, Fabrizio Pittorino, Gabriele Perugini, Riccardo Zecchina

Message-passing algorithms based on the Belief Propagation (BP) equations constitute a well-known distributed computational scheme.
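
For reference, the BP equations on a generic factor graph take the standard sum-product form (schematic; the paper adapts these to multilayer networks):

```latex
m_{i \to a}(x_i) \propto \prod_{b \in \partial i \setminus a} \hat{m}_{b \to i}(x_i),
\qquad
\hat{m}_{a \to i}(x_i) \propto \sum_{\{x_j\}_{j \in \partial a \setminus i}} \psi_a(x_{\partial a}) \prod_{j \in \partial a \setminus i} m_{j \to a}(x_j)
```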


Unveiling the structure of wide flat minima in neural networks

no code implementations • 2 Jul 2021 • Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, Gabriele Perugini, Riccardo Zecchina

The success of deep learning has revealed the application potential of neural networks across the sciences and opened up fundamental theoretical problems.

Wide flat minima and optimal generalization in classifying high-dimensional Gaussian mixtures

no code implementations • 27 Oct 2020 • Carlo Baldassi, Enrico M. Malatesta, Matteo Negri, Riccardo Zecchina

We analyze the connection between minimizers with good generalizing properties and high local entropy regions of a threshold-linear classifier in Gaussian mixtures with the mean squared error loss function.
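
A minimal data-and-training sketch of this setting, under simplifying assumptions (a plain linear score trained with the mean squared error, rather than the paper's threshold-linear unit):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 50, 400
mu = rng.standard_normal(N) / np.sqrt(N)           # cluster centre (illustrative)
y = rng.choice([-1.0, 1.0], size=P)                # cluster labels
X = y[:, None] * mu[None, :] + 0.5 * rng.standard_normal((P, N))

w = np.zeros(N)
lr = 0.05
for _ in range(500):                               # full-batch gradient descent on MSE
    err = X @ w - y                                # residuals of the linear score
    w -= lr * (X.T @ err) / P
print("train accuracy:", np.mean(np.sign(X @ w) == y))
```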

Clustering of solutions in the symmetric binary perceptron

no code implementations • 15 Nov 2019 • Carlo Baldassi, Riccardo Della Vecchia, Carlo Lucibello, Riccardo Zecchina

The geometrical features of the (non-convex) loss landscape of neural network models are crucial in ensuring successful optimization and, most importantly, the capability to generalize well.
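
For context, the symmetric binary perceptron constrains each pattern's pre-activation in absolute value (schematic form, with w ∈ {−1,+1}^N and threshold κ):

```latex
\left| \frac{1}{\sqrt{N}} \sum_{i=1}^{N} w_i \, \xi_i^{\mu} \right| \le \kappa,
\qquad \mu = 1, \dots, P
```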


Natural representation of composite data with replicated autoencoders

no code implementations • 29 Sep 2019 • Matteo Negri, Davide Bergamini, Carlo Baldassi, Riccardo Zecchina, Christoph Feinauer

Generative processes in biology and other fields often produce data that can be regarded as resulting from a composition of basic features.

Shaping the learning landscape in neural networks around wide flat minima

no code implementations • 20 May 2019 • Carlo Baldassi, Fabrizio Pittorino, Riccardo Zecchina

In the case of SGD and cross-entropy loss, we show that a slow reduction of the norm of the weights along the learning process also leads to wide flat minima (WFM).
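
A minimal sketch of the norm-control idea (illustrative, not the paper's exact protocol): after each gradient step on the cross-entropy loss, rescale the weights onto a slowly shrinking norm budget.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 30, 200
X = rng.standard_normal((P, N))
y = np.sign(X @ rng.standard_normal(N))             # teacher labels (illustrative)

w = rng.standard_normal(N)
lr, norm = 0.1, np.linalg.norm(w)
for epoch in range(200):
    grad = -(y / (1 + np.exp(y * (X @ w)))) @ X / P  # cross-entropy (logistic) gradient
    w -= lr * grad
    norm *= 0.995                                    # slowly shrink the norm budget
    w *= norm / np.linalg.norm(w)                    # project back onto it
print("training loss:", np.mean(np.log1p(np.exp(-y * (X @ w)))))
```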


On the role of synaptic stochasticity in training low-precision neural networks

no code implementations • 26 Oct 2017 • Carlo Baldassi, Federica Gerace, Hilbert J. Kappen, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, Riccardo Zecchina

Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes.
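
One concrete way stochasticity enters such models is stochastic discretization of the weights; the sketch below (an illustration, not the paper's algorithm) rounds each continuous weight to ±1 with a probability that preserves its mean.

```python
import numpy as np

def stochastic_binarize(w, rng):
    """Round weights in [-1, 1] to {-1, +1} so that E[w_binary] = w."""
    p_plus = (np.clip(w, -1.0, 1.0) + 1.0) / 2.0     # P(+1), linear in w
    return np.where(rng.random(w.shape) < p_plus, 1.0, -1.0)

rng = np.random.default_rng(4)
w = rng.uniform(-1, 1, size=5)
samples = np.mean([stochastic_binarize(w, rng) for _ in range(10000)], axis=0)
print(np.round(w, 2), np.round(samples, 2))          # empirical mean tracks w
```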

Parle: parallelizing stochastic gradient descent

no code implementations • 3 Jul 2017 • Pratik Chaudhari, Carlo Baldassi, Riccardo Zecchina, Stefano Soatto, Ameet Talwalkar, Adam Oberman

We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters.
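
A minimal sketch of the underlying idea, under simplifying assumptions (plain SGD replicas with an elastic pull toward their average, rather than Parle's full Entropy-SGD-based scheme):

```python
import numpy as np

def parallel_replica_sgd(grad_loss, w0, n_replicas=4, steps=100, lr=0.05, rho=0.1, seed=0):
    """Replicas descend the loss while being pulled toward their running average."""
    rng = np.random.default_rng(seed)
    W = np.tile(w0, (n_replicas, 1)) + 0.01 * rng.standard_normal((n_replicas, len(w0)))
    for _ in range(steps):
        center = W.mean(axis=0)                      # consensus of the replicas
        for a in range(n_replicas):
            W[a] -= lr * (grad_loss(W[a]) + rho * (W[a] - center))
    return W.mean(axis=0)

# toy usage: minimize ||w - 1||^2
w_star = parallel_replica_sgd(lambda w: 2 * (w - 1.0), np.zeros(3))
print(np.round(w_star, 3))                            # close to [1, 1, 1]
```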

Efficiency of quantum versus classical annealing in non-convex learning problems

no code implementations • 26 Jun 2017 • Carlo Baldassi, Riccardo Zecchina

Their energy landscapes are dominated by local minima that cause an exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare, dense regions of optimal solutions.

Entropy-SGD: Biasing Gradient Descent Into Wide Valleys

2 code implementations • 6 Nov 2016 • Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, Riccardo Zecchina

This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape.
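
A sketch of the update, following the structure of the paper's Algorithm 1 (hyper-parameters and the toy usage are illustrative): an inner SGLD loop samples around the current weights to estimate the gradient of the local-entropy objective, then the outer step moves the weights toward the sample mean.

```python
import numpy as np

def entropy_sgd_step(x, grad_loss, eta=1.0, gamma=0.03, L=20, sgld_lr=0.1, eps=1e-3, rng=None):
    """One outer Entropy-SGD-style step (sketch; parameters are illustrative).

    Inner SGLD loop: Langevin samples around x estimate the gradient of the
    local-entropy objective; mu is a running average of the samples.
    """
    rng = rng or np.random.default_rng()
    xp, mu = x.copy(), x.copy()
    for _ in range(L):
        xp -= sgld_lr * (grad_loss(xp) + gamma * (xp - x))
        xp += np.sqrt(sgld_lr) * eps * rng.standard_normal(x.shape)
        mu = 0.75 * mu + 0.25 * xp
    return x - eta * gamma * (x - mu)    # move x toward the wide-valley estimate

# toy usage: descend a quadratic valley
x = np.array([2.0, -2.0])
for _ in range(200):
    x = entropy_sgd_step(x, lambda w: 2 * w)
print(np.round(x, 3))                     # x drifts to near the minimum at 0
```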

Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes

no code implementations • 20 May 2016 • Carlo Baldassi, Christian Borgs, Jennifer Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina

We define a novel measure, which we call the "robust ensemble" (RE), which suppresses trapping by isolated configurations and amplifies the role of these dense regions.
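
Schematically, the construction reweights configurations by a local (free-)entropy rather than by their energy alone; with y replicas at coupling γ it takes the form (notation simplified from the paper):

```latex
P(\tilde{w}) \propto e^{\,y\,\Phi(\tilde{w};\beta,\gamma)},
\qquad
\Phi(\tilde{w};\beta,\gamma) = \log \sum_{w} e^{-\beta E(w) - \gamma\, d(w,\tilde{w})}
```

Here E is the training energy and d a distance between configurations; large y concentrates the measure on the dense, flat regions.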

Learning may need only a few bits of synaptic precision

no code implementations • 12 Feb 2016 • Carlo Baldassi, Federica Gerace, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina

Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states.

Local entropy as a measure for sampling solutions in Constraint Satisfaction Problems

no code implementations • 18 Nov 2015 • Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina

We introduce a novel Entropy-driven Monte Carlo (EdMC) strategy to efficiently sample solutions of random Constraint Satisfaction Problems (CSPs).
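
A minimal sketch of the EdMC idea: a Metropolis sweep in which the role of (minus) the energy is played by a local-entropy estimate. In the paper that estimate comes from Belief Propagation; here `local_entropy` is an assumed user-supplied callable, and the toy stand-in below is purely illustrative.

```python
import numpy as np

def edmc_sweep(sigma, local_entropy, beta, rng):
    """One Metropolis sweep on the local entropy S(sigma) instead of the energy."""
    S = local_entropy(sigma)
    for _ in range(len(sigma)):
        i = rng.integers(len(sigma))
        sigma[i] *= -1                          # propose a single spin flip
        S_new = local_entropy(sigma)
        if np.log(rng.random()) < beta * (S_new - S):
            S = S_new                           # accept: move uphill in entropy
        else:
            sigma[i] *= -1                      # reject: undo the flip
    return sigma, S

rng = np.random.default_rng(5)
sigma = rng.choice([-1, 1], size=50)
# toy stand-in for a BP-based local-entropy estimate: prefer balanced configurations
sigma, S = edmc_sweep(sigma, lambda s: -abs(int(s.sum())), beta=2.0, rng=rng)
print(S)
```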

Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses

no code implementations • 18 Sep 2015 • Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, Riccardo Zecchina

We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions.
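
The robustness test mentioned here is simple to state; the sketch below (illustrative) flips a random fraction of a binary weight vector and checks how many training patterns a perceptron still classifies correctly.

```python
import numpy as np

def accuracy(w, xi, sigma):
    return np.mean(sigma * (xi @ w) > 0)

def perturb(w, flip_fraction, rng):
    """Flip a random fraction of the binary synapses."""
    mask = rng.random(w.shape) < flip_fraction
    return np.where(mask, -w, w)

rng = np.random.default_rng(6)
N, P = 201, 50
xi = rng.choice([-1.0, 1.0], size=(P, N))
w = rng.choice([-1, 1], size=N)
sigma = np.sign(xi @ w)                 # labels realizable by w (accuracy 1 at f=0)
for f in (0.0, 0.05, 0.1, 0.2):
    print(f, accuracy(perturb(w, f, rng), xi, sigma))
```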
