no code implementations • 13 Jan 2024 • Linda Albanese, Adriano Barra, Pierluigi Bianco, Fabrizio Durante, Diego Pallara

Recently, the original storage prescription for the Hopfield model of neural networks -- as well as for its dense generalizations -- has been turned into a genuine Hebbian learning rule by postulating the expression of its Hamiltonian for both the supervised and unsupervised protocols.

no code implementations • 15 Dec 2023 • Linda Albanese, Andrea Alessandrelli, Alessia Annibale, Adriano Barra

The statistical mechanics of spin glasses is one of the main avenues toward an understanding of information processing by neural networks and learning machines.

no code implementations • 8 Aug 2023 • Elena Agliari, Andrea Alessandrelli, Adriano Barra, Federico Ricci-Tersenghi

A modern challenge of Artificial Intelligence is learning multiple patterns at once (i.e., parallel learning).

no code implementations • 17 Jul 2023 • Martino Salomone Centonze, Ido Kanter, Adriano Barra

We study bi-directional associative neural networks that, exposed to noisy examples of an extensive number of random archetypes, learn the latter (with or without the presence of a teacher) when the supplied information is enough: in this setting, learning is heteroassociative -- involving pairs of patterns -- and it is achieved by reverberating the information extracted from the examples through the layers of the network.

no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi

We consider dense associative neural networks trained by a teacher (i.e., with supervision) and we investigate their computational capabilities analytically, via the statistical mechanics of spin glasses, and numerically, via Monte Carlo simulations.

no code implementations • 25 Nov 2022 • Elena Agliari, Linda Albanese, Francesco Alemanno, Andrea Alessandrelli, Adriano Barra, Fosca Giannotti, Daniele Lotito, Dino Pedreschi

We consider dense associative neural networks trained with no supervision and we investigate their computational capabilities analytically, via a statistical-mechanics approach, and numerically, via Monte Carlo simulations.

no code implementations • 17 Nov 2022 • Adriano Barra, Giovanni Catania, Aurélien Decelle, Beatriz Seoane

Introduced by Kosko in 1988 as a generalization of the Hopfield model to a bipartite structure, the simplest architecture is defined by two layers of neurons, with synaptic connections only between units of different layers: even without internal connections within each layer, information storage and retrieval are still possible through the reverberation of neural activities passing from one layer to another.
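The reverberating retrieval described above can be illustrated with a minimal NumPy sketch of a Kosko-style bidirectional associative memory (a toy illustration, not the paper's code; layer sizes, the normalisation, and the 10% corruption level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
Nx, Ny, P = 150, 120, 5              # layer sizes and number of stored pairs

# P pairs of associated patterns, one pattern per layer
xi = rng.choice([-1, 1], size=(P, Nx))
eta = rng.choice([-1, 1], size=(P, Ny))

# Hebbian synapses only BETWEEN layers (no intra-layer couplings)
J = xi.T @ eta / np.sqrt(Nx * Ny)

# Heteroassociative recall by reverberation: x -> y -> x -> ...
x = xi[0] * np.where(rng.random(Nx) < 0.1, -1, 1)   # corrupted cue on layer 1
for _ in range(5):
    y = np.where(J.T @ x >= 0, 1, -1)               # layer 2 responds
    x = np.where(J @ y >= 0, 1, -1)                 # activity echoes back to layer 1

print(x @ xi[0] / Nx, y @ eta[0] / Ny)              # overlaps with the stored pair
```

At this low load the back-and-forth passes clean up the corrupted cue on one layer while retrieving its associated partner on the other.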

no code implementations • 2 Jul 2022 • Elena Agliari, Miriam Aquaro, Adriano Barra, Alberto Fachechi, Chiara Marullo

As is well known, Hebb's learning traces its origin to Pavlov's Classical Conditioning; however, while the former has been extensively modelled in the past decades (e.g., by the Hopfield model and countless variations on the theme), modelling of the latter has remained largely unaddressed so far; further, a bridge between these two pillars is entirely lacking.

no code implementations • 17 Apr 2022 • Miriam Aquaro, Francesco Alemanno, Ido Kanter, Fabrizio Durante, Elena Agliari, Adriano Barra

The gap between the huge volumes of data needed to train artificial neural networks and the relatively small amount of data needed by their biological counterparts is a central puzzle in machine learning.

no code implementations • 2 Mar 2022 • Francesco Alemanno, Miriam Aquaro, Ido Kanter, Adriano Barra, Elena Agliari

In the neural-network literature, Hebbian learning traditionally refers to the procedure by which the Hopfield model and its generalizations store archetypes (i.e., definite patterns that are experienced just once to form the synaptic matrix).
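The storage procedure just described can be sketched in a few lines (a toy illustration, not the paper's code; the sizes and the 10% corruption level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                       # neurons and stored archetypes (alpha = 0.05)

# P random binary archetypes, each "experienced just once"
xi = rng.choice([-1, 1], size=(P, N))

# Hebbian synaptic matrix: J_ij = (1/N) sum_mu xi^mu_i xi^mu_j
J = xi.T @ xi / N
np.fill_diagonal(J, 0)               # no self-interactions

# Retrieval: corrupt an archetype, then iterate sigma -> sign(J sigma)
sigma = xi[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
sigma[flip] *= -1                    # flip 10% of the spins

for _ in range(10):
    sigma = np.where(J @ sigma >= 0, 1, -1)

overlap = sigma @ xi[0] / N          # Mattis overlap, close to 1 on success
print(overlap)
```

Below the storage capacity, the zero-temperature dynamics relaxes the corrupted configuration back onto the stored archetype.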

no code implementations • 1 Sep 2021 • Elena Agliari, Francesco Alemanno, Adriano Barra, Giordano De Marzo

We consider restricted Boltzmann machines (RBMs) trained over an unstructured dataset made of blurred copies of definite but unavailable ``archetypes'' and we show that there exists a critical sample size beyond which the RBM can learn archetypes, namely the machine can successfully operate as a generative model or as a classifier, depending on the operational routine.
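A toy version of the critical-sample-size phenomenon, with plain bit-wise averaging standing in for the RBM (the flip rate `r`, the sizes, and the sample counts are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
archetype = rng.choice([-1, 1], size=N)     # definite but "unavailable" pattern

def blurred_copies(M, r=0.35):
    """M noisy examples: each bit flipped independently with probability r."""
    flips = rng.random((M, N)) < r
    return np.where(flips, -archetype, archetype)

overlaps = {}
for M in (1, 5, 50):
    # sign of the sample mean as a crude archetype estimate
    estimate = np.where(blurred_copies(M).mean(axis=0) >= 0, 1, -1)
    overlaps[M] = estimate @ archetype / N

print(overlaps)   # overlap with the archetype grows with the sample size M
```

A single blurred example carries little information about the archetype, while a sufficiently large sample pins it down almost perfectly.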

no code implementations • 4 Dec 2019 • Elena Agliari, Pablo J. Sáez, Adriano Barra, Matthieu Piel, Pablo Vargas, Michele Castellana

In the first experiment, cells migrate in a wound-healing model: when applied to this experiment, the inference method predicts the existence of cell-cell interactions, correctly mirroring the strong intercellular contacts which are present in the experiment.

no code implementations • 28 Nov 2019 • Elena Agliari, Francesco Alemanno, Adriano Barra, Martino Centonze, Alberto Fachechi

We consider a three-layer Sejnowski machine and show that features learnt via contrastive divergence have a dual representation as patterns in a dense associative memory of order P=4.
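For context, a dense associative memory of order $P$ replaces the quadratic Hopfield energy with a degree-$P$ interaction; for $P=4$ it reads (a sketch only, since normalisation conventions vary across the literature):

```latex
H_{4}(\sigma) \;=\; -\,\frac{1}{4!\,N^{3}} \sum_{\mu=1}^{K} \Big( \sum_{i=1}^{N} \xi_i^{\mu}\, \sigma_i \Big)^{4}
```

Here $\xi^{\mu}$ are the stored patterns and $\sigma$ the binary neurons; the dual representation referred to above identifies the features learnt by the machine with such patterns.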

no code implementations • 21 Dec 2018 • Elena Agliari, Francesco Alemanno, Adriano Barra, Alberto Fachechi

Recently a daily routine for associative neural networks has been proposed: the network learns in a Hebbian fashion during the awake state (thus behaving as a standard Hopfield model); then, during its sleep state, it optimizes information storage by consolidating pure patterns and removing spurious ones: this forces the synaptic matrix to collapse to the projector one (ultimately approaching the Kanter-Sompolinsky model).
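The awake/asleep distinction can be illustrated by comparing the Hebbian matrix with the projector matrix it is said to collapse to (a toy sketch, not the paper's code; the load $\alpha = 0.4$ is deliberately chosen above the Hopfield capacity):

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 100, 40                       # load alpha = 0.4, above the Hopfield capacity

xi = rng.choice([-1, 1], size=(P, N))
C = xi @ xi.T / N                    # P x P pattern-correlation matrix

J_awake = xi.T @ xi / N                        # Hebbian ("awake") matrix
J_slept = xi.T @ np.linalg.inv(C) @ xi / N     # projector onto the pattern span

def recall_overlap(J, mu=0, steps=20):
    """Start from pattern mu itself and iterate the zero-temperature dynamics."""
    sigma = xi[mu].copy()
    for _ in range(steps):
        sigma = np.where(J @ sigma >= 0, 1, -1)
    return sigma @ xi[mu] / N

print(recall_overlap(J_awake), recall_overlap(J_slept))
```

With the projector matrix every stored pattern is an exact fixed point even at this high load, whereas the purely Hebbian dynamics drifts away from it.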

no code implementations • 29 Oct 2018 • Alberto Fachechi, Elena Agliari, Adriano Barra

The standard Hopfield model for associative neural networks accounts for biological Hebbian learning and acts as the harmonic oscillator of pattern recognition; however, its maximal storage capacity is $\alpha \sim 0.14$, far from the theoretical bound for symmetric networks, i.e. $\alpha = 1$.

no code implementations • 5 Jan 2018 • Adriano Barra, Matteo Beccaria, Alberto Fachechi

We propose a modification of the cost function of the Hopfield model whose salient features shine in its Taylor expansion and result in more than pairwise interactions with alternate signs, suggesting a unified framework for handling both deep learning and network pruning.

no code implementations • 20 Feb 2017 • Adriano Barra, Giuseppe Genovese, Peter Sollich, Daniele Tantari

Restricted Boltzmann Machines are described by the Gibbs measure of a bipartite spin glass, which in turn corresponds to that of a generalised Hopfield network.
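The stated correspondence can be sketched by integrating out Gaussian hidden units (the notation and the choice of Gaussian priors are illustrative, not taken from the abstract): with visible spins $\sigma_i$ and hidden units $z_\mu$ coupled through weights $\xi_i^\mu$,

```latex
\int \prod_{\mu=1}^{P} \frac{dz_\mu}{\sqrt{2\pi}}\;
\exp\!\Big( -\tfrac{1}{2}\sum_{\mu} z_\mu^2
  + \frac{\beta}{\sqrt{N}} \sum_{\mu,i} z_\mu\, \xi_i^{\mu}\, \sigma_i \Big)
\;=\;
\exp\!\Big( \frac{\beta^{2}}{2N} \sum_{\mu} \Big( \sum_i \xi_i^{\mu}\, \sigma_i \Big)^{2} \Big),
```

which is, up to constants, the Boltzmann factor of a Hopfield network whose stored patterns are the rows of the weight matrix $\xi$.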

no code implementations • 9 Dec 2016 • Adriano Barra, Giuseppe Genovese, Peter Sollich, Daniele Tantari

We study Generalised Restricted Boltzmann Machines with generic priors for units and weights, interpolating between Boolean and Gaussian variables.
