Search Results for author: Gabriele Lagani

Found 7 papers, 2 papers with code

Synaptic Plasticity Models and Bio-Inspired Unsupervised Deep Learning: A Survey

no code implementations • 30 Jul 2023 • Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato

Recently emerged technologies based on Deep Learning (DL) have achieved outstanding results on a variety of tasks in the field of Artificial Intelligence (AI).

Spiking Neural Networks and Bio-Inspired Supervised Deep Learning: A Survey

no code implementations • 30 Jul 2023 • Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato

For a long time, the fields of biology and neuroscience have been a great source of inspiration for computer scientists working toward the development of Artificial Intelligence (AI) technologies.

FastHebb: Scaling Hebbian Training of Deep Neural Networks to ImageNet Level

no code implementations • 7 Jul 2022 • Gabriele Lagani, Claudio Gennaro, Hannes Fassold, Giuseppe Amato

Learning algorithms for Deep Neural Networks are typically based on supervised end-to-end Stochastic Gradient Descent (SGD) training with error backpropagation (backprop).

Deep Features for CBIR with Scarce Data using Hebbian Learning

no code implementations • 18 May 2022 • Gabriele Lagani, Davide Bacciu, Claudio Gallicchio, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato

Features extracted from Deep Neural Networks (DNNs) have proven to be very effective in the context of Content Based Image Retrieval (CBIR).

Content-Based Image Retrieval, Retrieval +1
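
As a rough illustration of the idea behind this entry (deep features reused for retrieval), here is a minimal PyTorch sketch, not the paper's pipeline: it assumes a pretrained torchvision ResNet-18 as the feature extractor and ranks a gallery by cosine similarity; all names, shapes, and parameters are illustrative.

```python
# Minimal sketch of DNN-feature CBIR (illustrative, not the paper's method):
# embed images with a pretrained CNN backbone, then rank a gallery by cosine similarity.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # keep the 512-d pooled features
backbone.eval()

@torch.no_grad()
def embed(images):                          # images: (N, 3, 224, 224), already normalized
    return F.normalize(backbone(images), dim=1)

def retrieve(query_feats, gallery_feats, k=5):
    scores = query_feats @ gallery_feats.T  # cosine similarity (features are L2-normalized)
    return scores.topk(k, dim=1).indices    # indices of the k most similar gallery images
```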

Hebbian Semi-Supervised Learning in a Sample Efficiency Setting

no code implementations • 16 Mar 2021 • Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato

We propose to address the issue of sample efficiency in Deep Convolutional Neural Networks (DCNNs) with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning, and the last fully connected layer (the classification layer) is trained using Stochastic Gradient Descent (SGD).

Object Recognition
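
The hybrid strategy described in the abstract above can be sketched in a few lines of PyTorch. This is a hedged toy example, not the authors' implementation: it uses random data, an Oja-style Hebbian update as a stand-in for the paper's specific rule, and then fits only the final classifier with SGD on top of the frozen Hebbian features.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: stand-ins for real images and labels (1000 samples, 20 features, 3 classes).
X = torch.randn(1000, 20)
labels = torch.randint(0, 3, (1000,))

# 1) Unsupervised Hebbian pre-training of a hidden layer (Oja-style rule as a stand-in).
W = torch.randn(16, 20) * 0.1
lr = 1e-2
for epoch in range(10):
    for xb in X.split(64):
        y = xb @ W.T                                   # (batch, 16) hidden activations
        dW = y.T @ xb - (y ** 2).sum(0)[:, None] * W   # Oja update, summed over the batch
        W += lr * dW / xb.shape[0]

# 2) Supervised SGD training of the final classification layer only.
clf = nn.Linear(16, 3)
opt = torch.optim.SGD(clf.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
hidden = (X @ W.T).relu().detach()                     # frozen Hebbian features
for epoch in range(20):
    for xb, yb in zip(hidden.split(64), labels.split(64)):
        opt.zero_grad()
        loss_fn(clf(xb), yb).backward()
        opt.step()
```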

Training Convolutional Neural Networks With Hebbian Principal Component Analysis

1 code implementation • 22 Dec 2020 • Gabriele Lagani, Giuseppe Amato, Fabrizio Falchi, Claudio Gennaro

In particular, it has been shown that Hebbian learning can be used for training the lower or higher layers of a neural network.

Transfer Learning
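
For context, "Hebbian Principal Component Analysis" refers to extracting principal subspaces with a local Hebbian rule. Below is a minimal sketch of Sanger's rule (the generalized Hebbian algorithm) on centered toy data; it illustrates the underlying technique and is not the paper's released code, so data, sizes, and learning rates are assumptions.

```python
# Hebbian PCA via Sanger's rule (generalized Hebbian algorithm) on toy data.
import torch

torch.manual_seed(0)
X = torch.randn(2000, 10) @ torch.randn(10, 10)    # correlated toy inputs
X = X - X.mean(0)                                  # Hebbian PCA assumes centered inputs

W = torch.randn(3, 10) * 0.1                       # 3 principal directions to extract
lr = 1e-3
for epoch in range(50):
    for xb in X.split(100):
        y = xb @ W.T                               # (batch, 3) projections
        # Sanger update: dW = y^T x - LT(y^T y) W, with LT = lower triangle incl. diagonal
        dW = y.T @ xb - torch.tril(y.T @ y) @ W
        W += lr * dW / xb.shape[0]

# Rows of W approach (approximately) the leading principal directions of X.
```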
