no code implementations • 30 Jul 2023 • Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato
Recently emerged technologies based on Deep Learning (DL) have achieved outstanding results on a variety of tasks in the field of Artificial Intelligence (AI).
no code implementations • 30 Jul 2023 • Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato
For a long time, the fields of biology and neuroscience have been a great source of inspiration for computer scientists in the development of Artificial Intelligence (AI) technologies.
no code implementations • 7 Jul 2022 • Gabriele Lagani, Claudio Gennaro, Hannes Fassold, Giuseppe Amato
Learning algorithms for Deep Neural Networks are typically based on supervised end-to-end Stochastic Gradient Descent (SGD) training with error backpropagation (backprop).
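To make the standard setup referred to above concrete, here is a minimal sketch of a tiny two-layer network trained end-to-end with gradient descent and error backpropagation. The data, layer sizes, and learning rate are arbitrary assumptions for illustration (full-batch updates are used for brevity rather than true stochastic mini-batches).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))          # 100 samples, 8 features (toy data)
y = (X.sum(axis=1) > 0).astype(float)  # simple binary target

W1 = rng.normal(scale=0.5, size=(8, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
lr = 0.1
for epoch in range(200):
    # forward pass
    h = np.tanh(X @ W1)
    out = 1 / (1 + np.exp(-(h @ W2)))  # sigmoid output
    # backward pass: backprop of the binary cross-entropy gradient
    d_out = (out - y[:, None]) / len(X)
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # tanh derivative
    dW1 = X.T @ d_h
    # gradient descent step (full-batch here for brevity)
    W1 -= lr * dW1
    W2 -= lr * dW2

acc = ((out > 0.5).ravel() == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

The error signal is propagated backward through the hidden layer (`d_h`), which is precisely the global, non-local credit assignment that biologically inspired alternatives such as Hebbian learning try to avoid.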
no code implementations • 18 May 2022 • Gabriele Lagani, Davide Bacciu, Claudio Gallicchio, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato
Features extracted from Deep Neural Networks (DNNs) have proven to be very effective in the context of Content Based Image Retrieval (CBIR).
no code implementations • 16 Mar 2021 • Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato
We propose to address the issue of sample efficiency, in Deep Convolutional Neural Networks (DCNN), with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning, and the last fully connected layer (the classification layer) is trained using Stochastic Gradient Descent (SGD).
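The hybrid strategy described above can be sketched as follows: a hidden layer is pre-trained with an unsupervised Hebbian rule (Oja's rule is used here as one concrete, weight-bounding variant), and only the final softmax classification layer is trained with gradient descent. All names, data, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 20 features, 3 classes.
X = rng.normal(size=(200, 20))
y = rng.integers(0, 3, size=200)

# --- Unsupervised Hebbian pre-training of the hidden layer (Oja's rule) ---
n_hidden = 10
W = rng.normal(scale=0.1, size=(20, n_hidden))
eta_hebb = 0.01
for epoch in range(20):
    for x in X:
        h = x @ W
        # Oja's rule: dW = eta * (x h^T - W diag(h^2)); the decay term
        # keeps the weight norms bounded without any supervision.
        W += eta_hebb * (np.outer(x, h) - W * (h ** 2))

H = X @ W  # fixed features from the Hebbian-pre-trained layer

# --- Supervised training of the final (softmax) classification layer ---
V = np.zeros((n_hidden, 3))
b = np.zeros(3)
eta_sgd = 0.1
onehot = np.eye(3)[y]
for epoch in range(100):
    logits = H @ V + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)  # cross-entropy gradient (full-batch)
    V -= eta_sgd * (H.T @ grad)
    b -= eta_sgd * grad.sum(axis=0)

acc = (np.argmax(H @ V + b, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Note that no error signal is ever propagated back into `W`: the internal layer learns purely from local pre- and post-synaptic activity, which is what makes the scheme attractive when labeled samples are scarce.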
1 code implementation • 22 Dec 2020 • Gabriele Lagani, Giuseppe Amato, Fabrizio Falchi, Claudio Gennaro
In particular, it has been shown that Hebbian learning can be used to train either the lower or the higher layers of a neural network.
1 code implementation • 18 Dec 2020 • Gabriele Lagani, Raffaele Mazziotti, Fabrizio Falchi, Claudio Gennaro, Guido Marco Cicchini, Tommaso Pizzorusso, Federico Cremisi, Giuseppe Amato
Previous work has shown that it is possible to train neuronal cultures on Multi-Electrode Arrays (MEAs) to recognize very simple patterns.
Cultural Vocal Bursts Intensity Prediction • Handwritten Digit Recognition