no code implementations • 23 Oct 2024 • Lorenzo Aloisi, Luigi Sigillo, Aurelio Uncini, Danilo Comminiello
In recent years, diffusion models have emerged as a superior alternative to generative adversarial networks (GANs) for high-fidelity image generation, with wide applications in text-to-image generation, image-to-image translation, and super-resolution.
1 code implementation • 13 Sep 2024 • Eleonora Lopez, Aurelio Uncini, Danilo Comminiello
Then, a hypercomplex fusion module learns inter-modal relations among the embeddings of the different modalities.
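The excerpt does not spell out the fusion module, but hypercomplex fusion layers are typically built from parameterized hypercomplex multiplication (PHM) layers. The sketch below is illustrative only — the shapes, names, and the toy algebra are ours, not the paper's:

```python
import numpy as np

def phm_layer(x, A, F):
    """Parameterized hypercomplex multiplication (PHM) layer.

    The weight matrix is built as a sum of Kronecker products,
    W = sum_i A_i kron F_i, where the A_i encode the (learnable)
    algebra rules and the F_i hold the actual filters, cutting the
    parameter count roughly by a factor n versus a real layer.
    """
    W = sum(np.kron(A[i], F[i]) for i in range(A.shape[0]))
    return W @ x

# Toy example with n = 2 (a complex-like, two-component algebra):
rng = np.random.default_rng(0)
n, out_dim, in_dim = 2, 4, 6
A = rng.standard_normal((n, n, n))
F = rng.standard_normal((n, out_dim // n, in_dim // n))
y = phm_layer(rng.standard_normal(in_dim), A, F)
```

Because the A_i are learned, the layer can discover the algebra that best relates the modalities instead of fixing, say, the quaternion rules in advance.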
no code implementations • 11 May 2024 • Danilo Comminiello, Eleonora Grassucci, Danilo P. Mandic, Aurelio Uncini
Hypercomplex algebras have recently been gaining prominence in the field of deep learning owing to the advantages of their division algebras over real vector spaces and their superior results when dealing with multidimensional signals in real-world 3D and 4D paradigms.
no code implementations • 14 Feb 2024 • Christian Marinoni, Riccardo Fosco Gramaccioni, Changan Chen, Aurelio Uncini, Danilo Comminiello
The primary goal of the L3DAS23 Signal Processing Grand Challenge at ICASSP 2023 is to promote and support collaborative research on machine learning for 3D audio signal processing, with a specific emphasis on 3D speech enhancement and 3D Sound Event Localization and Detection in Extended Reality applications.
Audio Signal Processing • Sound Event Localization and Detection
1 code implementation • 16 Oct 2023 • Luigi Sigillo, Eleonora Grassucci, Aurelio Uncini, Danilo Comminiello
The proposed quaternion wavelet network (QUAVE) can be easily integrated with any pre-existing medical image analysis or synthesis task, and it can be employed with real, quaternion, or hypercomplex-valued models, generalizing their adoption to single-channel data.
1 code implementation • 11 Oct 2023 • Matteo Mancanelli, Eleonora Grassucci, Aurelio Uncini, Danilo Comminiello
Neural models based on hypercomplex algebra systems are proliferating across a plethora of applications, ranging from computer vision to natural language processing.
1 code implementation • 3 Aug 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
In this paper, we propose a novel regularization method called Centroids Matching that, inspired by meta-learning approaches, fights catastrophic forgetting (CF) by operating in the feature space produced by the neural network, achieving good results while requiring a small memory footprint.
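As a hedged illustration of the core idea — matching samples to per-class centroids in feature space — the following toy sketch uses pre-computed embeddings and nearest-centroid classification; it omits the paper's actual regularization loss and memory mechanism:

```python
import numpy as np

def class_centroids(embeddings, labels, num_classes):
    # Mean feature vector per class: the stored "centroids".
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def nearest_centroid_predict(embeddings, centroids):
    # Classify each sample by its closest centroid in feature space.
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :],
                           axis=-1)
    return dists.argmin(axis=1)

# Two well-separated toy classes are recovered exactly:
rng = np.random.default_rng(1)
emb = np.concatenate([rng.normal(0.0, 0.1, (10, 3)),
                      rng.normal(5.0, 0.1, (10, 3))])
lab = np.array([0] * 10 + [1] * 10)
cents = class_centroids(emb, lab, 2)
pred = nearest_centroid_predict(emb, cents)
```

Storing only one centroid per class is what keeps the memory footprint small compared to rehearsal methods that store raw samples.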
1 code implementation • 4 May 2022 • Eleonora Grassucci, Luigi Sigillo, Aurelio Uncini, Danilo Comminiello
Image-to-image translation (I2I) aims at transferring the content representation from an input domain to an output one, traversing different target domains.
Ranked #3 on Image-to-Image Translation on CelebA-HQ
1 code implementation • 4 Apr 2022 • Eleonora Grassucci, Gioia Mancini, Christian Brignone, Aurelio Uncini, Danilo Comminiello
We show that our dual quaternion SELD model with temporal convolution blocks (DualQSELD-TCN) outperforms real- and quaternion-valued baselines thanks to our augmented representation of the sound field.
Ranked #1 on Sound Event Localization and Detection on L3DAS21
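The excerpt does not detail the augmented representation, but the algebra behind it can be sketched: a dual quaternion q_r + ε q_d (with ε² = 0) packs eight real channels — e.g., two first-order Ambisonics microphones — into a single entity. This is an illustrative sketch of the product rule, not the paper's model:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of two quaternions given as (w, x, y, z).
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def dual_qmul(p, q):
    # Product of dual quaternions p = pr + eps*pd, q = qr + eps*qd:
    # since eps^2 = 0, the result is pr*qr + eps*(pr*qd + pd*qr).
    pr, pd = p
    qr, qd = q
    return qmul(pr, qr), qmul(pr, qd) + qmul(pd, qr)

# The dual-quaternion identity leaves any element unchanged:
identity = (np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(4))
q = (np.array([0.5, 1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0, 7.0]))
r = dual_qmul(identity, q)
```

Layers built on this product share parameters across the eight channels according to the algebra, rather than treating them as independent real signals.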
1 code implementation • 21 Feb 2022 • Eric Guizzo, Christian Marinoni, Marco Pennese, Xinlei Ren, Xiguang Zheng, Chen Zhang, Bruno Masiero, Aurelio Uncini, Danilo Comminiello
The L3DAS22 Challenge is aimed at encouraging the development of machine learning strategies for 3D speech enhancement and 3D sound localization and detection in office-like environments.
1 code implementation • 11 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.
1 code implementation • 4 Feb 2022 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
Recent research has found that neural networks are vulnerable to several types of adversarial attacks, in which input samples are modified so that the model produces an incorrect prediction, misclassifying the adversarial sample.
1 code implementation • 20 Sep 2021 • Indro Spinelli, Simone Scardapane, Aurelio Uncini
Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms.
2 code implementations • 6 May 2021 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
In this paper, we propose a novel ensembling technique for deep neural networks, which is able to drastically reduce the required memory compared to alternative approaches.
1 code implementation • 29 Apr 2021 • Indro Spinelli, Simone Scardapane, Amir Hussain, Aurelio Uncini
Furthermore, to better evaluate the gains, we propose a new dyadic group definition to measure the bias of a link prediction task when paired with group-based fairness metrics.
no code implementations • 19 Apr 2021 • Danilo Comminiello, Alireza Nezamdoust, Simone Scardapane, Michele Scarpiniti, Amir Hussain, Aurelio Uncini
In order to make this class of functional link adaptive filters (FLAFs) efficient, we propose low-complexity expansions and frequency-domain adaptation of the parameters.
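To illustrate the FLAF idea in time-domain form (the paper's contribution is the low-complexity, frequency-domain variant, which this sketch does not reproduce): the input buffer is passed through a fixed nonlinear functional link expansion, and a linear filter over the expanded features is adapted with LMS. Names and step size below are ours:

```python
import numpy as np

def trig_expansion(x, order=2):
    # Trigonometric functional link expansion of an input buffer:
    # [x, sin(pi*p*x), cos(pi*p*x)] for p = 1..order.
    feats = [x]
    for p in range(1, order + 1):
        feats.append(np.sin(np.pi * p * x))
        feats.append(np.cos(np.pi * p * x))
    return np.concatenate(feats)

def flaf_lms(x_buf, d, w, mu=0.05):
    # One LMS step of a functional link adaptive filter: expand the
    # buffer, filter it, and adapt the weights on the error.
    g = trig_expansion(x_buf)
    y = w @ g
    e = d - y
    return w + mu * e * g, y, e

# Two updates on the same toy sample: the a-priori error shrinks.
x_buf = np.array([0.5, -0.3, 0.8, 0.1])
w = np.zeros(len(trig_expansion(x_buf)))
w, _, e1 = flaf_lms(x_buf, 1.0, w)
w, _, e2 = flaf_lms(x_buf, 1.0, w)
```

The expansion order directly controls complexity, which is why truncating it (and moving adaptation to the frequency domain) pays off.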
1 code implementation • 12 Apr 2021 • Eric Guizzo, Riccardo F. Gramaccioni, Saeid Jamili, Christian Marinoni, Edoardo Massaro, Claudia Medaglia, Giuseppe Nachira, Leonardo Nucciarelli, Ludovica Paglialunga, Marco Pennese, Sveva Pepe, Enrico Rocchi, Aurelio Uncini, Danilo Comminiello
The L3DAS21 Challenge is aimed at encouraging and fostering collaborative research on machine learning for 3D audio signal processing, with particular focus on 3D speech enhancement (SE) and 3D sound localization and detection (SELD).
3 code implementations • 22 Oct 2020 • Eleonora Grassucci, Danilo Comminiello, Aurelio Uncini
Deep probabilistic generative models have achieved incredible success in many fields of application.
no code implementations • 24 Jul 2020 • Danilo Comminiello, Michele Scarpiniti, Simone Scardapane, Luis A. Azpicueta-Ruiz, Aurelio Uncini
Nonlinear adaptive filters often exhibit sparse behavior, because not all coefficients are equally useful for modeling a given nonlinearity.
1 code implementation • ICML Workshop LifelongML 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
We show that our method performs favorably with respect to state-of-the-art approaches in the literature, with bounded computational power and memory overheads.
no code implementations • 27 Apr 2020 • Simone Scardapane, Michele Scarpiniti, Enzo Baccarelli, Aurelio Uncini
Deep neural networks are generally designed as a stack of differentiable layers, in which a prediction is obtained only after running the full stack.
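A generic early-exit scheme (not necessarily the authors' specific method) shows how a prediction can be obtained before the full stack runs: auxiliary classifiers are attached after intermediate layers, and inference stops at the first one that is confident enough. All names and thresholds here are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(x, layers, exits, threshold=0.9):
    # Run the stack layer by layer, but return from the first
    # auxiliary classifier whose confidence clears the threshold,
    # skipping the remaining (more expensive) layers.
    h = x
    for layer, exit_head in zip(layers, exits):
        h = np.tanh(layer @ h)
        p = softmax(exit_head @ h)
        if p.max() >= threshold:
            return p, True    # early exit taken
    return p, False           # ran the full stack

# An "easy" input exits at the first classifier:
layers = [np.eye(2), np.eye(2)]
exits = [np.array([[10.0, 0.0], [0.0, 0.0]]), np.eye(2)]
p, early = early_exit_predict(np.array([1.0, 0.0]), layers, exits)
```

The compute saving comes from easy inputs exiting early, while hard inputs still traverse the whole stack.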
4 code implementations • 2 Mar 2020 • Jary Pomponi, Simone Scardapane, Aurelio Uncini
Bayesian Neural Networks (BNNs) are trained to optimize an entire distribution over their weights instead of a single set, having significant advantages in terms of, e.g., interpretability, multi-task learning, and calibration.
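A minimal sketch of "a distribution over weights", assuming a standard mean-field Gaussian posterior with the reparameterization trick (a common BNN setup, not necessarily the paper's exact scheme):

```python
import numpy as np

def sample_weights(mu, rho, rng):
    # Reparameterization trick: w = mu + softplus(rho) * eps,
    # with eps ~ N(0, 1); softplus keeps sigma positive.
    sigma = np.log1p(np.exp(rho))
    return mu + sigma * rng.standard_normal(mu.shape)

def predictive_mean(x, mu, rho, n_samples=100, rng=None):
    # Average a linear model's output over posterior weight samples:
    # prediction uncertainty comes from the spread of these samples.
    if rng is None:
        rng = np.random.default_rng(0)
    outs = [x @ sample_weights(mu, rho, rng) for _ in range(n_samples)]
    return np.mean(outs, axis=0)
```

When the learned sigma collapses to zero, the BNN degenerates to a point-estimate network; the per-weight sigma is what carries the calibration information.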
1 code implementation • 24 Feb 2020 • Indro Spinelli, Simone Scardapane, Aurelio Uncini
Graph convolutional networks (GCNs) are a family of neural network models that perform inference on graph data by interleaving vertex-wise operations and message-passing exchanges across nodes.
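The interleaving of vertex-wise operations and message passing can be sketched as a single standard GCN layer (the common Kipf-Welling form, shown here as background rather than the paper's contribution):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).

    Self-loops (A + I) let each node keep its own features; the
    symmetric degree normalization averages messages over neighborhoods.
    A: (n, n) adjacency, X: (n, f_in) features, W: (f_in, f_out).
    """
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

# On a triangle graph with identity features, every node ends up
# with the uniform average of its closed neighborhood:
A = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
H = gcn_layer(A, np.eye(3), np.eye(3))
```

The X W product is the vertex-wise operation; the normalized adjacency multiplication is the message-passing exchange across nodes.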
1 code implementation • 9 Sep 2019 • Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco, Aurelio Uncini
Continual learning of deep neural networks is a key requirement for scaling them up to more complex application scenarios and for achieving true lifelong learning with these architectures.
no code implementations • 8 Aug 2019 • Antonio Falvo, Danilo Comminiello, Simone Scardapane, Michele Scarpiniti, Aurelio Uncini
In this paper, we present a deep learning method that is able to reconstruct subsampled MR images obtained by reducing the k-space data, while maintaining a high image quality that can be used to observe brain lesions.
no code implementations • 26 Jul 2019 • Riccardo Vecchi, Simone Scardapane, Danilo Comminiello, Aurelio Uncini
To this end, we investigate two extensions of l1 and structured regularization to the quaternion domain.
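One plausible reading of "structured regularization in the quaternion domain" — offered as a hedged sketch, not the paper's exact formulation — is a group-lasso penalty that treats the four real components of each quaternion weight as one group:

```python
import numpy as np

def quaternion_group_penalty(W, lam=0.01):
    """Structured (group-sparse) penalty for a quaternion weight matrix.

    W has shape (4, out, in): one slice per quaternion component
    (1, i, j, k). Each quaternion weight is a group of 4 reals
    penalized by its Euclidean norm, so whole quaternion weights are
    driven to zero together instead of single components.
    """
    group_norms = np.sqrt((W ** 2).sum(axis=0))  # shape (out, in)
    return lam * group_norms.sum()

W = np.zeros((4, 2, 2))
W[:, 0, 0] = [3.0, 4.0, 0.0, 0.0]   # a single quaternion weight, norm 5
penalty = quaternion_group_penalty(W)
```

Zeroing entire quaternion weights (rather than isolated components) is what makes the resulting sparsity usable for pruning quaternion layers.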
no code implementations • 20 Jun 2019 • Indro Spinelli, Simone Scardapane, Michele Scarpiniti, Aurelio Uncini
Recently, data augmentation in the semi-supervised regime, where unlabeled data vastly outnumbers labeled data, has received considerable attention.
1 code implementation • 6 May 2019 • Indro Spinelli, Simone Scardapane, Aurelio Uncini
We also explore a few extensions to the basic architecture involving the use of residual connections between layers, and of global statistics computed from the data set to improve the accuracy.
no code implementations • 28 Mar 2019 • Michele Cirillo, Simone Scardapane, Steven Van Vaerenbergh, Aurelio Uncini
In this brief we investigate the generalization properties of a recently-proposed class of non-parametric activation functions, the kernel activation functions (KAFs).
no code implementations • 6 Feb 2019 • Simone Scardapane, Steven Van Vaerenbergh, Danilo Comminiello, Aurelio Uncini
Complex-valued neural networks (CVNNs) have been shown to be powerful nonlinear approximators when the input data can be properly modeled in the complex domain.
no code implementations • 17 Dec 2018 • Danilo Comminiello, Marco Lella, Simone Scardapane, Aurelio Uncini
Learning from data in the quaternion domain enables us to exploit internal dependencies of 4D signals and to treat them as a single entity.
no code implementations • 11 Jul 2018 • Simone Scardapane, Steven Van Vaerenbergh, Danilo Comminiello, Simone Totaro, Aurelio Uncini
Gated recurrent neural networks have achieved remarkable results in the analysis of sequential data.
no code implementations • 26 Feb 2018 • Simone Scardapane, Steven Van Vaerenbergh, Danilo Comminiello, Aurelio Uncini
Graph neural networks (GNNs) are a class of neural networks that allow efficient inference on data associated with a graph structure, such as citation networks or knowledge graphs.
2 code implementations • 22 Feb 2018 • Simone Scardapane, Steven Van Vaerenbergh, Amir Hussain, Aurelio Uncini
Complex-valued neural networks (CVNNs) are a powerful modeling tool for domains where data can be naturally interpreted in terms of complex numbers.
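As a minimal sketch of a CVNN building block (illustrative; the paper's architecture is not reproduced here), a complex affine layer can be paired with a split-type activation, a common workaround for the lack of bounded, complex-differentiable nonlinearities:

```python
import numpy as np

def complex_linear(x, W, b):
    # Complex affine map followed by a split-type activation: tanh is
    # applied separately to the real and imaginary parts, since by
    # Liouville's theorem no bounded entire function can serve as a
    # fully complex activation.
    z = W @ x + b
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

x = np.array([1.0 + 1.0j, 0.5 - 0.5j])
W = np.array([[1.0 + 0.0j, 1.0j], [0.5j, 1.0 + 1.0j]])
y = complex_linear(x, W, np.zeros(2, dtype=complex))
```

Keeping the multiplication complex-valued (rather than stacking real and imaginary parts into one real vector) preserves phase relationships in the data.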
2 code implementations • 13 Jul 2017 • Simone Scardapane, Steven Van Vaerenbergh, Simone Totaro, Aurelio Uncini
Neural networks are generally built by interleaving (adaptable) linear layers with (fixed) nonlinear activation functions.
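This entry is about making the activation functions themselves adaptable. A kernel activation function (KAF) models the per-neuron nonlinearity as a kernel expansion over a fixed dictionary, with only the mixing coefficients learned; the dictionary and bandwidth below are illustrative choices:

```python
import numpy as np

def kaf(s, alpha, dictionary, gamma=1.0):
    # Kernel activation function: a nonlinearity expressed as a kernel
    # expansion f(s) = sum_i alpha_i * exp(-gamma * (s - d_i)^2) over a
    # fixed dictionary; only the mixing coefficients alpha are learned.
    K = np.exp(-gamma * (s[..., None] - dictionary) ** 2)
    return K @ alpha

# With one active coefficient, the KAF is a localized bump around d_4 = 0:
d = np.linspace(-2.0, 2.0, 9)
alpha = np.zeros(9)
alpha[4] = 1.0
y = kaf(np.array([0.0, 2.0]), alpha, d)
```

Because f is linear in alpha, the activation stays cheap to train while being far more expressive than a fixed ReLU or tanh.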
1 code implementation • 2 Jul 2016 • Simone Scardapane, Danilo Comminiello, Amir Hussain, Aurelio Uncini
In this paper, we consider the joint task of simultaneously optimizing (i) the weights of a deep neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection).
no code implementations • 25 May 2016 • Michele Scarpiniti, Simone Scardapane, Danilo Comminiello, Raffaele Parisi, Aurelio Uncini
In this paper, we derive a modified InfoMax algorithm for the solution of Blind Signal Separation (BSS) problems by using advanced stochastic methods.
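For context, the classic natural-gradient InfoMax update that such work builds on can be sketched as follows; this is the standard Bell-Sejnowski/Amari baseline, not the modified stochastic algorithm the paper derives:

```python
import numpy as np

def infomax_step(W, x, mu=0.01):
    # One natural-gradient InfoMax update for blind source separation:
    # W <- W + mu * (I - g(u) u^T) W, with u = W x and g = tanh as the
    # score function for super-Gaussian sources.
    u = W @ x
    I = np.eye(W.shape[0])
    return W + mu * (I - np.outer(np.tanh(u), u)) @ W
```

Iterating this update over mixed observations x drives W toward an unmixing matrix whose outputs u are statistically independent.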
no code implementations • 18 May 2016 • Simone Scardapane, Michele Scarpiniti, Danilo Comminiello, Aurelio Uncini
Neural networks require a careful design in order to perform properly on a given task.