This paper presents a novel GP linear multiple kernel (LMK) and a generic sparsity-aware distributed learning framework to optimize the hyper-parameters.
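As a rough illustration of what a linear multiple kernel is, the sketch below forms a nonnegative weighted sum of fixed basis kernels; a sparsity-aware learner would then drive most of the weights to zero. The RBF basis family, the function names, and the parameter values here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rbf(x1, x2, ell):
    """Gaussian (RBF) basis kernel on 1-D inputs with lengthscale `ell`."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

def linear_multiple_kernel(x1, x2, weights, lengthscales):
    """Linear multiple kernel: a nonnegative weighted sum of fixed basis
    kernels. Sparsity-aware hyper-parameter learning would select a few
    active components by zeroing most entries of `weights`."""
    return sum(w * rbf(x1, x2, ell)
               for w, ell in zip(weights, lengthscales))

x = np.linspace(0.0, 1.0, 5)
# Two active components, one pruned to zero (illustrative weights).
K = linear_multiple_kernel(x, x, [0.5, 0.0, 0.5], [0.1, 1.0, 10.0])
```

Because each basis kernel is positive semi-definite and the weights are nonnegative, the combined kernel remains a valid covariance function.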
The Gaussian process state-space model (GPSSM) has attracted extensive attention for modeling complex nonlinear dynamical systems.
In this work, we propose a Multi-Window Masked Autoencoder (MW-MAE) fitted with a novel Multi-Window Multi-Head Attention (MW-MHA) module that facilitates the modelling of local-global interactions in every decoder transformer block through attention heads of several distinct local and global windows.
However, Bayesian methods are making a comeback, shedding new light on the design of deep neural networks, establishing firm links with Bayesian models, and inspiring new paths for unsupervised learning, such as Bayesian tensor decomposition.
This work explores the potency of stochastic competition-based activations, namely Stochastic Local Winner-Takes-All (LWTA), against powerful (gradient-based) white-box and black-box adversarial attacks; we especially focus on Adversarial Training settings.
Speech is the most common way humans express their feelings, and sentiment analysis is the use of tools such as natural language processing and computational algorithms to identify the polarity of these feelings.
The main operating principle of the introduced units lies in stochastic arguments: the network performs posterior sampling over competing units to select the winner.
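A minimal sketch of this sampling-based competition is given below: pre-activations are grouped into blocks of competing linear units, one winner per block is sampled from a softmax over the competitors, and the losers are zeroed. The block size, function names, and softmax-based sampling distribution are illustrative assumptions, not the paper's exact posterior.

```python
import numpy as np

def stochastic_lwta(x, rng, block_size=2):
    """Stochastic Local Winner-Takes-All activation (illustrative sketch).

    x: (batch, units) pre-activations; units are grouped into blocks of
    `block_size` competing linear units. Within each block one winner is
    sampled from a softmax over the competitors; the losers are zeroed."""
    batch, units = x.shape
    assert units % block_size == 0
    blocks = x.reshape(batch, units // block_size, block_size)
    # Numerically stable softmax over the competing units in each block.
    e = np.exp(blocks - blocks.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    # Sample one winner per block by inverse-CDF categorical sampling.
    cum = probs.cumsum(axis=-1)
    u = rng.random((batch, units // block_size, 1))
    winners = (u < cum).argmax(axis=-1)
    # Keep only the winner's linear output; zero out the losers.
    mask = np.zeros_like(blocks)
    np.put_along_axis(mask, winners[..., None], 1.0, axis=-1)
    return (blocks * mask).reshape(batch, units)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
y = stochastic_lwta(x, rng)
```

The stochasticity means repeated forward passes can select different winners, which is the property exploited against gradient-based adversarial attacks.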
However, the optimal determination of a tensor rank is known to be a non-deterministic polynomial-time hard (NP-hard) task.
The fusion methods reported so far ignore the underlying multi-way nature of the data in at least one of the modalities and/or rely on very strong assumptions about the relation of the two datasets.
In this overview paper, data-driven, learning-based cooperative localization and location data processing are considered, in line with emerging machine learning and big data methods.
Hidden Markov Models (HMMs) comprise a powerful generative approach for modeling sequential data and time-series in general.
Gaussian processes (GPs) for machine learning have been studied systematically over the past two decades and are by now widely used in a number of diverse applications.
Then, we propose a novel SM kernel with a dependency structure (SMD) by using cross-convolution between the SM components.
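For context, the spectral mixture (SM) kernel that the SMD construction builds on models the spectral density as a Gaussian mixture; in one dimension it takes the Wilson–Adams form (symbols here follow the common SM parameterization, not necessarily the paper's notation):

```latex
k_{\mathrm{SM}}(\tau) \;=\; \sum_{q=1}^{Q} w_q \,
  \exp\!\left(-2\pi^2 \tau^2 \sigma_q^2\right)
  \cos\!\left(2\pi \tau \mu_q\right),
\qquad \tau = x - x',
```

where $w_q$, $\mu_q$, and $\sigma_q^2$ are the weight, mean frequency, and variance of the $q$-th spectral component. The standard SM kernel treats these components as independent; the SMD kernel introduces dependencies between them via cross-convolution.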
To this end, we revisit deep networks that comprise competing linear units, as opposed to nonlinear units that do not entail any form of (local) competition.
In this paper, we propose a novel unsupervised learning method to learn the brain dynamics using a deep learning architecture named residual D-net.
The new method allows the incorporation of a priori knowledge associated with both the experimental design and available brain atlases.
To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented.
Extracting information from functional magnetic resonance imaging (fMRI) data has been a major area of research for more than two decades.
The growing use of neuroimaging technologies generates a massive amount of biomedical data that exhibit high dimensionality.
Finally, the proposed robust estimation framework is applied to the task of image denoising, and its enhanced performance in the presence of outliers is demonstrated.
The method exploits the notion of widely linear estimation to model the input-output relation for complex-valued data and considers two cases: (a) the complex data are split into their real and imaginary parts and a typical real kernel is employed to map them to a complexified feature space, and (b) a pure complex kernel is used to map the data directly to the induced complex feature space.
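The two cases can be sketched as follows: case (a) treats each complex input as a point in R^2 and evaluates an ordinary real Gaussian kernel there, while case (b) evaluates a complex Gaussian kernel directly on the complex inputs. The function names, the `gamma` parameter, and the specific complex-kernel form are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def real_kernel_on_complex(z1, z2, gamma=1.0):
    """Case (a) sketch: map complex inputs to R^2 via their real and
    imaginary parts, then apply a standard real Gaussian (RBF) kernel."""
    a = np.column_stack([z1.real, z1.imag])
    b = np.column_stack([z2.real, z2.imag])
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def pure_complex_gaussian(z1, z2, gamma=1.0):
    """Case (b) sketch: a complex Gaussian kernel evaluated directly on
    the complex inputs, k(z, w) = exp(-gamma * (z - conj(w))^2),
    which maps the data into an induced complex feature space."""
    diff = z1[:, None] - np.conj(z2)[None, :]
    return np.exp(-gamma * diff ** 2)

z = np.array([1.0 + 2.0j, 0.5 - 1.0j, 2.0 + 0.0j])
K_real = real_kernel_on_complex(z, z)   # real, symmetric Gram matrix
K_cplx = pure_complex_gaussian(z, z)    # complex-valued Gram matrix
```

Note that in case (b) the kernel values are genuinely complex, which is what allows the widely linear model to act on both the signal and its conjugate.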