Search Results for author: Francesco Tonin

Found 10 papers, 6 papers with code

Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation

1 code implementation • NeurIPS 2023 • Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens

To the best of our knowledge, this is the first work that provides a primal-dual representation for the asymmetric kernel in self-attention and successfully applies it to modeling and optimization.

D4RL • Long-range modeling • +2
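A rough sketch for context only (not the authors' Primal-Attention implementation; the projections and dimensions below are made up): because queries and keys come from different projections, the attention similarity matrix is generally asymmetric, which is why an SVD-style decomposition applies rather than a symmetric eigendecomposition.

```python
# Minimal sketch: the attention "kernel" matrix built from separate query/key
# projections is asymmetric, so it admits an SVD rather than an eigendecomposition.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                      # sequence length, embedding dimension (illustrative)
X = rng.normal(size=(n, d))      # token embeddings
W_q = rng.normal(size=(d, d))    # query projection (made up)
W_k = rng.normal(size=(d, d))    # key projection (made up)

Q, K = X @ W_q, X @ W_k
A = Q @ K.T / np.sqrt(d)         # attention scores: A[i, j] = kappa(x_i, x_j)

print("symmetric:", np.allclose(A, A.T))   # False: the kernel is asymmetric
U, s, Vt = np.linalg.svd(A)                # SVD handles the asymmetry directly
A2 = (U[:, :2] * s[:2]) @ Vt[:2]           # best rank-2 approximation
print("rank-2 reconstruction error:", np.linalg.norm(A - A2))
```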

Tensor-based Multi-view Spectral Clustering via Shared Latent Space

1 code implementation • 23 Jul 2022 • Qinghua Tao, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

In our method, the dual variables, playing the role of hidden features, are shared by all views to construct a common latent space, coupling the views by learning projections from view-specific spaces.

Clustering
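For background only, a standard single-view spectral clustering sketch is given below; it is not the authors' tensor-based multi-view method and does not model the shared latent space described above.

```python
# Plain single-view spectral clustering (background, not the paper's method):
# affinity matrix -> graph Laplacian -> bottom eigenvectors -> k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, size=(30, 2)),
               rng.normal(3, 0.3, size=(30, 2))])   # two toy clusters

W = rbf_kernel(X, gamma=1.0)                 # affinity matrix
D = np.diag(W.sum(axis=1))
L = D - W                                    # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
H = eigvecs[:, :2]                           # embedding from the 2 smallest eigenvectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(H)
print(labels)
```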

Deep Kernel Principal Component Analysis for Multi-level Feature Learning

1 code implementation • 22 Feb 2023 • Francesco Tonin, Qinghua Tao, Panagiotis Patrinos, Johan A. K. Suykens

Principal Component Analysis (PCA) and its nonlinear extension Kernel PCA (KPCA) are widely used across science and industry for data analysis and dimensionality reduction.

Dimensionality Reduction
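As a reference point, the sketch below shows classical single-level kernel PCA, the building block the deep variant stacks into multiple levels; the paper's deep architecture itself is not reproduced here.

```python
# Classical (single-level) kernel PCA: center the kernel matrix, eigendecompose,
# and project onto the top nonlinear components. Kernel choice and sizes are illustrative.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))

K = rbf_kernel(X, gamma=0.5)
n = K.shape[0]
one = np.ones((n, n)) / n
Kc = K - one @ K - K @ one + one @ K @ one      # double-center the kernel matrix

eigvals, eigvecs = np.linalg.eigh(Kc)
idx = np.argsort(eigvals)[::-1][:2]              # top 2 components
alphas = eigvecs[:, idx] / np.sqrt(eigvals[idx]) # normalize the dual coefficients
Z = Kc @ alphas                                  # nonlinear principal components
print(Z.shape)                                   # (50, 2)
```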

Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast Algorithms

1 code implementation • 9 Jun 2023 • Francesco Tonin, Alex Lambert, Panagiotis Patrinos, Johan A. K. Suykens

The goal of this paper is to revisit Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions.

Nonlinear SVD with Asymmetric Kernels: feature learning and asymmetric Nyström method

no code implementations • 12 Jun 2023 • Qinghua Tao, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

We describe a nonlinear extension of the matrix Singular Value Decomposition through asymmetric kernels, namely KSVD.
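A minimal numerical sketch of the underlying idea, an SVD of an asymmetric kernel matrix, is shown below; the feature maps are made up for illustration, and this is not the authors' KSVD formulation or their asymmetric Nyström method.

```python
# Two different (made-up) feature maps induce an asymmetric kernel:
# K[i, j] = <phi(x_i), psi(x_j)> is generally != K[j, i], so we use an SVD.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
A = rng.normal(size=(3, 5))          # parameters of the "row" feature map (assumed)
B = rng.normal(size=(3, 5))          # parameters of the "column" feature map (assumed)

Phi = np.tanh(X @ A)                 # phi(x)
Psi = np.tanh(X @ B)                 # psi(x)
K = Phi @ Psi.T                      # asymmetric kernel matrix

print("symmetric:", np.allclose(K, K.T))         # False
U, s, Vt = np.linalg.svd(K, full_matrices=False)
r = 3
K_r = (U[:, :r] * s[:r]) @ Vt[:r]                # best rank-r approximation
print("relative error:", np.linalg.norm(K - K_r) / np.linalg.norm(K))
```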

Combining Primal and Dual Representations in Deep Restricted Kernel Machines Classifiers

no code implementations • 12 Jun 2023 • Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

In the context of deep learning with kernel machines, the deep Restricted Kernel Machine (DRKM) framework allows multiple levels of kernel PCA (KPCA) and Least-Squares Support Vector Machines (LSSVM) to be combined into a deep architecture using visible and hidden units.

Classification
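For orientation, the sketch below is a minimal least-squares SVM (LS-SVM) classifier in its usual dual form, one of the two building blocks named above; it is not the DRKM architecture itself, and the data and hyperparameters are illustrative.

```python
# LS-SVM classifier: the dual reduces to one linear system
# [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1].
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, size=(20, 2)),
               rng.normal(+1, 0.5, size=(20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
gamma_reg, n = 10.0, len(y)

K = rbf_kernel(X, gamma=0.5)
Omega = (y[:, None] * y[None, :]) * K
system = np.block([[np.zeros((1, 1)), y[None, :]],
                   [y[:, None], Omega + np.eye(n) / gamma_reg]])
sol = np.linalg.solve(system, np.r_[0.0, np.ones(n)])
b, alpha = sol[0], sol[1:]

pred = np.sign(K @ (alpha * y) + b)              # decision values on the training points
print("training accuracy:", (pred == y).mean())
```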

Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes

no code implementations • 2 Feb 2024 • Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens

In this work, we propose Kernel-Eigen Pair Sparse Variational Gaussian Processes (KEP-SVGP) for building uncertainty-aware self-attention, where the asymmetry of attention kernels is tackled by Kernel SVD (KSVD) and reduced complexity is achieved.

Gaussian Processes • Variational Inference
