Search Results for author: Johan A. K. Suykens

Found 70 papers, 21 papers with code

Boosting Co-teaching with Compression Regularization for Label Noise

1 code implementation • 28 Apr 2021 • Yingyi Chen, Xi Shen, Shell Xu Hu, Johan A. K. Suykens

On Clothing1M, our approach obtains 74.9% accuracy, which is slightly better than that of DivideMix.

Ranked #12 on Image Classification on Clothing1M (using extra training data)

Data Compression · Learning with noisy labels · +1

Compressing Features for Learning with Noisy Labels

1 code implementation • 27 Jun 2022 • Yingyi Chen, Shell Xu Hu, Xi Shen, Chunrong Ai, Johan A. K. Suykens

This decomposition provides three insights: (i) it shows that over-fitting is indeed an issue for learning with noisy labels; (ii) through an information bottleneck formulation, it explains why the proposed feature compression helps in combating label noise; (iii) it gives explanations on the performance boost brought by incorporating compression regularization into Co-teaching.

Feature Compression · Feature Importance · +2

Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer

1 code implementation • 25 Jul 2022 • Yingyi Chen, Xi Shen, Yahui Liu, Qinghua Tao, Johan A. K. Suykens

In this paper, we explore solving jigsaw puzzles as a self-supervised auxiliary loss in ViT for image classification, named Jigsaw-ViT.

Classification · Domain Generalization · +2
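
To make the auxiliary-loss idea concrete, here is a minimal PyTorch-style sketch (not the authors' code) of a Jigsaw-ViT-style training objective: a standard classification loss plus a patch-position prediction loss on shuffled patches. The tensor shapes and the weight `eta` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def jigsaw_vit_loss(cls_logits, labels, pos_logits, perm, eta=0.1):
    # cls_logits: (B, num_classes) from the class token.
    # pos_logits: (B, P, P) predicted original position of each shuffled patch.
    # perm: (P,) permutation applied to the P patches, i.e. the jigsaw targets.
    loss_cls = F.cross_entropy(cls_logits, labels)
    targets = perm.unsqueeze(0).expand(pos_logits.size(0), -1)  # (B, P)
    loss_jigsaw = F.cross_entropy(
        pos_logits.reshape(-1, pos_logits.size(-1)), targets.reshape(-1)
    )
    return loss_cls + eta * loss_jigsaw

# Toy usage with random tensors (B=2 images, P=4 patches, 10 classes):
B, P, C = 2, 4, 10
loss = jigsaw_vit_loss(torch.randn(B, C), torch.randint(0, C, (B,)),
                       torch.randn(B, P, P), torch.randperm(P))
```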

A Theoretical Framework for Target Propagation

2 code implementations • NeurIPS 2020 • Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin F. Grewe

Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization.

A Robust Ensemble Approach to Learn From Positive and Unlabeled Data Using SVM Base Models

1 code implementation • 13 Feb 2014 • Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor

The included benchmark comprises three settings with increasing label noise: (i) fully supervised, (ii) PU learning and (iii) PU learning with false positives.

Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation

1 code implementation • NeurIPS 2023 • Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens

To the best of our knowledge, this is the first work that provides a primal-dual representation for the asymmetric kernel in self-attention and successfully applies it to modeling and optimization.

D4RL · Long-range modeling · +2

Fast Prediction with SVM Models Containing RBF Kernels

1 code implementation • 4 Mar 2014 • Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor

We present an approximation scheme for support vector machine models that use an RBF kernel.
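
As background for what is being approximated, here is a minimal NumPy sketch of the exact decision function of an RBF-kernel SVM; prediction cost scales with the number of support vectors, which is what an approximation scheme aims to cut. Variable names are illustrative.

```python
import numpy as np

def rbf_svm_decision(X, support_vectors, dual_coefs, bias, gamma):
    # f(x) = sum_i alpha_i * exp(-gamma * ||x - sv_i||^2) + b
    sq_dists = ((X[:, None, :] - support_vectors[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists) @ dual_coefs + bias
```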

Tensor-based Multi-view Spectral Clustering via Shared Latent Space

1 code implementation • 23 Jul 2022 • Qinghua Tao, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

In our method, the dual variables, playing the role of hidden features, are shared by all views to construct a common latent space, coupling the views by learning projections from view-specific spaces.

Clustering

Fast and Scalable Lasso via Stochastic Frank-Wolfe Methods with a Convergence Guarantee

1 code implementation • 24 Oct 2015 • Emanuele Frandi, Ricardo Nanculef, Stefano Lodi, Claudio Sartori, Johan A. K. Suykens

Frank-Wolfe (FW) algorithms have been often proposed over the last few years as efficient solvers for a variety of optimization problems arising in the field of Machine Learning.
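
As a reference point for the stochastic variants studied in the paper, here is a minimal NumPy sketch of the classical deterministic Frank-Wolfe iteration for l1-constrained least squares (the constrained form of the Lasso); the radius and iteration count are illustrative.

```python
import numpy as np

def frank_wolfe_lasso(A, y, radius, n_iters=100):
    # Minimize ||A x - y||^2 subject to ||x||_1 <= radius.
    x = np.zeros(A.shape[1])
    for t in range(n_iters):
        grad = 2 * A.T @ (A @ x - y)
        # Linear minimization oracle over the l1 ball: a signed vertex.
        i = np.argmax(np.abs(grad))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(grad[i])
        gamma = 2.0 / (t + 2)  # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x
```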

Parallelized Tensor Train Learning of Polynomial Classifiers

1 code implementation • 20 Dec 2016 • Zhongming Chen, Kim Batselier, Johan A. K. Suykens, Ngai Wong

In pattern classification, polynomial classifiers are well-studied methods as they are capable of generating complex decision surfaces.

General Classification

Wasserstein Exponential Kernels

1 code implementation • 5 Feb 2020 • Henri De Plaen, Michaël Fanuel, Johan A. K. Suykens

In the context of kernel methods, the similarity between data points is encoded by the kernel function, which is often defined in terms of the Euclidean distance, a common example being the squared exponential kernel.
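
For concreteness, a minimal NumPy sketch of the squared exponential kernel mentioned above; the paper's proposal amounts to replacing the Euclidean distance in this formula with a Wasserstein distance between data points.

```python
import numpy as np

def squared_exponential_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
```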

Outlier detection in non-elliptical data by kernel MRCD

1 code implementation • 5 Aug 2020 • Joachim Schreurs, Iwein Vranckx, Mia Hubert, Johan A. K. Suykens, Peter J. Rousseeuw

The minimum regularized covariance determinant method (MRCD) is a robust estimator for multivariate location and scatter, which detects outliers by fitting a robust covariance matrix to the data.

Outlier Detection

Deep Kernel Principal Component Analysis for Multi-level Feature Learning

1 code implementation • 22 Feb 2023 • Francesco Tonin, Qinghua Tao, Panagiotis Patrinos, Johan A. K. Suykens

Principal Component Analysis (PCA) and its nonlinear extension Kernel PCA (KPCA) are widely used across science and industry for data analysis and dimensionality reduction.

Dimensionality Reduction

Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast Algorithms

1 code implementation • 9 Jun 2023 • Francesco Tonin, Alex Lambert, Panagiotis Patrinos, Johan A. K. Suykens

The goal of this paper is to revisit Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions.

Positive semi-definite embedding for dimensionality reduction and out-of-sample extensions

1 code implementation • 20 Nov 2017 • Michaël Fanuel, Antoine Aspeel, Jean-Charles Delvenne, Johan A. K. Suykens

In machine learning or statistics, it is often desirable to reduce the dimensionality of a sample of data points in a high dimensional space $\mathbb{R}^d$.

Dimensionality Reduction

Duality in Multi-View Restricted Kernel Machines

2 code implementations • 26 May 2023 • Sonny Achten, Arun Pandey, Hannes De Meulemeester, Bart De Moor, Johan A. K. Suykens

We propose a unifying setting that combines existing restricted kernel machine methods into a single primal-dual multi-view framework for kernel principal component analysis in both supervised and unsupervised settings.

Time Series

Robust Classification of Graph-Based Data

no code implementations • 21 Dec 2016 • Carlos M. Alaíz, Michaël Fanuel, Johan A. K. Suykens

A graph-based classification method is proposed for semi-supervised learning in the case of Euclidean data and for classification in the case of graph data.

Classification · General Classification · +2

Modified Frank-Wolfe Algorithm for Enhanced Sparsity in Support Vector Machine Classifiers

no code implementations • 19 Jun 2017 • Carlos M. Alaíz, Johan A. K. Suykens

This work proposes a new algorithm for training a re-weighted L2 Support Vector Machine (SVM), inspired by the re-weighted Lasso algorithm of Candès et al. and by the equivalence between Lasso and SVM recently shown by Jaggi.

Solving $\ell^p$-norm regularization with tensor kernels

no code implementations • 18 Jul 2017 • Saverio Salzo, Johan A. K. Suykens, Lorenzo Rosasco

In this paper, we discuss how a suitable family of tensor kernels can be used to efficiently solve nonparametric extensions of $\ell^p$ regularized learning methods.

Indefinite Kernel Logistic Regression with Concave-inexact-convex Procedure

no code implementations • 6 Jul 2017 • Fanghui Liu, Xiaolin Huang, Chen Gong, Jie Yang, Johan A. K. Suykens

Since the concave-convex procedure has to solve a sub-problem in each iteration, we propose a concave-inexact-convex procedure (CCICP) algorithm with an inexact solving scheme to accelerate the overall optimization.

regression

Generalized support vector regression: duality and tensor-kernel representation

no code implementations • 18 Mar 2016 • Saverio Salzo, Johan A. K. Suykens

In this paper we study the variational problem associated to support vector regression in Banach function spaces.

regression

A Statistical Learning Approach to Modal Regression

no code implementations • 20 Feb 2017 • Yunlong Feng, Jun Fan, Johan A. K. Suykens

However, it outperforms these regression models in terms of robustness, as shown in our study from a re-descending M-estimation viewpoint.

regression

Kernel Density Estimation for Dynamical Systems

no code implementations • 13 Jul 2016 • Hanyuan Hang, Ingo Steinwart, Yunlong Feng, Johan A. K. Suykens

We study the density estimation problem with observations generated by certain dynamical systems that admit a unique underlying invariant Lebesgue density.

Density Estimation
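
For context, a minimal NumPy sketch of the textbook one-dimensional Gaussian kernel density estimator; the paper analyzes this type of estimator when the observations come from a dynamical system rather than an i.i.d. sample.

```python
import numpy as np

def gaussian_kde(x_eval, samples, bandwidth):
    # f_hat(x) = (1 / (n * h)) * sum_i phi((x - X_i) / h)
    u = (x_eval[:, None] - samples[None, :]) / bandwidth
    phi = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return phi.mean(axis=1) / bandwidth
```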

Learning theory estimates with observations from general stationary stochastic processes

no code implementations • 10 May 2016 • Hanyuan Hang, Yunlong Feng, Ingo Steinwart, Johan A. K. Suykens

We show that when the stochastic processes satisfy a generalized Bernstein-type inequality, a unified treatment on analyzing the learning schemes with various mixing processes can be conducted and a sharp oracle inequality for generic regularized empirical risk minimization schemes can be established.

Learning Theory

Kernel Spectral Clustering and applications

no code implementations • 3 May 2015 • Rocco Langone, Raghvendra Mall, Carlos Alzate, Johan A. K. Suykens

This is a major advantage compared to classical spectral clustering where the determination of the clustering parameters is unclear and relies on heuristics.

Clustering · Image Segmentation · +3

Higher order Matching Pursuit for Low Rank Tensor Learning

no code implementations • 7 Mar 2015 • Yuning Yang, Siamak Mehrkanoon, Johan A. K. Suykens

In this paper, we propose higher order matching pursuit for low rank tensor learning problems with a convex or a nonconvex cost function, which is a generalization of the matching pursuit type methods.

Learning Tensors in Reproducing Kernel Hilbert Spaces with Multilinear Spectral Penalties

no code implementations • 18 Oct 2013 • Marco Signoretto, Lieven De Lathauwer, Johan A. K. Suykens

We present a general framework to learn functions in tensor product reproducing kernel Hilbert spaces (TP-RKHSs).

Transfer Learning

Generalization Properties of hyper-RKHS and its Applications

no code implementations • 26 Sep 2018 • Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, Johan A. K. Suykens

This paper generalizes regularized regression problems in a hyper-reproducing kernel Hilbert space (hyper-RKHS), illustrates its utility for kernel learning and out-of-sample extensions, and proves asymptotic convergence results for the introduced regression models in an approximation theory view.

Learning Theory · regression

Spatio-temporal Stacked LSTM for Temperature Prediction in Weather Forecasting

no code implementations • 15 Nov 2018 • Zahra Karevan, Johan A. K. Suykens

Subsequently, the input of the second LSTM layer is formed by combining the hidden states of the first-layer LSTM models.

Time Series · Time Series Prediction · +1

Two-stage Best-scored Random Forest for Large-scale Regression

no code implementations • 9 May 2019 • Hanyuan Hang, Yingyi Chen, Johan A. K. Suykens

We propose a novel method designed for large-scale regression problems, namely the two-stage best-scored random forest (TBRF).

Computational Efficiency · Ensemble Learning · +2

Efficient hinging hyperplanes neural network and its application in nonlinear system identification

no code implementations • 15 May 2019 • Jun Xu, Qinghua Tao, Zhen Li, Xiangming Xi, Johan A. K. Suykens, Shuning Wang

It is proved that for every EHH neural network there is an equivalent adaptive hinging hyperplanes (AHH) tree, which was also proposed based on the HH model and has found good applications in system identification.

regression · Variable Selection

Nyström landmark sampling and regularized Christoffel functions

no code implementations • 29 May 2019 • Michaël Fanuel, Joachim Schreurs, Johan A. K. Suykens

In this context, we propose a deterministic and a randomized adaptive algorithm for selecting landmark points within a training data set.

Point Processes

Generative Restricted Kernel Machines: A Framework for Multi-view Generation and Disentangled Feature Learning

no code implementations • 19 Jun 2019 • Arun Pandey, Joachim Schreurs, Johan A. K. Suykens

This paper introduces a novel framework for generative models based on Restricted Kernel Machines (RKMs) with joint multi-view generation and uncorrelated feature learning, called Gen-RKM.

Random Fourier Features via Fast Surrogate Leverage Weighted Sampling

no code implementations • 20 Nov 2019 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Jie Yang, Johan A. K. Suykens

In this paper, we propose a fast surrogate leverage weighted sampling strategy to generate refined random Fourier features for kernel approximation.
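
For orientation, a minimal NumPy sketch of plain i.i.d. random Fourier features for the Gaussian kernel exp(-gamma * ||x - y||^2); the paper's contribution is to replace the i.i.d. draw of frequencies with a surrogate leverage-weighted sampling scheme.

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, seed=0):
    # z(x) = sqrt(2/D) * cos(W^T x + b), W ~ N(0, 2*gamma*I), b ~ U[0, 2*pi],
    # so that z(x) . z(y) approximates exp(-gamma * ||x - y||^2).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```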

Robust Generative Restricted Kernel Machines using Weighted Conjugate Feature Duality

no code implementations • 4 Feb 2020 • Arun Pandey, Joachim Schreurs, Johan A. K. Suykens

Experiments show that the weighted RKM is capable of generating clean images when contamination is present in the training data.

Diversity sampling is an implicit regularization for kernel methods

no code implementations • 20 Feb 2020 • Michaël Fanuel, Joachim Schreurs, Johan A. K. Suykens

The Nystr\"om approximation -- based on a subset of landmarks -- gives a low rank approximation of the kernel matrix, and is known to provide a form of implicit regularization.

Point Processes · regression
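
For reference, a minimal NumPy sketch of the Nyström approximation itself, given an index set of landmarks; how the landmarks are chosen (e.g. by diversity-promoting sampling) is the subject of the paper.

```python
import numpy as np

def nystrom_approximation(K, landmarks):
    # K ~= C @ pinv(W) @ C.T, with C = K[:, L] and W = K[L, L]
    # for a landmark index set L.
    C = K[:, landmarks]
    W = K[np.ix_(landmarks, landmarks)]
    return C @ np.linalg.pinv(W) @ C.T
```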

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond

no code implementations • 23 Apr 2020 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens

This survey may serve as a gentle introduction to this topic, and as a users' guide for practitioners interested in applying the representative algorithms and understanding theoretical results under various technical assumptions.

Fast Learning in Reproducing Kernel Krein Spaces via Signed Measures

no code implementations • 30 May 2020 • Fanghui Liu, Xiaolin Huang, Yingyi Chen, Johan A. K. Suykens

In this paper, we attempt to solve a long-standing open question for non-positive definite (non-PD) kernels in the machine learning community: can a given non-PD kernel be decomposed into the difference of two PD kernels (termed positive decomposition)?

Open-Ended Question Answering

Analysis of Regularized Least Squares in Reproducing Kernel Krein Spaces

no code implementations • 1 Jun 2020 • Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, Johan A. K. Suykens

In this paper, we study the asymptotic properties of regularized least squares with indefinite kernels in reproducing kernel Krein spaces (RKKS).

Disentangled Representation Learning and Generation with Manifold Optimization

no code implementations • 12 Jun 2020 • Arun Pandey, Michaël Fanuel, Joachim Schreurs, Johan A. K. Suykens

Our analysis shows that such a construction promotes disentanglement by matching the principal directions in the latent space with the directions of orthogonal variation in data space.

Disentanglement · Stochastic Optimization

The Bures Metric for Generative Adversarial Networks

no code implementations • 16 Jun 2020 • Hannes De Meulemeester, Joachim Schreurs, Michaël Fanuel, Bart De Moor, Johan A. K. Suykens

However, under certain circumstances, the training of GANs can lead to mode collapse or mode dropping, i.e., the generative model not being able to sample from the entire probability distribution.

Ensemble Kernel Methods, Implicit Regularization and Determinantal Point Processes

no code implementations • 24 Jun 2020 • Joachim Schreurs, Michaël Fanuel, Johan A. K. Suykens

By using the framework of Determinantal Point Processes (DPPs), some theoretical results concerning the interplay between diversity and regularization can be obtained.

Point Processes · regression

Kernel regression in high dimensions: Refined analysis beyond double descent

no code implementations • 6 Oct 2020 • Fanghui Liu, Zhenyu Liao, Johan A. K. Suykens

In this paper, we provide a precise characterization of the generalization properties of high-dimensional kernel ridge regression across the under- and over-parameterized regimes, depending on whether the number of training samples n exceeds the feature dimension d. By establishing a bias-variance decomposition of the expected excess risk, we show that, while the bias is (almost) independent of d and monotonically decreases with n, the variance depends on both n and d and can be unimodal or monotonically decreasing under different regularization schemes.

regression · Vocal Bursts Intensity Prediction
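
For reference, a minimal NumPy sketch of the kernel ridge regression estimator whose excess risk is decomposed above; the n * lam scaling of the ridge term is one common convention and is an assumption here.

```python
import numpy as np

def krr_fit_predict(K_train, y, K_test_train, lam):
    # f(x) = k(x, X) @ (K + n * lam * I)^{-1} y
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + n * lam * np.eye(n), y)
    return K_test_train @ alpha
```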

Towards a Unified Quadrature Framework for Large-Scale Kernel Machines

no code implementations • 3 Nov 2020 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens

In this paper, we develop a quadrature framework for large-scale kernel machines via a numerical integration representation.

Numerical Integration

Determinantal Point Processes Implicitly Regularize Semi-parametric Regression Problems

no code implementations • 13 Nov 2020 • Michaël Fanuel, Joachim Schreurs, Johan A. K. Suykens

Semi-parametric regression models are used in several applications which require comprehensibility without sacrificing accuracy.

Geophysics · Point Processes · +3

Leverage Score Sampling for Complete Mode Coverage in Generative Adversarial Networks

no code implementations • 6 Apr 2021 • Joachim Schreurs, Hannes De Meulemeester, Michaël Fanuel, Bart De Moor, Johan A. K. Suykens

A generative model may overlook underrepresented modes that are less frequent in the empirical data distribution.

Latent Space Exploration Using Generative Kernel PCA

no code implementations • 28 May 2021 • David Winant, Joachim Schreurs, Johan A. K. Suykens

This connection has led to insights on how to use kernel PCA in a generative procedure, called generative kernel PCA.

Novelty Detection

Towards Deterministic Diverse Subset Sampling

no code implementations • 28 May 2021 • Joachim Schreurs, Michaël Fanuel, Johan A. K. Suykens

Determinantal point processes (DPPs) are well known models for diverse subset selection problems, including recommendation tasks, document summarization and image search.

Document Summarization · Image Retrieval · +1

On the Double Descent of Random Features Models Trained with SGD

no code implementations • 13 Oct 2021 • Fanghui Liu, Johan A. K. Suykens, Volkan Cevher

We study the generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD) in the under- and over-parameterized regimes.

regression

Tensor Network Kalman Filtering for Large-Scale LS-SVMs

no code implementations • 26 Oct 2021 • Maximilian Lucassen, Johan A. K. Suykens, Kim Batselier

Least squares support vector machines are a commonly used supervised learning method for nonlinear regression and classification.

regression · Tensor Networks
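
For context, a minimal NumPy sketch of the standard LS-SVM dual linear system (in the classical Suykens and Vandewalle formulation); solving this system at large scale is what the paper's tensor network Kalman filter targets.

```python
import numpy as np

def lssvm_train(K, y, gamma):
    # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    # for the bias b and the dual coefficients alpha.
    n = K.shape[0]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]
```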

Learning with Asymmetric Kernels: Least Squares and Feature Interpretation

no code implementations • 3 Feb 2022 • Mingzhen He, Fan He, Lei Shi, Xiaolin Huang, Johan A. K. Suykens

Asymmetric kernels naturally exist in real life, e.g., for conditional probability and directed graphs.

Piecewise Linear Neural Networks and Deep Learning

no code implementations • 18 Jun 2022 • Qinghua Tao, Li Li, Xiaolin Huang, Xiangming Xi, Shuning Wang, Johan A. K. Suykens

For PWLNN methods, both the network representation and the learning algorithms have long been studied.

Multi-view Kernel PCA for Time series Forecasting

no code implementations • 24 Jan 2023 • Arun Pandey, Hannes De Meulemeester, Bart De Moor, Johan A. K. Suykens

In this paper, we propose a kernel principal component analysis model for multi-variate time series forecasting, where the training and prediction schemes are derived from the multi-view formulation of Restricted Kernel Machines.

Time Series · Time Series Forecasting

Tensorized LSSVMs for Multitask Regression

no code implementations • 4 Mar 2023 • Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Johan A. K. Suykens

Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement.

regression

Nonlinear SVD with Asymmetric Kernels: feature learning and asymmetric Nyström method

no code implementations • 12 Jun 2023 • Qinghua Tao, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

We describe a nonlinear extension of the matrix Singular Value Decomposition through asymmetric kernels, namely KSVD.

Combining Primal and Dual Representations in Deep Restricted Kernel Machines Classifiers

no code implementations • 12 Jun 2023 • Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

In the context of deep learning with kernel machines, the deep Restricted Kernel Machine (DRKM) framework allows multiple levels of kernel PCA (KPCA) and Least-Squares Support Vector Machines (LSSVM) to be combined into a deep architecture using visible and hidden units.

Classification

A Dual Formulation for Probabilistic Principal Component Analysis

no code implementations • 19 Jul 2023 • Henri De Plaen, Johan A. K. Suykens

In this paper, we characterize Probabilistic Principal Component Analysis in Hilbert spaces and demonstrate how the optimal solution admits a representation in dual space.

Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs

1 code implementation • 30 Aug 2023 • Jiani Liu, Qinghua Tao, Ce Zhu, Yipeng Liu, Xiaolin Huang, Johan A. K. Suykens

In contrast to previous MTL frameworks, our decision function in the dual induces a weighted kernel function with a task-coupling term characterized by the similarities of the task-specific factors, better revealing the explicit relations across tasks in MTL.

Enhancing Kernel Flexibility via Learning Asymmetric Locally-Adaptive Kernels

1 code implementation • 8 Oct 2023 • Fan He, Mingzhen He, Lei Shi, Xiaolin Huang, Johan A. K. Suykens

To enhance kernel flexibility, this paper introduces the concept of Locally-Adaptive-Bandwidths (LAB) as trainable parameters that augment the Radial Basis Function (RBF) kernel, giving rise to the LAB RBF kernel.

regression
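
As a rough illustration (the exact parametrization in the paper may differ; this one is an assumption), a kernel with a trainable per-dimension bandwidth vector attached to one argument; tying the bandwidths to only one side is what makes such a kernel asymmetric.

```python
import numpy as np

def lab_rbf_kernel(x, y, theta_x):
    # k(x, y) = exp(-sum_d (theta_x[d] * (x[d] - y[d]))**2), with theta_x a
    # trainable, locally adaptive bandwidth vector (illustrative assumption).
    return np.exp(-np.sum((theta_x * (x - y)) ** 2))
```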

Nonlinear functional regression by functional deep neural network with kernel embedding

no code implementations • 5 Jan 2024 • Zhongjie Shi, Jun Fan, Linhao Song, Ding-Xuan Zhou, Johan A. K. Suykens

With the rapid development of deep learning in various fields of science and technology, such as speech recognition, image classification, and natural language processing, it has recently also been widely applied to functional data analysis (FDA) with some empirical success.

Dimensionality Reduction · Image Classification · +3

Can overfitted deep neural networks in adversarial training generalize? -- An approximation viewpoint

no code implementations • 24 Jan 2024 • Zhongjie Shi, Fanghui Liu, Yuan Cao, Johan A. K. Suykens

Adversarial training is a widely used method to improve the robustness of deep neural networks (DNNs) over adversarial perturbations.

Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes

no code implementations • 2 Feb 2024 • Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A. K. Suykens

In this work, we propose Kernel-Eigen Pair Sparse Variational Gaussian Processes (KEP-SVGP) for building uncertainty-aware self-attention where the asymmetry of attention kernels is tackled by Kernel SVD (KSVD) and a reduced complexity is acquired.

Gaussian Processes · Variational Inference

Sparsity via Sparse Group $k$-max Regularization

no code implementations • 13 Feb 2024 • Qinghua Tao, Xiangming Xi, Jun Xu, Johan A. K. Suykens

For the linear inverse problem with sparsity constraints, the $l_0$ regularized problem is NP-hard, and existing approaches either utilize greedy algorithms to find almost-optimal solutions or approximate the $l_0$ regularization with its convex counterparts.
