
no code implementations • 26 Oct 2021 • Maximilian Lucassen, Johan A. K. Suykens, Kim Batselier

Least squares support vector machines are a commonly used supervised learning method for nonlinear regression and classification.
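As background for this entry, the standard LS-SVM dual reduces training to one linear system. A minimal sketch (the generic formulation only, not this paper's tensor-network contribution; the RBF kernel, `sigma`, and `gamma` values are illustrative choices):

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # squared exponential kernel between row sets X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(K, y, gamma=100.0):
    """Solve the LS-SVM regression dual:
    [[0, 1^T], [1, K + I/gamma]] @ [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

# toy 1-D regression on a sine curve
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0])
K = rbf_kernel(X, X)
b, alpha = lssvm_fit(K, y)
y_hat = K @ alpha + b               # fitted values on the training set
```

Unlike a standard SVM, every training point gets a nonzero dual coefficient, which is exactly why solving the dense linear system becomes the bottleneck at scale.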

no code implementations • 13 Oct 2021 • Fanghui Liu, Johan A. K. Suykens, Volkan Cevher

We study generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
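The setting studied here, random features regression optimized by SGD, can be sketched generically (plain uniform feature sampling and a fixed step size as illustrative assumptions; this is not the paper's analysis, only the model it studies):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: noisy sine, fit with random Fourier features + SGD
n, d, D = 200, 1, 100               # samples, input dim, number of random features
X = rng.uniform(-3, 3, (n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# random Fourier features approximating a Gaussian kernel (sigma = 1)
W = rng.standard_normal((d, D))
b = rng.uniform(0, 2 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# plain constant-step SGD on the squared loss over the feature weights
theta = np.zeros(D)
lr = 0.1
for epoch in range(50):
    for i in rng.permutation(n):
        grad = (Z[i] @ theta - y[i]) * Z[i]
        theta -= lr * grad

mse = np.mean((Z @ theta - y) ** 2)  # training error approaches the noise floor
```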

no code implementations • 28 May 2021 • Joachim Schreurs, Michaël Fanuel, Johan A. K. Suykens

Determinantal point processes (DPPs) are well known models for diverse subset selection problems, including recommendation tasks, document summarization and image search.
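To make the diverse-subset idea concrete, here is a generic greedy MAP heuristic for an L-ensemble DPP (a common baseline, not the method of this paper; the similarity kernel and data are illustrative):

```python
import numpy as np

def greedy_dpp(L, k):
    """Greedy MAP inference for an L-ensemble DPP: at each step add the
    item that maximizes det(L[S, S]), which rewards diverse selections."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            dval = np.linalg.det(L[np.ix_(idx, idx)])
            if dval > best_det:
                best, best_det = j, dval
        selected.append(best)
    return selected

# similarity kernel over points on a line: three clusters at 0, 5 and 10
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 10.0])
L = np.exp(-(x[:, None] - x[None, :]) ** 2)
picks = greedy_dpp(L, 3)            # one point per cluster gets selected
```

Because the determinant shrinks when similar items co-occur, the greedy picks spread across the three clusters instead of piling into one.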

no code implementations • 28 May 2021 • David Winant, Joachim Schreurs, Johan A. K. Suykens

This connection has led to insights on how to use kernel PCA in a generative procedure, called generative kernel PCA.

no code implementations • 28 Apr 2021 • Yingyi Chen, Xi Shen, Shell Xu Hu, Johan A. K. Suykens

On Clothing1M, our approach obtains 74.9% accuracy, which is slightly better than that of DivideMix.

Ranked #2 on Learning with noisy labels on ANIMAL

no code implementations • 6 Apr 2021 • Joachim Schreurs, Hannes De Meulemeester, Michaël Fanuel, Bart De Moor, Johan A. K. Suykens

A generative model may overlook underrepresented modes that are less frequent in the empirical data distribution.

1 code implementation • 16 Feb 2021 • Francesco Tonin, Arun Pandey, Panagiotis Patrinos, Johan A. K. Suykens

Detecting out-of-distribution (OOD) samples is an essential requirement for the deployment of machine learning systems in the real world.

no code implementations • 25 Nov 2020 • Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens

We introduce Constr-DRKM, a deep kernel method for the unsupervised learning of disentangled data representations.

no code implementations • 13 Nov 2020 • Michaël Fanuel, Joachim Schreurs, Johan A. K. Suykens

Semi-parametric regression models are used in several applications which require comprehensibility without sacrificing accuracy.

no code implementations • 3 Nov 2020 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens

In this paper, we develop a quadrature framework for large-scale kernel machines via a numerical integration representation.

no code implementations • 6 Oct 2020 • Fanghui Liu, Zhenyu Liao, Johan A. K. Suykens

In this paper, we provide a precise characterization of generalization properties of high dimensional kernel ridge regression across the under- and over-parameterized regimes, depending on whether the number of training samples n exceeds the feature dimension d. By establishing a bias-variance decomposition of the expected excess risk, we show that, while the bias is (almost) independent of d and monotonically decreases with n, the variance depends on n, d and can be unimodal or monotonically decreasing under different regularization schemes.

1 code implementation • 5 Aug 2020 • Joachim Schreurs, Iwein Vranckx, Mia Hubert, Johan A. K. Suykens, Peter J. Rousseeuw

The minimum regularized covariance determinant method (MRCD) is a robust estimator for multivariate location and scatter, which detects outliers by fitting a robust covariance matrix to the data.

2 code implementations • NeurIPS 2020 • Alexander Meulemans, Francesco S. Carzaniga, Johan A. K. Suykens, João Sacramento, Benjamin F. Grewe

Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization.

no code implementations • 24 Jun 2020 • Joachim Schreurs, Michaël Fanuel, Johan A. K. Suykens

By using the framework of Determinantal Point Processes (DPPs), some theoretical results concerning the interplay between diversity and regularization can be obtained.

no code implementations • 16 Jun 2020 • Hannes De Meulemeester, Joachim Schreurs, Michaël Fanuel, Bart De Moor, Johan A. K. Suykens

However, under certain circumstances, the training of GANs can lead to mode collapse or mode dropping, i.e., the generative model being unable to sample from the entire probability distribution.

no code implementations • 12 Jun 2020 • Arun Pandey, Michael Fanuel, Joachim Schreurs, Johan A. K. Suykens

Our analysis shows that such a construction promotes disentanglement by matching the principal directions in latent space with the directions of orthogonal variation in data space.

no code implementations • 1 Jun 2020 • Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, Johan A. K. Suykens

In this paper, we study the asymptotic properties of regularized least squares with indefinite kernels in reproducing kernel Krein spaces (RKKS).

no code implementations • 30 May 2020 • Fanghui Liu, Xiaolin Huang, Yingyi Chen, Johan A. K. Suykens

In this paper, we attempt to solve a long-standing open question for non-positive definite (non-PD) kernels in the machine learning community: can a given non-PD kernel be decomposed into the difference of two PD kernels (termed positive decomposition)?

no code implementations • 23 Apr 2020 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens

This survey may serve as a gentle introduction to this topic, and as a users' guide for practitioners interested in applying the representative algorithms and understanding theoretical results under various technical assumptions.

no code implementations • 20 Feb 2020 • Michaël Fanuel, Joachim Schreurs, Johan A. K. Suykens

The Nyström approximation, based on a subset of landmarks, gives a low-rank approximation of the kernel matrix and is known to provide a form of implicit regularization.

2 code implementations • 5 Feb 2020 • Henri De Plaen, Michaël Fanuel, Johan A. K. Suykens

In the context of kernel methods, the similarity between data points is encoded by the kernel function, which is often defined in terms of the Euclidean distance, a common example being the squared exponential kernel.

no code implementations • 4 Feb 2020 • Arun Pandey, Joachim Schreurs, Johan A. K. Suykens

Experiments show that the weighted RKM is capable of generating clean images when contamination is present in the training data.

no code implementations • 20 Nov 2019 • Fanghui Liu, Xiaolin Huang, Yudong Chen, Jie Yang, Johan A. K. Suykens

In this paper, we propose a fast surrogate leverage weighted sampling strategy to generate refined random Fourier features for kernel approximation.
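The baseline this entry refines, plain (uniformly sampled) random Fourier features, can be sketched as follows; the approximation error shrinks as the number of features D grows. This is the generic construction, not the paper's leverage-weighted sampling scheme, and the kernel width is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 4))
sigma = 1.5

# exact Gaussian kernel matrix as the reference
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * sigma ** 2))

def rff(X, D, sigma, rng):
    """Uniform random Fourier features for the Gaussian kernel:
    z(x) = sqrt(2/D) cos(W^T x + b), with W ~ N(0, 1/sigma^2)."""
    W = rng.standard_normal((X.shape[1], D)) / sigma
    b = rng.uniform(0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# relative Frobenius error of Z Z^T against K, for growing D
errs = []
for D in (50, 500, 5000):
    Z = rff(X, D, sigma, rng)
    errs.append(np.linalg.norm(Z @ Z.T - K) / np.linalg.norm(K))
```

Non-uniform (e.g. leverage-weighted) sampling of the frequencies W is precisely what refinements like this paper's target, to reach a given error with fewer features.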

no code implementations • 19 Jun 2019 • Arun Pandey, Joachim Schreurs, Johan A. K. Suykens

This paper introduces a novel framework for generative models based on Restricted Kernel Machines (RKMs) with joint multi-view generation and uncorrelated feature learning, called Gen-RKM.

no code implementations • 29 May 2019 • Michaël Fanuel, Joachim Schreurs, Johan A. K. Suykens

Selecting diverse and important items, called landmarks, from a large set is a problem of interest in machine learning.

no code implementations • 15 May 2019 • Jun Xu, Qinghua Tao, Zhen Li, Xiangming Xi, Johan A. K. Suykens, Shuning Wang

It is proved that for every EHH neural network there is an equivalent adaptive hinging hyperplanes (AHH) tree, which was also proposed based on the HH model and has found good applications in system identification.

no code implementations • 9 May 2019 • Hanyuan Hang, Yingyi Chen, Johan A. K. Suykens

We propose a novel method designed for large-scale regression problems, namely the two-stage best-scored random forest (TBRF).

no code implementations • 15 Nov 2018 • Zahra Karevan, Johan A. K. Suykens

Subsequently, the input of the second LSTM layer is formed based on the combination of the hidden states of the first layer LSTM models.

no code implementations • 26 Sep 2018 • Fanghui Liu, Lei Shi, Xiaolin Huang, Jie Yang, Johan A. K. Suykens

This paper generalizes regularized regression problems in a hyper-reproducing kernel Hilbert space (hyper-RKHS), illustrates its utility for kernel learning and out-of-sample extensions, and proves asymptotic convergence results for the introduced regression models in an approximation theory view.

no code implementations • 20 Nov 2017 • Michaël Fanuel, Antoine Aspeel, Jean-Charles Delvenne, Johan A. K. Suykens

In machine learning or statistics, it is often desirable to reduce the dimensionality of a sample of data points in a high dimensional space $\mathbb{R}^d$.

no code implementations • 18 Jul 2017 • Saverio Salzo, Johan A. K. Suykens, Lorenzo Rosasco

In this paper, we discuss how a suitable family of tensor kernels can be used to efficiently solve nonparametric extensions of $\ell^p$ regularized learning methods.

no code implementations • 6 Jul 2017 • Fanghui Liu, Xiaolin Huang, Chen Gong, Jie Yang, Johan A. K. Suykens

Since the concave-convex procedure has to solve a sub-problem in each iteration, we propose a concave-inexact-convex procedure (CCICP) algorithm with an inexact solving scheme to accelerate the solving process.

no code implementations • 19 Jun 2017 • Carlos M. Alaíz, Johan A. K. Suykens

This work proposes a new algorithm for training a re-weighted L2 Support Vector Machine (SVM), inspired by the re-weighted Lasso algorithm of Candès et al. and by the equivalence between Lasso and SVM recently shown by Jaggi.

no code implementations • 20 Feb 2017 • Yunlong Feng, Jun Fan, Johan A. K. Suykens

However, it outperforms these regression models in terms of robustness as shown in our study from a re-descending M-estimation view.

no code implementations • 21 Dec 2016 • Carlos M. Alaíz, Michaël Fanuel, Johan A. K. Suykens

A graph-based classification method is proposed for semi-supervised learning in the case of Euclidean data and for classification in the case of graph data.

1 code implementation • 20 Dec 2016 • Zhongming Chen, Kim Batselier, Johan A. K. Suykens, Ngai Wong

In pattern classification, polynomial classifiers are well-studied methods as they are capable of generating complex decision surfaces.

no code implementations • 21 Oct 2016 • Carlos M. Alaíz, Michaël Fanuel, Johan A. K. Suykens

In this paper, Kernel PCA is reinterpreted as the solution to a convex optimization problem.
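For reference, the classical (eigendecomposition) form of kernel PCA that this paper reinterprets can be sketched in a few lines; the Gaussian kernel and component count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))

# Gaussian kernel matrix
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)

# double-center the kernel matrix (centering in feature space)
n = len(X)
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

# principal components come from the leading eigenvectors of Kc
vals, vecs = np.linalg.eigh(Kc)              # eigh returns ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
k = 2
scores = vecs[:, :k] * np.sqrt(np.abs(vals[:k]))  # projections onto first k components
```

Casting this eigenproblem as a convex optimization problem, as the entry above describes, opens the door to adding constraints and robust losses that the plain eigendecomposition cannot accommodate.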

no code implementations • 13 Jul 2016 • Hanyuan Hang, Ingo Steinwart, Yunlong Feng, Johan A. K. Suykens

We study the density estimation problem with observations generated by certain dynamical systems that admit a unique underlying invariant Lebesgue density.

no code implementations • 10 May 2016 • Hanyuan Hang, Yunlong Feng, Ingo Steinwart, Johan A. K. Suykens

We show that when the stochastic processes satisfy a generalized Bernstein-type inequality, a unified treatment on analyzing the learning schemes with various mixing processes can be conducted and a sharp oracle inequality for generic regularized empirical risk minimization schemes can be established.

no code implementations • 18 Mar 2016 • Saverio Salzo, Johan A. K. Suykens

In this paper we study the variational problem associated to support vector regression in Banach function spaces.

1 code implementation • 24 Oct 2015 • Emanuele Frandi, Ricardo Nanculef, Stefano Lodi, Claudio Sartori, Johan A. K. Suykens

Frank-Wolfe (FW) algorithms have often been proposed over the last few years as efficient solvers for a variety of optimization problems arising in the field of Machine Learning.

no code implementations • 14 May 2015 • Xiaolin Huang, Lei Shi, Ming Yan, Johan A. K. Suykens

The one-sided $\ell_1$ loss and the linear loss are two popular loss functions for 1bit-CS.

no code implementations • 3 May 2015 • Rocco Langone, Raghvendra Mall, Carlos Alzate, Johan A. K. Suykens

This is a major advantage compared to classical spectral clustering where the determination of the clustering parameters is unclear and relies on heuristics.

no code implementations • 7 Mar 2015 • Yuning Yang, Siamak Mehrkanoon, Johan A. K. Suykens

In this paper, we propose higher order matching pursuit for low rank tensor learning problems with a convex or a nonconvex cost function, which is a generalization of the matching pursuit type methods.

no code implementations • 5 Feb 2015 • Emanuele Frandi, Ricardo Nanculef, Johan A. K. Suykens

Frank-Wolfe algorithms have recently regained the attention of the Machine Learning community.

1 code implementation • 4 Mar 2014 • Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor

We present an approximation scheme for support vector machine models that use an RBF kernel.

1 code implementation • 13 Feb 2014 • Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor

The included benchmark comprises three settings with increasing label noise: (i) fully supervised, (ii) PU learning and (iii) PU learning with false positives.

no code implementations • 18 Oct 2013 • Marco Signoretto, Lieven De Lathauwer, Johan A. K. Suykens

We present a general framework to learn functions in tensor product reproducing kernel Hilbert spaces (TP-RKHSs).

Papers With Code is a free resource with all data licensed under CC-BY-SA.