
no code implementations • 4 Feb 2024 • Yuka Hashimoto, Masahiro Ikeda, Hachem Kadri

Machine learning has a long collaborative tradition with several fields of mathematics, such as statistics, probability and linear algebra.

no code implementations • 11 Oct 2023 • Nizar Demni, Hachem Kadri

Random features have been introduced to scale up kernel methods via randomization techniques.
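Not the construction studied in this paper, but for context, the classic random Fourier features of Rahimi and Recht illustrate the idea: a shift-invariant kernel is replaced by an explicit randomized feature map whose inner products approximate the kernel. A minimal NumPy sketch:

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, rng=None):
    """Approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    with an explicit map z(x), so that z(x) @ z(y) ~ k(x, y)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density (Gaussian here).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the randomized approximation to the exact kernel matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = random_fourier_features(X, n_features=5000, gamma=0.5, rng=1)
K_approx = Z @ Z.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)
print(np.abs(K_approx - K_exact).max())  # small, and shrinks as n_features grows
```

The point of the randomization is that learning can then run on the explicit features in O(n) rather than on an n-by-n kernel matrix.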

no code implementations • 15 Jun 2023 • Balthazar Casalé, Giuseppe Di Molfetta, Sandrine Anthoine, Hachem Kadri

The quantum separability problem consists in deciding whether a bipartite density matrix is entangled or separable.
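The paper's algorithm is not reproduced here, but a standard illustration of the decision problem is the Peres-Horodecki (PPT) criterion: positivity under partial transpose is necessary for separability, and also sufficient for 2x2 and 2x3 systems. A sketch:

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Transpose subsystem B of a density matrix on C^dA (x) C^dB."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_ppt(rho, dA, dB, tol=1e-10):
    """Peres-Horodecki test: separable states always pass; for 2x2 and
    2x3 systems, passing is also sufficient for separability."""
    eig = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    return bool(eig.min() >= -tol)

# A maximally entangled Bell state fails the test ...
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(psi, psi.conj())
print(is_ppt(bell, 2, 2))           # False -> entangled
# ... while the maximally mixed (separable) state passes.
print(is_ppt(np.eye(4) / 4, 2, 2))  # True
```

In higher dimensions PPT is only necessary, which is one reason the general separability problem is hard (it is NP-hard in the dimension).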

no code implementations • 21 Oct 2022 • Yuka Hashimoto, Masahiro Ikeda, Hachem Kadri

Supervised learning in reproducing kernel Hilbert space (RKHS) and vector-valued RKHS (vvRKHS) has been investigated for more than 30 years.
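As a minimal illustration of the classical RKHS setting the abstract refers to (not this paper's contribution), kernel ridge regression fits a scalar-valued function in the RKHS of an RBF kernel; by the representer theorem the solution is a kernel expansion over the training points:

```python
import numpy as np

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-2):
    """Solve (K + lam * n * I) alpha = y; the predictor
    f(x) = sum_i alpha_i k(x, x_i) lives in the kernel's RKHS."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(y)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    sq = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq) @ alpha

# Fit a noisy sine and predict at the origin.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
alpha = kernel_ridge_fit(X, y)
print(kernel_ridge_predict(X, alpha, np.array([[0.0]])))  # close to sin(0) = 0
```

The vector-valued (vvRKHS) generalization replaces the scalar kernel with an operator-valued one, so each alpha_i becomes a vector in the output space.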

no code implementations • 18 Jul 2022 • Kais Hariz, Hachem Kadri, Stéphane Ayache, Maher Moakher, Thierry Artières

We study the implicit regularization effects of deep learning in tensor factorization.

1 code implementation • 27 Aug 2021 • Riikka Huusari, Sahely Bhadra, Cécile Capponi, Hachem Kadri, Juho Rousu

In this paper, instead of using the traditional representer theorem, we propose to search for a solution in an RKHS that admits a pre-image decomposition in the original data space, whose elements need not correspond to elements of the training set.

1 code implementation • 4 Jun 2021 • Mathieu Roget, Giuseppe Di Molfetta, Hachem Kadri

Quantum machine learning algorithms could provide significant speed-ups over their classical counterparts; however, whether they could also achieve good generalization remains unclear.

no code implementations • 4 May 2021 • Paolo Milanesi, Hachem Kadri, Stéphane Ayache, Thierry Artières

Attempts to study the implicit regularization associated with gradient descent (GD) have identified matrix completion as a suitable test-bed.
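The test-bed can be reproduced in a few lines: run plain gradient descent on a factorized model using only the observed entries of a low-rank matrix, then check how well the unobserved entries are recovered. A sketch (not the paper's experimental setup; the small initialization is the ingredient usually credited with biasing GD toward low-rank solutions):

```python
import numpy as np

# Matrix completion with an over-parameterized factorization U @ V:
# fit only the observed entries with plain GD and inspect the rest.
rng = np.random.default_rng(0)
n, rank = 20, 2
M = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, n)) / np.sqrt(n)
mask = rng.random((n, n)) < 0.5       # which entries are observed

U = 0.01 * rng.normal(size=(n, n))    # small init biases GD toward
V = 0.01 * rng.normal(size=(n, n))    # low-rank solutions
lr = 0.02
for _ in range(10000):
    R = mask * (U @ V - M)            # residual on observed entries only
    U, V = U - lr * (R @ V.T), V - lr * (U.T @ R)

err = np.abs((1 - mask) * (U @ V - M)).mean()
print(err)  # typically much smaller than predicting zero everywhere
```

Nothing in the loss constrains the unobserved entries; any recovery there is attributable to the implicit bias of the optimizer.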

no code implementations • 14 Jan 2021 • Riikka Huusari, Hachem Kadri

We consider the problem of operator-valued kernel learning and investigate the possibility of going beyond the well-known separable kernels.

no code implementations • 1 Jan 2021 • Luc Giffon, Hachem Kadri, Stephane Ayache, Ronan Sicre, Thierry Artieres

Over-parameterization of neural networks is a well-known issue that comes along with their great performance.

1 code implementation • ICML 2020 • Hachem Kadri, Stéphane Ayache, Riikka Huusari, Alain Rakotomamonjy, Liva Ralaivola

The trace regression model, a direct extension of the well-studied linear regression model, allows one to map matrices to real-valued outputs.
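In the trace regression model, y_i = trace(A^T X_i) + noise for an unknown coefficient matrix A. Since trace(A^T X) = vec(A) . vec(X), a plain least-squares baseline (ignoring the low-rank structure the literature typically exploits, and not the partial-trace estimator this paper develops) reduces to ordinary linear regression:

```python
import numpy as np

# Trace regression: scalar outputs y_i = trace(A.T @ X_i) + noise,
# a direct matrix analogue of linear regression.
rng = np.random.default_rng(0)
p, q, n = 5, 4, 300
A_true = rng.normal(size=(p, q))
Xs = rng.normal(size=(n, p, q))
y = np.einsum('ij,nij->n', A_true, Xs) + 0.01 * rng.normal(size=n)

# Vectorize: trace(A.T @ X) = vec(A) . vec(X), so ordinary least squares applies.
X_flat = Xs.reshape(n, p * q)
a_hat, *_ = np.linalg.lstsq(X_flat, y, rcond=None)
A_hat = a_hat.reshape(p, q)
print(np.abs(A_hat - A_true).max())  # near zero with enough samples
```

When A is low rank, nuclear-norm penalization or factored estimators improve on this baseline in the high-dimensional regime.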

no code implementations • 1 Apr 2020 • Akrem Sellami, François-Xavier Dupé, Bastien Cagna, Hachem Kadri, Stéphane Ayache, Thierry Artières, Sylvain Takerkart

In neuroscience, understanding inter-individual differences has recently emerged as a major challenge, for which functional magnetic resonance imaging (fMRI) has proven invaluable.

no code implementations • 15 Feb 2020 • Balthazar Casalé, Giuseppe Di Molfetta, Hachem Kadri, Liva Ralaivola

We consider the quantum version of the bandit problem known as best arm identification (BAI).
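For context, a classical BAI baseline (not the paper's quantum algorithm) is successive elimination: repeatedly sample every surviving arm and drop any arm whose empirical mean falls below the leader's by more than a confidence margin. A sketch with Bernoulli arms (the confidence radius used here is a common textbook choice, not tuned):

```python
import numpy as np

def successive_elimination(means, samples_per_round=100, delta=0.05, rng=None):
    """Identify the arm with the highest mean by sampling all surviving
    arms uniformly and eliminating clearly suboptimal ones."""
    rng = np.random.default_rng(rng)
    k = len(means)
    alive = list(range(k))
    counts = np.zeros(k)
    sums = np.zeros(k)
    while len(alive) > 1:
        for a in alive:
            sums[a] += rng.binomial(samples_per_round, means[a])
            counts[a] += samples_per_round
        mu = sums[np.array(alive)] / counts[np.array(alive)]
        t = counts[alive[0]]
        radius = np.sqrt(np.log(2 * k * t / delta) / (2 * t))
        best = mu.max()
        alive = [a for a, m in zip(alive, mu) if m >= best - 2 * radius]
    return alive[0]

print(successive_elimination([0.2, 0.5, 0.45, 0.8], rng=0))  # index of the best arm
```

The quantum setting asks whether amplitude-amplification-style access to the arms can reduce the sample complexity of this identification task.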

no code implementations • 29 Nov 2019 • Luc Giffon, Stéphane Ayache, Thierry Artières, Hachem Kadri

Recent work has focused on combining kernel methods and deep learning to exploit the best of the two approaches.

no code implementations • 14 Oct 2019 • Riikka Huusari, Cécile Capponi, Paul Villoutreix, Hachem Kadri

We consider the kernel completion problem with the presence of multiple views in the data.

no code implementations • 23 Aug 2019 • Luc Giffon, Valentin Emiya, Liva Ralaivola, Hachem Kadri

K-means -- and the celebrated Lloyd algorithm -- is more than the clustering method it was originally designed to be.
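For reference, the Lloyd iterations themselves are short: alternate nearest-centroid assignment with centroid recomputation. A minimal sketch (plain random initialization; practical uses would prefer k-means++):

```python
import numpy as np

def lloyd(X, k, n_iter=50, rng=None):
    """Plain Lloyd iterations for k-means: assign each point to its
    nearest centroid, then move each centroid to its cluster mean."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():      # leave empty clusters in place
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated blobs are recovered.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centers, labels = lloyd(X, k=2, rng=1)
print(np.sort(centers[:, 0]).round(1))  # approximately [0., 5.]
```

Viewing the centroid set as a compressed representation of the data (rather than a clustering per se) is the reading the abstract alludes to.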

no code implementations • 21 Mar 2018 • Riikka Huusari, Hachem Kadri, Cécile Capponi

We consider the problem of metric learning for multi-view data and present a novel method for learning within-view as well as between-view metrics in vector-valued kernel spaces, as a way to capture multi-modal structure of the data.

no code implementations • NeurIPS 2016 • Guillaume Rabusseau, Hachem Kadri

This paper proposes an efficient algorithm (HOLRR) to handle regression tasks where the outputs have a tensor structure.

no code implementations • 22 Feb 2016 • Guillaume Rabusseau, Hachem Kadri

This paper proposes an efficient algorithm (HOLRR) to handle regression tasks where the outputs have a tensor structure.

no code implementations • 28 Oct 2015 • Hachem Kadri, Emmanuel Duflos, Philippe Preux, Stéphane Canu, Alain Rakotomamonjy, Julien Audiffren

In this paper we consider the problems of supervised classification and regression in the case where attributes and labels are functions: a data is represented by a set of functions, and the label is also a function.

no code implementations • 10 Jun 2014 • Julien Audiffren, Hachem Kadri

The purpose of this paper is to introduce a concept of equivalence between machine learning algorithms.

no code implementations • 1 Nov 2013 • Julien Audiffren, Hachem Kadri

We consider the problem of learning a vector-valued function f in an online learning setting.
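A linear, finite-dimensional sketch of the setting (the paper itself works in a vector-valued RKHS): at each round the learner observes x_t, predicts y_hat = W @ x_t, suffers a squared loss against the revealed y_t, and takes a gradient step:

```python
import numpy as np

# Online learning of a vector-valued function with a linear model.
rng = np.random.default_rng(0)
d_in, d_out, T = 6, 3, 2000
W_true = rng.normal(size=(d_out, d_in))

W = np.zeros((d_out, d_in))
lr = 0.05
for t in range(T):
    x = rng.normal(size=d_in)
    y = W_true @ x + 0.01 * rng.normal(size=d_out)
    y_hat = W @ x                        # predict, then the loss is revealed
    W -= lr * np.outer(y_hat - y, x)     # gradient of 0.5 * ||y_hat - y||^2

print(np.abs(W - W_true).max())  # small: online GD tracks W_true
```

The kernelized version replaces W with a growing kernel expansion, which is why online vvRKHS algorithms must also control the expansion's size.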

no code implementations • 9 Oct 2013 • Julien Audiffren, Hachem Kadri

Regularization is used to find a solution that both fits the data and is sufficiently smooth, which makes it very effective for designing and refining learning algorithms.

no code implementations • 17 Jun 2013 • Julien Audiffren, Hachem Kadri

We show that multi-task kernel regression algorithms are uniformly stable in the general case of infinite-dimensional output spaces.

no code implementations • NeurIPS 2012 • Hachem Kadri, Alain Rakotomamonjy, Philippe Preux, Francis R. Bach

We study this problem in the case of kernel ridge regression for functional responses with an lr-norm constraint on the combination coefficients.

no code implementations • 10 May 2012 • Hachem Kadri, Mohammad Ghavamzadeh, Philippe Preux

Finally, we evaluate the performance of our KDE approach using both covariance and conditional covariance kernels on two structured output problems, and compare it to the state-of-the-art kernel-based structured output regression methods.
