no code implementations • 10 May 2024 • Florent Bouchard, Ammar Mian, Malik Tiomoko, Guillaume Ginolhac, Frédéric Pascal
In this study, we consider the realm of covariance matrices in machine learning, particularly focusing on computing Fréchet means on the manifold of symmetric positive definite matrices, commonly referred to as Karcher or geometric means.
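For context on the object studied here, below is a minimal sketch of the standard fixed-point iteration for the Karcher mean of SPD matrices under the affine-invariant metric. This is the classical textbook scheme, not the method proposed in the paper; the function name and stopping parameters are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm

def karcher_mean(mats, n_iter=50, tol=1e-10):
    """Fixed-point iteration for the Karcher (Frechet) mean of SPD
    matrices under the affine-invariant Riemannian metric."""
    M = np.mean(mats, axis=0)  # initialize at the arithmetic mean
    for _ in range(n_iter):
        M_half = sqrtm(M)
        M_ihalf = np.linalg.inv(M_half)
        # average the log-maps of all matrices in the tangent space at M
        T = np.mean([logm(M_ihalf @ C @ M_ihalf) for C in mats], axis=0)
        # move along the geodesic in the averaged direction
        M = M_half @ expm(T) @ M_half
        if np.linalg.norm(T) < tol:  # gradient norm small: converged
            break
    return np.real(M)
```

For two commuting matrices the Karcher mean reduces to the entrywise geometric mean of the eigenvalues, which gives a quick sanity check of the iteration.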
no code implementations • 20 Oct 2023 • Vasilii Feofanov, Malik Tiomoko, Aladin Virmaux
As an application, we derive a hyperparameter selection policy that finds the best balance between the supervised and the unsupervised terms of our learning criterion.
no code implementations • 1 Nov 2021 • Malik Tiomoko, Romain Couillet, Frédéric Pascal
The article proposes and theoretically analyses a computationally efficient multi-task learning (MTL) extension of popular principal component analysis (PCA)-based supervised learning schemes [Barshan et al., 2011; Bair et al., 2006].
1 code implementation • 9 Oct 2021 • Sami Fakhry, Romain Couillet, Malik Tiomoko
This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) which is (i) theoretically optimal for Gaussian mixtures and (ii) computationally cheap and scalable.
no code implementations • ICLR 2021 • Malik Tiomoko, Hafiz Tiomoko Ali, Romain Couillet
This article provides theoretical insights into the inner workings of multi-task and transfer learning methods, by studying the tractable least-square support vector machine multi-task learning (LS-SVM MTL) method in the limit of large data dimension ($p$) and sample size ($n$).
no code implementations • 3 Sep 2020 • Malik Tiomoko, Romain Couillet, Hafiz Tiomoko
Multi-Task Learning (MTL) efficiently leverages useful information contained in multiple related tasks to help improve the generalization performance of all tasks.
1 code implementation • 8 Mar 2019 • Malik Tiomoko, Romain Couillet
This article proposes a method to consistently estimate functionals $\frac1p\sum_{i=1}^pf(\lambda_i(C_1C_2))$ of the eigenvalues of the product of two covariance matrices $C_1, C_2\in\mathbb{R}^{p\times p}$ based on the empirical estimates $\lambda_i(\hat C_1\hat C_2)$ ($\hat C_a=\frac1{n_a}\sum_{i=1}^{n_a} x_i^{(a)}x_i^{(a){{\sf T}}}$), when the size $p$ and number $n_a$ of the (zero mean) samples $x_i^{(a)}$ are similar.
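For reference, the naive plug-in estimator that such consistent estimators improve upon simply evaluates the functional on the eigenvalues of the product of sample covariances. The sketch below shows only this baseline (function name and `f` argument are illustrative), not the article's corrected estimator; the baseline is known to be biased when $p$ is comparable to $n_a$.

```python
import numpy as np

def plugin_spectral_functional(X1, X2, f):
    """Naive plug-in estimate of (1/p) * sum_i f(lambda_i(C1 C2)),
    replacing C1, C2 by the sample covariances of zero-mean samples
    X_a of shape (n_a, p)."""
    C1 = X1.T @ X1 / X1.shape[0]  # sample covariance of first set
    C2 = X2.T @ X2 / X2.shape[0]  # sample covariance of second set
    eigs = np.linalg.eigvals(C1 @ C2)  # real for a product of SPD matrices
    return np.mean(f(np.real(eigs)))
```

With $C_1 = C_2 = I_p$ and $n_a \gg p$, the eigenvalues concentrate near 1, so for $f(\lambda) = \lambda$ the estimate should be close to 1.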
no code implementations • 7 Feb 2019 • Malik Tiomoko, Florent Bouchard, Guillaume Ginolhac, Romain Couillet
Relying on recent advances in statistical estimation of covariance distances based on random matrix theory, this article proposes an improved covariance and precision matrix estimation for a wide family of metrics.
no code implementations • 10 Oct 2018 • Romain Couillet, Malik Tiomoko, Steeve Zozor, Eric Moisan
Given two sets $x_1^{(1)},\ldots, x_{n_1}^{(1)}$ and $x_1^{(2)},\ldots, x_{n_2}^{(2)}\in\mathbb{R}^p$ (or $\mathbb{C}^p$) of random vectors with zero mean and positive definite covariance matrices $C_1$ and $C_2\in\mathbb{R}^{p\times p}$ (or $\mathbb{C}^{p\times p}$), respectively, this article provides novel estimators for a wide range of distances between $C_1$ and $C_2$ (along with divergences between some zero mean and covariance $C_1$ or $C_2$ probability measures) of the form $\frac1p\sum_{i=1}^p f(\lambda_i(C_1^{-1}C_2))$ (with $\lambda_i(X)$ the eigenvalues of matrix $X$).
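One classical member of this family of distances is the affine-invariant (Fisher) distance, obtained with $f(\lambda)=\log^2\lambda$ (shown here without the $1/p$ normalization). The sketch below evaluates it directly on the population matrices via a generalized eigenvalue solve; it is only an illustration of the quantity being estimated, not the article's improved estimator.

```python
import numpy as np
from scipy.linalg import eigh

def fisher_distance(C1, C2):
    """Affine-invariant (Fisher) distance between SPD matrices:
    sqrt( sum_i log^2 lambda_i(C1^{-1} C2) ), computed through the
    generalized eigenvalue problem C2 v = lambda C1 v."""
    lam = eigh(C2, C1, eigvals_only=True)  # eigenvalues of C1^{-1} C2
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

Solving the generalized problem avoids forming $C_1^{-1}C_2$ explicitly, which is better conditioned when $C_1$ is nearly singular.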