Search Results for author: Malik Tiomoko

Found 9 papers, 2 papers with code

Random matrix theory improved Fréchet mean of symmetric positive definite matrices

no code implementations • 10 May 2024 • Florent Bouchard, Ammar Mian, Malik Tiomoko, Guillaume Ginolhac, Frédéric Pascal

In this study, we consider the realm of covariance matrices in machine learning, particularly focusing on computing Fréchet means on the manifold of symmetric positive definite matrices, commonly referred to as Karcher or geometric means.
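For context, the classical Karcher/Fréchet mean that the paper's random-matrix correction builds on can be computed by the standard fixed-point (Riemannian gradient) iteration under the affine-invariant metric. Below is a minimal numpy/scipy sketch of that baseline only, not the authors' improved estimator; the function name and parameters are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm

def karcher_mean(covs, n_iter=50, tol=1e-10):
    """Karcher (Frechet) mean of SPD matrices under the affine-invariant
    metric, via the standard fixed-point / Riemannian gradient iteration."""
    M = np.mean(covs, axis=0)                          # Euclidean mean as initialization
    for _ in range(n_iter):
        M_sqrt = np.real(sqrtm(M))
        M_isqrt = np.linalg.inv(M_sqrt)
        # average of the log-maps of each C_i at the current iterate M
        G = np.mean([np.real(logm(M_isqrt @ C @ M_isqrt)) for C in covs], axis=0)
        if np.linalg.norm(G) < tol:                    # gradient small: converged
            break
        M = M_sqrt @ np.real(expm(G)) @ M_sqrt         # exponential-map update
    return M

# toy usage: average a few random 5x5 SPD matrices
rng = np.random.default_rng(0)
covs = [A @ A.T + np.eye(5) for A in rng.standard_normal((4, 5, 5))]
M = karcher_mean(covs)
```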

Random Matrix Analysis to Balance between Supervised and Unsupervised Learning under the Low Density Separation Assumption

no code implementations • 20 Oct 2023 • Vasilii Feofanov, Malik Tiomoko, Aladin Virmaux

As an application, we derive a hyperparameter selection policy that finds the best balance between the supervised and the unsupervised terms of our learning criterion.
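The paper's learning criterion is not reproduced here; as a purely illustrative sketch, assuming a generic objective of the form "supervised term + gamma * unsupervised low-density term", the role of the balance hyperparameter can be pictured as follows. All function names and loss choices are hypothetical, and in the paper gamma is chosen by the derived policy rather than by hand.

```python
import numpy as np

def semi_supervised_criterion(w, X_lab, y_lab, X_unlab, gamma):
    """Hypothetical semi-supervised objective: a supervised squared loss on
    labelled data plus gamma times an unsupervised term that rewards
    unlabelled scores far from the decision boundary (low-density separation)."""
    supervised = np.mean((X_lab @ w - y_lab) ** 2)
    unsupervised = np.mean(np.exp(-(X_unlab @ w) ** 2))
    return supervised + gamma * unsupervised

def fit(X_lab, y_lab, X_unlab, gamma, lr=0.01, n_iter=500):
    """Plain gradient descent on the criterion above."""
    w = np.zeros(X_lab.shape[1])
    for _ in range(n_iter):
        g_sup = 2 * X_lab.T @ (X_lab @ w - y_lab) / len(y_lab)
        s = X_unlab @ w
        g_unsup = X_unlab.T @ (-2 * s * np.exp(-s ** 2)) / len(s)
        w -= lr * (g_sup + gamma * g_unsup)
    return w

# gamma would normally be tuned on held-out data; the paper instead derives
# a selection policy from its random-matrix analysis.
```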

Model Selection

PCA-based Multi Task Learning: a Random Matrix Approach

no code implementations • 1 Nov 2021 • Malik Tiomoko, Romain Couillet, Frédéric Pascal

The article proposes and theoretically analyses a computationally efficient multi-task learning (MTL) extension of popular principal component analysis (PCA)-based supervised learning schemes (Barshan et al., 2011; Bair et al., 2006).
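As a reference point, the single-task supervised PCA of Barshan et al. (2011) keeps the data directions that are most dependent (in the HSIC sense) on the labels. The sketch below covers only that base scheme, assuming a linear label kernel; the multi-task extension and its random-matrix analysis are not reproduced.

```python
import numpy as np

def supervised_pca(X, y, k):
    """Supervised PCA in the spirit of Barshan et al. (2011): retain the k
    directions of X that maximize HSIC-type dependence with the labels.
    X is (n, d); y holds integer class labels 0..K-1; returns a (d, k) basis."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n                # centering matrix
    Y = np.eye(int(y.max()) + 1)[y]                    # one-hot labels
    L = Y @ Y.T                                        # linear label kernel
    Q = X.T @ H @ L @ H @ X                            # d x d target matrix
    eigval, eigvec = np.linalg.eigh(Q)
    return eigvec[:, np.argsort(eigval)[::-1][:k]]     # top-k eigenvectors

# toy usage
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = (X[:, 0] > 0).astype(int)
U = supervised_pca(X, y, k=2)
Z = X @ U                                              # supervised low-dimensional features
```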

Multi-Task Learning

Multi-task learning on the edge: cost-efficiency and theoretical optimality

1 code implementation • 9 Oct 2021 • Sami Fakhry, Romain Couillet, Malik Tiomoko

This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) which is: (i) theoretically optimal for Gaussian mixtures, (ii) computationally cheap and scalable.

Multi-Task Learning

Deciphering and Optimizing Multi-Task Learning: a Random Matrix Approach

no code implementations • ICLR 2021 • Malik Tiomoko, Hafiz Tiomoko Ali, Romain Couillet

This article provides theoretical insights into the inner workings of multi-task and transfer learning methods, by studying the tractable least-square support vector machine multi-task learning (LS-SVM MTL) method, in the limit of large ($p$) and numerous ($n$) data.
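For orientation, single-task LS-SVM training reduces to solving one linear system rather than a quadratic program, which is what makes the classifier amenable to large-dimensional analysis. The sketch below covers only that binary, linear-kernel baseline (function names are illustrative); the multi-task formulation studied in the paper is not reproduced.

```python
import numpy as np

def lssvm_fit(X, y, gamma=1.0):
    """Binary least-squares SVM with a linear kernel: training amounts to a
    single (n+1) x (n+1) linear system in the bias b and dual variables alpha."""
    n = X.shape[0]
    K = X @ X.T                                        # linear kernel matrix
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)),  K + np.eye(n) / gamma]])
    rhs = np.concatenate(([0.0], y.astype(float)))     # +/-1 labels as regression targets
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                             # alpha, b

def lssvm_predict(X_train, alpha, b, X_test):
    return np.sign(X_test @ X_train.T @ alpha + b)

# toy usage on two Gaussian classes
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((50, 10)) - 1, rng.standard_normal((50, 10)) + 1])
y = np.concatenate([-np.ones(50), np.ones(50)])
alpha, b = lssvm_fit(X, y, gamma=1.0)
train_acc = np.mean(lssvm_predict(X, alpha, b, X) == y)
```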

Multi-Task Learning

Large Dimensional Analysis and Improvement of Multi Task Learning

no code implementations • 3 Sep 2020 • Malik Tiomoko, Romain Couillet, Hafiz Tiomoko

Multi Task Learning (MTL) efficiently leverages useful information contained in multiple related tasks to help improve the generalization performance of all tasks.

Multi-Task Learning

Random Matrix-Improved Estimation of the Wasserstein Distance between two Centered Gaussian Distributions

1 code implementation • 8 Mar 2019 • Malik Tiomoko, Romain Couillet

This article proposes a method to consistently estimate functionals $\frac{1}{p}\sum_{i=1}^p f(\lambda_i(C_1 C_2))$ of the eigenvalues of the product of two covariance matrices $C_1, C_2\in\mathbb{R}^{p\times p}$ based on the empirical estimates $\lambda_i(\hat C_1\hat C_2)$ (with $\hat C_a=\frac{1}{n_a}\sum_{i=1}^{n_a} x_i^{(a)} x_i^{(a)\mathsf{T}}$), when the size $p$ and the number $n_a$ of the (zero-mean) samples $x_i^{(a)}$ are similar.
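The naive plug-in approach that such random-matrix estimators correct simply substitutes the sample covariances into the functional; a minimal sketch of that baseline (biased when p is commensurate with n_1, n_2) follows, using f = sqrt, which is tied to the 2-Wasserstein distance between centered Gaussians. This is not the consistent estimator proposed in the paper.

```python
import numpy as np

def plug_in_functional(X1, X2, f):
    """Naive estimate of (1/p) * sum_i f(lambda_i(C1 C2)) obtained by plugging
    the sample covariances into the functional; this is the estimate whose
    large-(p, n) bias the random-matrix approach removes."""
    C1_hat = X1.T @ X1 / X1.shape[0]                   # samples assumed zero mean
    C2_hat = X2.T @ X2 / X2.shape[0]
    eigs = np.real(np.linalg.eigvals(C1_hat @ C2_hat)) # real, nonnegative spectrum
    return np.mean(f(eigs))

# toy usage with f = sqrt: (1/p) * sum_i sqrt(lambda_i(C1 C2)) appears in the
# 2-Wasserstein distance between N(0, C1) and N(0, C2)
rng = np.random.default_rng(0)
p, n1, n2 = 50, 200, 300
X1 = rng.standard_normal((n1, p))
X2 = rng.standard_normal((n2, p)) @ np.diag(np.linspace(0.5, 2.0, p)) ** 0.5
estimate = plug_in_functional(X1, X2, np.sqrt)
```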

Random Matrix Improved Covariance Estimation for a Large Class of Metrics

no code implementations • 7 Feb 2019 • Malik Tiomoko, Florent Bouchard, Guillaume Ginolhac, Romain Couillet

Relying on recent advances in statistical estimation of covariance distances based on random matrix theory, this article proposes an improved covariance and precision matrix estimation for a wide family of metrics.

Random matrix-improved estimation of covariance matrix distances

no code implementations • 10 Oct 2018 • Romain Couillet, Malik Tiomoko, Steeve Zozor, Eric Moisan

Given two sets $x_1^{(1)},\ldots, x_{n_1}^{(1)}$ and $x_1^{(2)},\ldots, x_{n_2}^{(2)}\in\mathbb{R}^p$ (or $\mathbb{C}^p$) of random vectors with zero mean and positive definite covariance matrices $C_1$ and $C_2\in\mathbb{R}^{p\times p}$ (or $\mathbb{C}^{p\times p}$), respectively, this article provides novel estimators for a wide range of distances between $C_1$ and $C_2$ (along with divergences between zero-mean probability measures with covariance $C_1$ or $C_2$) of the form $\frac{1}{p}\sum_{i=1}^p f(\lambda_i(C_1^{-1}C_2))$ (with $\lambda_i(X)$ the eigenvalues of matrix $X$).
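For illustration only, the population functional is straightforward to evaluate when C_1 and C_2 are known; the sketch below takes f(t) = log^2(t) (the squared affine-invariant/Fisher distance, up to the 1/p normalization) and contrasts the true value with the naive sample plug-in that the proposed estimators are designed to improve on.

```python
import numpy as np

def spectral_distance(C1, C2, f):
    """Evaluate (1/p) * sum_i f(lambda_i(C1^{-1} C2)); with f(t) = log(t)**2
    this is the squared affine-invariant (Fisher) distance divided by p."""
    eigs = np.real(np.linalg.eigvals(np.linalg.solve(C1, C2)))
    return np.mean(f(eigs))

fisher = lambda t: np.log(t) ** 2

# population value vs. naive sample plug-in on a toy example
rng = np.random.default_rng(0)
p, n = 100, 300
d = np.linspace(0.5, 2.0, p)
C1, C2 = np.eye(p), np.diag(d)
X1 = rng.standard_normal((n, p))                       # covariance C1 = I
X2 = rng.standard_normal((n, p)) * np.sqrt(d)          # covariance C2 = diag(d)
C1_hat, C2_hat = X1.T @ X1 / n, X2.T @ X2 / n
true_value = spectral_distance(C1, C2, fisher)
plug_in = spectral_distance(C1_hat, C2_hat, fisher)    # biased when p/n is not small
```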
