no code implementations • 7 Apr 2023 • Etienne Donier-Meroz, Arnak S. Dalalyan, Francis Kramarz, Philippe Choné, Xavier D'Haultfoeuille
This is cast as the problem of estimating a bivariate function known as a graphon.
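A common nonparametric baseline for graphon estimation is block averaging (the "network histogram"): order the nodes, split them into groups, and average edge densities within each block. The sketch below assumes this simple variant with a degree-based ordering and a user-chosen block count B; it is an illustration of the estimation target, not the authors' procedure.

```python
# A minimal block-averaging sketch of graphon estimation; the degree-based
# ordering and the block count B are illustrative assumptions.
import numpy as np

def estimate_graphon_blocks(A, B):
    """Estimate a graphon from a symmetric adjacency matrix A by sorting
    nodes by degree, splitting them into B groups, and averaging per block."""
    order = np.argsort(A.sum(axis=1))          # sort nodes by degree
    groups = np.array_split(order, B)          # B contiguous groups
    W_hat = np.zeros((B, B))
    for a in range(B):
        for b in range(B):
            block = A[np.ix_(groups[a], groups[b])]
            W_hat[a, b] = block.mean()         # empirical edge density
    return W_hat
```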
no code implementations • 5 Apr 2022 • Amir-Hossein Bateni, Arshak Minasyan, Arnak S. Dalalyan
We study the problem of robust estimation of the mean vector of a sub-Gaussian distribution.
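One classical robust surrogate for the sample mean in this setting is the geometric median, computable by Weiszfeld's algorithm; the sketch below shows that baseline only and is not the estimator analyzed in the paper.

```python
# Weiszfeld's algorithm for the geometric median, a standard robust
# alternative to the sample mean under contamination (baseline sketch only).
import numpy as np

def geometric_median(X, n_iter=100, tol=1e-7):
    """Geometric median of the rows of X via iteratively reweighted averaging."""
    mu = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), 1e-12)  # avoid /0
        w = 1.0 / d
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```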
no code implementations • NeurIPS 2020 • Avetik Karagulyan, Arnak S. Dalalyan
An upper bound on the Wasserstein-2 distance between the distribution of the penalized Langevin dynamics (PLD) at time $t$ and the target is established.
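For orientation, the unpenalized discretized counterpart is plain Langevin Monte Carlo for a smooth log-concave target $\propto e^{-f}$; the paper's PLD adds a vanishing penalty term, which the minimal sketch below omits.

```python
# Euler-Maruyama discretization of the Langevin diffusion
# dX_t = -grad f(X_t) dt + sqrt(2) dW_t (penalty term of PLD omitted).
import numpy as np

def langevin_step(x, grad_f, h, rng):
    """One step of unadjusted Langevin Monte Carlo with step size h."""
    return x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
grad_f = lambda x: x                   # standard Gaussian target, f(x) = |x|^2 / 2
x = np.zeros(5)
for _ in range(10_000):
    x = langevin_step(x, grad_f, h=1e-2, rng=rng)
```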
no code implementations • 4 Feb 2020 • Arnak S. Dalalyan, Arshak Minasyan
It is the first result of this kind in the literature and involves only the effective rank of the covariance matrix.
Statistics Theory
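The effective rank in the bound has a one-line definition, $r(\Sigma) = \mathrm{tr}(\Sigma)/\|\Sigma\|_{\mathrm{op}}$, and can be computed directly from a sample covariance matrix:

```python
# Effective rank r(Sigma) = tr(Sigma) / ||Sigma||_op, the quantity the
# bound depends on, computed here from a sample covariance.
import numpy as np

def effective_rank(Sigma):
    eigvals = np.linalg.eigvalsh(Sigma)        # Sigma is symmetric PSD
    return eigvals.sum() / eigvals.max()

X = np.random.default_rng(1).standard_normal((500, 20))
Sigma_hat = np.cov(X, rowvar=False)
print(effective_rank(Sigma_hat))               # close to 20 for isotropic data
```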
no code implementations • 20 Jun 2019 • Arnak S. Dalalyan, Avetik Karagulyan, Lionel Riou-Durand
The sampling error is measured in Wasserstein-$q$ distances.
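In one dimension, the Wasserstein-$q$ distance between two empirical measures with equally many atoms reduces to matching order statistics, which gives a quick empirical check of sampling accuracy (this is a diagnostic, not the paper's bounds):

```python
# Empirical Wasserstein-q distance in 1D between two samples of equal size,
# using the order-statistics representation of optimal transport on the line.
import numpy as np

def wasserstein_q_1d(x, y, q=2):
    x, y = np.sort(x), np.sort(y)              # requires len(x) == len(y)
    return (np.mean(np.abs(x - y) ** q)) ** (1.0 / q)
```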
no code implementations • 12 Apr 2019 • Arnak S. Dalalyan, Philip Thompson
We study the problem of estimating a $p$-dimensional $s$-sparse vector in a linear model with Gaussian design and additive noise.
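The standard convex approach to this $s$-sparse recovery problem is the lasso; a self-contained proximal-gradient (ISTA) sketch is given below, with an illustrative choice of penalty level.

```python
# ISTA for the lasso in the sparse linear model y = X beta + noise;
# the penalty lam and iteration count are illustrative.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, n_iter=500):
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```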
no code implementations • 15 Mar 2019 • Victor-Emmanuel Brunel, Arnak S. Dalalyan, Nicolas Schreuder
M-estimators are ubiquitous in machine learning and statistical learning theory.
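An M-estimator minimizes an empirical average of a loss function; the Huber location estimator below is one standard member of this class, offered purely as an illustration of the objects the paper studies.

```python
# A generic M-estimator sketch: minimize the average Huber loss over
# candidate locations (one standard instance of the M-estimation framework).
import numpy as np
from scipy.optimize import minimize

def huber(r, delta=1.0):
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def m_estimate_location(x, delta=1.0):
    obj = lambda mu: huber(x - mu, delta).mean()
    return minimize(obj, x0=np.median(x, keepdims=True)).x[0]
```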
no code implementations • 12 Feb 2019 • Amir-Hossein Bateni, Arnak S. Dalalyan
Assuming that the discrete variable takes $k$ values, the unknown parameter $\boldsymbol \theta$ is a $k$-dimensional vector belonging to the probability simplex.
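The natural non-robust baseline in this setting is the vector of empirical frequencies, which is the maximum likelihood estimate and lies on the simplex by construction; the paper is concerned with robust alternatives under contamination.

```python
# Empirical frequencies as the (non-robust) baseline estimate of theta
# on the probability simplex, for a discrete variable with k values.
import numpy as np

def empirical_theta(samples, k):
    counts = np.bincount(samples, minlength=k)
    return counts / counts.sum()               # nonnegative, sums to one

theta_hat = empirical_theta(np.random.default_rng(2).integers(0, 4, 1000), k=4)
```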
no code implementations • 24 Jul 2018 • Arnak S. Dalalyan, Lionel Riou-Durand
We then use this result to obtain improved guarantees for sampling with the kinetic Langevin Monte Carlo method, when the quality of sampling is measured by the Wasserstein distance.
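The kinetic (underdamped) Langevin diffusion tracks a position and a velocity with friction $\gamma$; the Euler-type step below only conveys the shape of the algorithm, since the paper analyzes a more careful discretization.

```python
# A naive Euler-type step of the kinetic Langevin diffusion with position x,
# velocity v, and friction gamma; illustrative only, not the exact
# discretization analyzed in the paper.
import numpy as np

def kinetic_langevin_step(x, v, grad_f, h, gamma, rng):
    v = v - h * (gamma * v + grad_f(x)) \
        + np.sqrt(2 * gamma * h) * rng.standard_normal(x.shape)
    x = x + h * v
    return x, v
```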
no code implementations • 21 May 2018 • Philip Thompson, Arnak S. Dalalyan
Motivated by the construction of tractable robust estimators via convex relaxations, we present conditions on the sample size that guarantee an augmented Restricted Eigenvalue-type condition for Gaussian designs.
Statistics Theory
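As a rough numerical illustration, one can check a simplified restricted-eigenvalue-type quantity by taking the smallest singular value of $X$ restricted to each support of size $s$; the actual RE condition involves a cone of approximately sparse vectors, which this brute-force simplification ignores, and it is only feasible for small $p$.

```python
# Brute-force check of a simplified restricted-eigenvalue-type quantity:
# the smallest singular value of X restricted to every support of size s.
# (The true RE condition involves a cone; this ignores it.)
import numpy as np
from itertools import combinations

def min_sparse_singular_value(X, s):
    n, p = X.shape                             # only tractable for small p
    return min(np.linalg.svd(X[:, list(S)] / np.sqrt(n),
                             compute_uv=False)[-1]
               for S in combinations(range(p), s))
```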
no code implementations • 29 Sep 2017 • Arnak S. Dalalyan, Avetik G. Karagulyan
We provide nonasymptotic guarantees on the sampling error of these second-order LMCs.
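Second-order LMC schemes exploit the curvature of the potential $f$. The sketch below implements an Ozaki-type discretization under the assumption that $f$ is strongly convex (so its Hessian is invertible); whether this matches the exact variant analyzed in the paper is an assumption.

```python
# An Ozaki-type second-order Langevin Monte Carlo step, using the Hessian of
# the potential f; assumes f strongly convex so H is positive definite.
import numpy as np
from scipy.linalg import expm, sqrtm

def lmco_step(x, grad_f, hess_f, h, rng):
    H = hess_f(x)
    I = np.eye(len(x))
    Hinv = np.linalg.inv(H)
    mean_shift = Hinv @ (I - expm(-h * H)) @ grad_f(x)
    cov = Hinv @ (I - expm(-2 * h * H))        # covariance of the Gaussian noise
    noise = np.real(sqrtm(cov)) @ rng.standard_normal(len(x))
    return x - mean_shift + noise
```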
no code implementations • 25 Nov 2016 • Arnak S. Dalalyan, Edwin Grappin, Quentin Paris
These inequalities show that, when the temperature parameter is small, EWA with the Laplace prior satisfies the same type of oracle inequality as the lasso estimator, provided the quality of estimation is measured by the prediction loss.
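Exponentially weighted aggregation is easiest to see in its finite form, with weights proportional to $\exp(-\text{risk}/\text{temperature})$; the paper's EWA integrates against a continuous Laplace prior, which the toy version below only mimics.

```python
# Exponentially weighted aggregation over a finite set of candidates:
# weights proportional to exp(-empirical risk / temperature).
import numpy as np

def ewa_weights(risks, temperature):
    logits = -np.asarray(risks) / temperature
    logits -= logits.max()                     # numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Small temperature concentrates the weights on the empirical risk minimizer:
print(ewa_weights([0.9, 1.1, 1.0], temperature=0.01))
```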
no code implementations • 20 Jun 2016 • Pierre C. Bellec, Arnak S. Dalalyan, Edwin Grappin, Quentin Paris
In this paper we revisit the risk bounds of the lasso estimator in the context of transductive and semi-supervised learning.
no code implementations • 23 Dec 2014 • Arnak S. Dalalyan
Sampling from various kinds of distributions is an issue of paramount importance in statistics since it is often the key ingredient for constructing estimators, test procedures or confidence intervals.
no code implementations • 7 Feb 2014 • Arnak S. Dalalyan, Mohamed Hebiri, Johannes Lederer
Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood.
no code implementations • 17 Oct 2013 • Olivier Collier, Arnak S. Dalalyan
The problem of matching two sets of features appears in various tasks of computer vision and can often be formalized as a problem of permutation estimation.
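A standard formulation of this matching problem minimizes the total squared distance between paired features over all permutations, solvable exactly with the Hungarian algorithm; this is a common baseline, not necessarily the estimator studied in the paper.

```python
# Feature matching as permutation estimation: minimize the total squared
# distance between paired features via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_features(F1, F2):
    """Return the permutation matching rows of F1 to rows of F2."""
    cost = ((F1[:, None, :] - F2[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)   # optimal assignment
    return cols                                # F1[i] is matched to F2[cols[i]]
```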
no code implementations • 16 Apr 2013 • Arnak S. Dalalyan, Mohamed Hebiri, Katia Méziani, Joseph Salmon
Popular sparse estimation methods based on $\ell_1$-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter.
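One way to remove this dependence on the noise variance is the scaled (square-root) lasso, which alternates a lasso fit with a noise-level update so that no knowledge of the variance is required; the penalty level lam0 below is an illustrative universal choice, and the whole block is a sketch of the pivotal-tuning idea rather than the paper's method.

```python
# Scaled lasso sketch: alternate a lasso fit with a noise-level update,
# removing the need to know the noise variance. lam0 is an illustrative
# universal penalty level.
import numpy as np
from sklearn.linear_model import Lasso

def scaled_lasso(X, y, n_iter=20):
    n, p = X.shape
    lam0 = np.sqrt(2 * np.log(p) / n)          # universal penalty level
    sigma = np.std(y)                          # initial noise-level guess
    for _ in range(n_iter):
        fit = Lasso(alpha=lam0 * sigma, fit_intercept=False).fit(X, y)
        resid = y - X @ fit.coef_
        sigma = np.linalg.norm(resid) / np.sqrt(n)
    return fit.coef_, sigma
```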