Search Results for author: Arnak S. Dalalyan

Found 17 papers, 0 papers with code

Penalized Langevin dynamics with vanishing penalty for smooth and log-concave targets

no code implementations · NeurIPS 2020 · Avetik Karagulyan, Arnak S. Dalalyan

An upper bound on the Wasserstein-2 distance between the distribution of the Penalized Langevin Dynamics (PLD) at time $t$ and the target is established.
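
As a rough illustration, here is a minimal Euler-discretized sketch of penalized Langevin dynamics in Python; the function name `pld_sample`, the step size, and the vanishing-penalty schedule are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

def pld_sample(grad_f, theta0, n_steps=1000, h=1e-2, lam0=1.0):
    """Euler discretization of penalized Langevin dynamics (a sketch).

    The vanishing-penalty schedule lam0 / (k + 1) and the step size h
    are illustrative choices, not the paper's calibrated ones.
    """
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_steps):
        lam_k = lam0 / (k + 1)                 # penalty vanishes as k grows
        drift = grad_f(theta) + lam_k * theta  # gradient of penalized potential
        theta = theta - h * drift + np.sqrt(2 * h) * rng.standard_normal(theta.shape)
    return theta

# Example: target is a standard Gaussian, so grad_f(x) = x.
draw = pld_sample(lambda x: x, theta0=np.zeros(3))
```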

All-In-One Robust Estimator of the Gaussian Mean

no code implementations · 4 Feb 2020 · Arnak S. Dalalyan, Arshak Minasyan

It is the first result of this kind in the literature and involves only the effective rank of the covariance matrix.

Statistics Theory
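
For reference, a standard definition of the effective rank (the paper's notation may differ) is

\[
  \mathbf{r}(\Sigma) = \frac{\operatorname{tr}(\Sigma)}{\|\Sigma\|_{\mathrm{op}}},
\]

which never exceeds the ambient dimension and can be much smaller when the spectrum of $\Sigma$ decays quickly.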

Outlier-robust estimation of a sparse linear model using $\ell_1$-penalized Huber's $M$-estimator

no code implementations · 12 Apr 2019 · Arnak S. Dalalyan, Philip Thompson

We study the problem of estimating a $p$-dimensional $s$-sparse vector in a linear model with Gaussian design and additive noise.
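
A minimal proximal-gradient sketch of such an estimator; all names and tuning values here (`l1_huber`, `lam`, `delta`, the step size) are illustrative assumptions rather than the paper's calibrated choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_huber(X, y, lam=0.1, delta=1.345, n_iter=500):
    """Proximal-gradient sketch of an l1-penalized Huber M-estimator."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2      # 1/L for the smooth part
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        psi = np.clip(r, -delta, delta)       # Huber score function
        grad = -X.T @ psi / n                 # gradient of the Huber loss
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```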

Confidence regions and minimax rates in outlier-robust estimation on the probability simplex

no code implementations · 12 Feb 2019 · Amir-Hossein Bateni, Arnak S. Dalalyan

Assuming that the discrete variable takes $k$ values, the unknown parameter $\boldsymbol \theta$ is a $k$-dimensional vector belonging to the probability simplex.
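
For concreteness, the probability simplex in question is

\[
  \Delta_{k-1} = \Big\{ \boldsymbol\theta \in \mathbb{R}^k : \theta_j \ge 0 \text{ for all } j,\ \textstyle\sum_{j=1}^k \theta_j = 1 \Big\}.
\]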

On sampling from a log-concave density using kinetic Langevin diffusions

no code implementations · 24 Jul 2018 · Arnak S. Dalalyan, Lionel Riou-Durand

We then use this result to obtain improved guarantees for sampling via the kinetic Langevin Monte Carlo method, when the quality of sampling is measured by the Wasserstein distance.
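
For intuition, a naive Euler-type sketch of the kinetic (underdamped) Langevin dynamics; the name `kinetic_lmc` and all tuning values are illustrative, and the paper analyzes a more accurate discretization.

```python
import numpy as np

def kinetic_lmc(grad_f, theta0, n_steps=1000, h=1e-2, gamma=2.0):
    """Naive Euler discretization of the kinetic Langevin diffusion.

    Shown only to illustrate the position/velocity structure of the
    dynamics; the paper studies a more refined discretization.
    """
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(n_steps):
        noise = np.sqrt(2 * gamma * h) * rng.standard_normal(theta.shape)
        v = v - h * (gamma * v + grad_f(theta)) + noise  # velocity update
        theta = theta + h * v                            # position update
    return theta
```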

Restricted eigenvalue property for corrupted Gaussian designs

no code implementations · 21 May 2018 · Philip Thompson, Arnak S. Dalalyan

Motivated by the construction of tractable robust estimators via convex relaxations, we present conditions on the sample size which guarantee an augmented notion of Restricted Eigenvalue-type condition for Gaussian designs.

Statistics Theory
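
A standard (non-augmented) form of the Restricted Eigenvalue condition reads as follows; the paper establishes an augmented variant suited to corrupted designs, and the notation here is generic:

\[
  \kappa(s, c_0) = \min_{\substack{v \ne 0,\ |S| \le s \\ \|v_{S^c}\|_1 \le c_0 \|v_S\|_1}} \frac{\|\mathbf{X} v\|_2}{\sqrt{n}\, \|v\|_2} > 0.
\]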

User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient

no code implementations · 29 Sep 2017 · Arnak S. Dalalyan, Avetik G. Karagulyan

We provide nonasymptotic guarantees on the sampling error of these second-order LMCs.
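
A minimal sketch of Langevin Monte Carlo driven by an inexact gradient oracle; the name `lmc_inexact` and the step size are illustrative assumptions.

```python
import numpy as np

def lmc_inexact(grad_approx, theta0, n_steps=1000, h=1e-2):
    """Langevin Monte Carlo with a possibly inaccurate gradient.

    grad_approx may be a noisy or biased estimate of the true gradient;
    the paper's bounds quantify how such errors affect the sampling error.
    """
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta = (theta - h * grad_approx(theta)
                 + np.sqrt(2 * h) * rng.standard_normal(theta.shape))
    return theta
```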

On the Exponentially Weighted Aggregate with the Laplace Prior

no code implementations · 25 Nov 2016 · Arnak S. Dalalyan, Edwin Grappin, Quentin Paris

These inequalities show that if the temperature parameter is small, the EWA with the Laplace prior satisfies the same type of oracle inequality as the lasso estimator does, as long as the quality of estimation is measured by the prediction loss.

regression
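
In commonly used notation (which may differ from the paper's), the exponentially weighted aggregate with temperature $\beta > 0$ and Laplace prior is

\[
  \hat{\boldsymbol\theta}^{\mathrm{EWA}} = \int \boldsymbol\theta\, \hat\pi(d\boldsymbol\theta), \qquad
  \hat\pi(d\boldsymbol\theta) \propto \exp\Big(-\tfrac{1}{\beta}\, \|\boldsymbol Y - \mathbf{X}\boldsymbol\theta\|_2^2\Big)\, \pi(d\boldsymbol\theta),
  \qquad \pi(d\boldsymbol\theta) \propto e^{-\lambda \|\boldsymbol\theta\|_1}\, d\boldsymbol\theta.
\]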

On the prediction loss of the lasso in the partially labeled setting

no code implementations · 20 Jun 2016 · Pierre C. Bellec, Arnak S. Dalalyan, Edwin Grappin, Quentin Paris

In this paper we revisit the risk bounds of the lasso estimator in the context of transductive and semi-supervised learning.

Theoretical guarantees for approximate sampling from smooth and log-concave densities

no code implementations · 23 Dec 2014 · Arnak S. Dalalyan

Sampling from various kinds of distributions is an issue of paramount importance in statistics since it is often the key ingredient for constructing estimators, test procedures or confidence intervals.

On the Prediction Performance of the Lasso

no code implementations · 7 Feb 2014 · Arnak S. Dalalyan, Mohamed Hebiri, Johannes Lederer

Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood.

regression
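
A small simulation along these lines, with AR(1)-correlated Gaussian covariates; all values, including the penalty `alpha`, are illustrative and this is not the paper's experiment.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Correlated Gaussian design, an s-sparse target, and the in-sample
# prediction loss of the lasso (illustrative values throughout).
rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
cov = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
beta_star = np.zeros(p)
beta_star[:s] = 1.0
y = X @ beta_star + rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)   # alpha is an illustrative choice
pred_loss = np.mean((X @ (lasso.coef_ - beta_star)) ** 2)
print(pred_loss)
```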

Minimax rates in permutation estimation for feature matching

no code implementations · 17 Oct 2013 · Olivier Collier, Arnak S. Dalalyan

The problem of matching two sets of features appears in various tasks of computer vision and can often be formalized as a problem of permutation estimation.
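
A minimal sketch of this formalization, using minimum-cost assignment on pairwise feature distances; this is one natural estimator of the permutation, not necessarily the one whose minimax rates the paper derives.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Two noisy copies of the same feature set, related by an unknown permutation.
rng = np.random.default_rng(0)
features_a = rng.standard_normal((10, 4))
perm_true = rng.permutation(10)
features_b = features_a[perm_true] + 0.05 * rng.standard_normal((10, 4))

cost = cdist(features_a, features_b)        # pairwise feature distances
rows, cols = linear_sum_assignment(cost)    # minimum-cost matching
perm_hat = np.empty(10, dtype=int)
perm_hat[cols] = rows                       # b[j] matched to a[perm_hat[j]]
print(np.array_equal(perm_hat, perm_true))  # True at this noise level
```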

Learning Heteroscedastic Models by Convex Programming under Group Sparsity

no code implementations · 16 Apr 2013 · Arnak S. Dalalyan, Mohamed Hebiri, Katia Méziani, Joseph Salmon

Popular sparse estimation methods based on $\ell_1$-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter.

Time Series · Time Series Analysis
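
The dependence on the noise level can be seen in the standard tuning of the Lasso's regularization parameter,

\[
  \lambda \asymp \sigma \sqrt{\frac{2 \log p}{n}},
\]

which requires $\sigma$ (or a reliable estimate of it) to be known in advance.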
