Search Results for author: Alex Dytso

Found 15 papers, 2 papers with code

$L^1$ Estimation: On the Optimality of Linear Estimators

no code implementations • 17 Sep 2023 • Leighton P. Barnes, Alex Dytso, Jingbo Liu, H. Vincent Poor

Consider the problem of estimating a random variable $X$ from noisy observations $Y = X+ Z$, where $Z$ is standard normal, under the $L^1$ fidelity criterion.
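Under the $L^1$ criterion, the optimal estimator is the conditional median rather than the conditional mean. A minimal sketch of the underlying fact — the median minimizes mean absolute error among constant predictors — using an illustrative skewed distribution (not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)  # skewed, so mean != median

def mad(c):
    """Mean absolute deviation of the sample from a constant predictor c."""
    return np.abs(x - c).mean()

# The median minimizes E|X - c| over constants c; the mean minimizes E(X - c)^2.
assert mad(np.median(x)) < mad(np.mean(x))
```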

An MMSE Lower Bound via Poincaré Inequality

no code implementations • 12 May 2022 • Ian Zieder, Alex Dytso, Martina Cardone

Moreover, the lower bound is shown to be tight in the high-noise regime for the Gaussian noise setting under the assumption that $\mathbf{X}$ is sub-Gaussian.
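For context on the quantity being bounded: when $\mathbf{X}$ itself is Gaussian, the MMSE has the familiar closed form $\sigma_X^2\sigma_Z^2/(\sigma_X^2+\sigma_Z^2)$, which a quick Monte Carlo check can confirm. A sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
sx, sn = 1.0, 2.0  # signal and noise standard deviations (illustrative)
x = rng.normal(0.0, sx, 200_000)
y = x + rng.normal(0.0, sn, x.size)

# For jointly Gaussian (X, Y) the conditional mean is linear:
#   E[X|Y] = (sx^2 / (sx^2 + sn^2)) Y,  MMSE = sx^2 sn^2 / (sx^2 + sn^2)
est = (sx**2 / (sx**2 + sn**2)) * y
emp = np.mean((x - est) ** 2)
theory = sx**2 * sn**2 / (sx**2 + sn**2)
assert abs(emp - theory) < 0.05
```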

Entropic CLT for Order Statistics

no code implementations • 10 May 2022 • Martina Cardone, Alex Dytso, Cynthia Rush

It is well known that central order statistics exhibit a central limit behavior and converge to a Gaussian distribution as the sample size grows.
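This central-limit behavior is easy to observe empirically. A sketch for the sample median of Uniform(0,1) data, where the classical result gives asymptotic variance $1/(4 n f(1/2)^2) = 1/(4n)$; the sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1001, 20_000
medians = np.median(rng.uniform(size=(trials, n)), axis=1)

# CLT for the central order statistic: sqrt(n)(median - 1/2) -> N(0, 1/(4 f(1/2)^2)).
# For Uniform(0,1), f(1/2) = 1, so Var(median) ~ 1/(4n).
assert abs(medians.mean() - 0.5) < 0.005
assert abs(medians.var() * 4 * n - 1.0) < 0.15
```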

A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence

no code implementations • 23 Feb 2022 • Alex Dytso, Mario Goldenbaum, H. Vincent Poor, Shlomo Shamai

A common way of characterizing minimax estimators in point estimation is by moving the problem into the Bayesian estimation domain and finding a least favorable prior distribution.

Dimensionality Reduction
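As a concrete instance of the minimax-via-least-favorable-prior route (here under squared error, not the Bregman setting of the paper): for Bernoulli bias estimation, the Bayes estimator under the Beta$(\sqrt{n}/2, \sqrt{n}/2)$ least favorable prior has constant risk and is minimax. A sketch comparing its worst-case risk to the MLE's:

```python
import numpy as np
from math import comb

n = 4
theta = np.linspace(0.0, 1.0, 1001)
k = np.arange(n + 1)

# Exact squared-error risk of an estimator est(k) for Binomial(n, theta) data.
pmf = np.array([[comb(n, ki) * t**ki * (1 - t) ** (n - ki) for ki in k] for t in theta])
risk = lambda est: (pmf * (est[None, :] - theta[:, None]) ** 2).sum(axis=1)

mle = k / n
# Bayes estimator under the Beta(sqrt(n)/2, sqrt(n)/2) prior (classical minimax result).
minimax = (k + np.sqrt(n) / 2) / (n + np.sqrt(n))

# The minimax estimator's risk is constant in theta and beats the MLE's worst case.
assert risk(minimax).max() < risk(mle).max()
```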

Improved Information Theoretic Generalization Bounds for Distributed and Federated Learning

no code implementations • 4 Feb 2022 • L. P. Barnes, Alex Dytso, H. V. Poor

We consider information-theoretic bounds on expected generalization error for statistical learning problems in a networked setting.

Federated Learning • Generalization Bounds

Consistent Density Estimation Under Discrete Mixture Models

no code implementations • 3 May 2021 • Luc Devroye, Alex Dytso

In particular, under the assumptions that the probability measure $\mu$ of the observation is atomic and the map from $f$ to $\mu$ is bijective, it is shown that there exists an estimator $f_n$ such that, for every density $f$, $\lim_{n\to \infty} \mathbb{E} \left[ \int |f_n - f| \right] = 0$.

Density Estimation

A General Derivative Identity for the Conditional Mean Estimator in Gaussian Noise and Some Applications

no code implementations • 5 Apr 2021 • Alex Dytso, H. Vincent Poor, Shlomo Shamai

In the second part of the paper, via various choices of ${\bf U}$, the new identity is used to generalize many of the known identities and derive some new ones.
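The best-known member of this family of identities is Tweedie's formula: for $Y = X + Z$ with $Z \sim \mathcal{N}(0,1)$, $\mathrm{E}[X \mid Y=y] = y + \frac{d}{dy}\log p_Y(y)$. A numerical sketch with a hypothetical two-point prior (values are illustrative):

```python
import numpy as np

phi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)  # standard normal pdf
xs, ps = np.array([-1.0, 2.0]), np.array([0.3, 0.7])    # two-point prior (illustrative)

pY = lambda y: (ps * phi(y - xs)).sum()                     # marginal density of Y
cond_mean = lambda y: (ps * xs * phi(y - xs)).sum() / pY(y)  # E[X | Y = y] directly

# Tweedie's formula: E[X|Y=y] = y + d/dy log p_Y(y), checked by central differences.
y, h = 0.4, 1e-6
tweedie = y + (np.log(pY(y + h)) - np.log(pY(y - h))) / (2 * h)
assert abs(tweedie - cond_mean(y)) < 1e-4
```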

Nonparametric Estimation of the Fisher Information and Its Applications

no code implementations • 7 May 2020 • Wei Cao, Alex Dytso, Michael Fauß, H. Vincent Poor, Gang Feng

First, an estimator proposed by Bhattacharya is revisited and improved convergence rates are derived.
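For context, the quantity being estimated is $I(p) = \mathrm{E}\big[(\tfrac{d}{dy}\log p(Y))^2\big]$. A sketch that checks this definition by Monte Carlo when the density is known (this is *not* the paper's nonparametric estimator, which works without knowing $p$):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
y = rng.normal(0.0, sigma, 200_000)

# Score function d/dy log p(y) for N(0, sigma^2) is -y / sigma^2.
score = -y / sigma**2
fisher = np.mean(score**2)  # Fisher information of N(0, sigma^2) is 1 / sigma^2
assert abs(fisher - 1 / sigma**2) < 0.01
```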

Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning

no code implementations • 5 May 2020 • Semih Yagli, Alex Dytso, H. Vincent Poor

Second is the distributed setting, in which each device trains its own model and sends its model parameters to a central server, where they are aggregated to create one final model.

BIG-bench Machine Learning • Federated Learning

The Vector Poisson Channel: On the Linearity of the Conditional Mean Estimator

no code implementations • 19 Mar 2020 • Alex Dytso, Michael Fauss, H. Vincent Poor

The first result shows that the only distribution that induces the linearity of the conditional mean estimator is a product gamma distribution.
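The gamma/Poisson pairing behind this result is the conjugate one: if $X \sim \mathrm{Gamma}(k, \theta)$ and $Y\,|\,X \sim \mathrm{Poisson}(X)$, the posterior is again gamma and $\mathrm{E}[X \mid Y=y]$ is linear in $y$. A Monte Carlo sketch of the scalar case with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
k, theta = 3.0, 2.0  # gamma shape and scale (illustrative)
x = rng.gamma(k, theta, 500_000)
y = rng.poisson(x)

# Conjugacy: X | Y=y ~ Gamma(k + y, theta / (1 + theta)), hence the linear estimator
#   E[X | Y=y] = (theta / (1 + theta)) * (y + k).
for yo in (2, 5, 9):
    emp = x[y == yo].mean()
    pred = theta / (1 + theta) * (yo + k)
    assert abs(emp - pred) < 0.15
```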

The Capacity Achieving Distribution for the Amplitude Constrained Additive Gaussian Channel: An Upper Bound on the Number of Mass Points

no code implementations • 10 Jan 2019 • Alex Dytso, Semih Yagli, H. Vincent Poor, Shlomo Shamai

Finally, the third part provides bounds on the number of points for the case of $n=1$ with an additional power constraint.

Information Theory

On the Capacity of the Peak Power Constrained Vector Gaussian Channel: An Estimation Theoretic Perspective

1 code implementation • 23 Apr 2018 • Alex Dytso, H. Vincent Poor, Shlomo Shamai

This paper characterizes the necessary and sufficient conditions on the constraint $R$ such that the input distribution supported on a single sphere is optimal.

Information Theory

A Differential Privacy Mechanism Design Under Matrix-Valued Query

1 code implementation • 26 Feb 2018 • Thee Chanyaswad, Alex Dytso, H. Vincent Poor, Prateek Mittal

noise to each element of the matrix, this method is often sub-optimal as it forfeits an opportunity to exploit the structural characteristics typically associated with matrix analysis.

MVG Mechanism: Differential Privacy under Matrix-Valued Query

no code implementations • 2 Jan 2018 • Thee Chanyaswad, Alex Dytso, H. Vincent Poor, Prateek Mittal

To address this challenge, we propose a novel differential privacy mechanism called the Matrix-Variate Gaussian (MVG) mechanism, which adds a matrix-valued noise drawn from a matrix-variate Gaussian distribution, and we rigorously prove that the MVG mechanism preserves $(\epsilon,\delta)$-differential privacy.
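A matrix-variate Gaussian $\mathcal{MN}(M, \Sigma_r, \Sigma_c)$ can be sampled as $M + A G B$, where $G$ has i.i.d. standard normal entries, $\Sigma_r = AA^{\top}$ and $\Sigma_c = B^{\top}B$. A sketch of the noise-addition step only; the covariances and query below are illustrative placeholders, whereas the actual MVG mechanism calibrates $\Sigma_r, \Sigma_c$ to the query's sensitivity and the $(\epsilon,\delta)$ budget:

```python
import numpy as np

rng = np.random.default_rng(5)

def matrix_gaussian(M, Sigma_r, Sigma_c, rng):
    """Sample from MN(M, Sigma_r, Sigma_c) via M + A G B,
    with Sigma_r = A A^T (row covariance) and Sigma_c = B^T B (column covariance)."""
    A = np.linalg.cholesky(Sigma_r)
    B = np.linalg.cholesky(Sigma_c).T
    G = rng.standard_normal(M.shape)
    return M + A @ G @ B

# Hypothetical matrix-valued query answer (illustrative values only).
query = np.arange(6, dtype=float).reshape(2, 3)
Sigma_r, Sigma_c = 0.5 * np.eye(2), 0.5 * np.eye(3)
noisy = matrix_gaussian(query, Sigma_r, Sigma_c, rng)
assert noisy.shape == query.shape
```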
