no code implementations • 27 Jan 2024 • Minoh Jeong, Martina Cardone, Alex Dytso
However, it is challenging to determine whether such methods achieve optimal performance.
no code implementations • 17 Sep 2023 • Leighton P. Barnes, Alex Dytso, Jingbo Liu, H. Vincent Poor
Consider the problem of estimating a random variable $X$ from noisy observations $Y = X+ Z$, where $Z$ is standard normal, under the $L^1$ fidelity criterion.
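Under the $L^1$ criterion the optimal estimator is the conditional median of $X$ given $Y$, rather than the conditional mean, which is optimal under $L^2$. A minimal Monte Carlo sketch of this distinction, using an assumed two-point prior $X \in \{0, 1\}$ for illustration (this prior is not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.integers(0, 2, n).astype(float)    # assumed prior: X uniform on {0, 1}
y = x + rng.standard_normal(n)             # Y = X + Z, Z standard normal

# Posterior P(X = 1 | Y = y) from the Gaussian likelihood ratio exp(y - 1/2).
p1 = 1.0 / (1.0 + np.exp(0.5 - y))
post_mean = p1                             # E[X | Y], optimal under L^2
post_median = (p1 > 0.5).astype(float)     # posterior median, optimal under L^1

mae_mean = np.mean(np.abs(post_mean - x))
mae_median = np.mean(np.abs(post_median - x))
```

The posterior median incurs a smaller mean absolute error than the posterior mean here, consistent with the $L^1$ optimality of the conditional median.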
no code implementations • 12 May 2022 • Ian Zieder, Alex Dytso, Martina Cardone
Moreover, the lower bound is shown to be tight in the high-noise regime for the Gaussian noise setting under the assumption that $\mathbf{X}$ is sub-Gaussian.
no code implementations • 10 May 2022 • Martina Cardone, Alex Dytso, Cynthia Rush
It is well known that central order statistics exhibit a central limit behavior and converge to a Gaussian distribution as the sample size grows.
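This central limit behavior is easy to check numerically. A sketch using the sample median of uniform draws, whose asymptotic distribution is $N(1/2, 1/(4 n f^2(1/2)))$ with density $f \equiv 1$ at the median (the sample sizes below are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 801, 5_000
medians = np.median(rng.random((trials, n)), axis=1)

# Standardize with the asymptotic variance 1/(4n): the result should be
# approximately standard normal.
z = (medians - 0.5) * np.sqrt(4.0 * n)
frac_within_1sigma = np.mean(np.abs(z) < 1.0)
```

The standardized medians have mean near 0, standard deviation near 1, and roughly 68% of their mass within one sigma, as the Gaussian limit predicts.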
no code implementations • 23 Feb 2022 • Alex Dytso, Mario Goldenbaum, H. Vincent Poor, Shlomo Shamai
A common way of characterizing minimax estimators in point estimation is by moving the problem into the Bayesian estimation domain and finding a least favorable prior distribution.
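A classical textbook instance of this approach (offered as an illustration, not the setting of the paper above) is binomial proportion estimation under squared error: the $\mathrm{Beta}(\sqrt{n}/2, \sqrt{n}/2)$ prior is least favorable, and its Bayes estimator $(X + \sqrt{n}/2)/(n + \sqrt{n})$ has constant risk $1/(4(\sqrt{n}+1)^2)$ in $p$, hence is minimax:

```python
import numpy as np

def binomial_risk(n, p, a):
    """Squared-error risk of the Bayes estimator (X + a) / (n + 2a)
    under a symmetric Beta(a, a) prior, where X ~ Binomial(n, p)."""
    bias = a * (1 - 2 * p) / (n + 2 * a)
    var = n * p * (1 - p) / (n + 2 * a) ** 2
    return var + bias ** 2

n = 25
a = np.sqrt(n) / 2                  # Beta(sqrt(n)/2, sqrt(n)/2): least favorable
ps = np.linspace(0.0, 1.0, 101)
risks = binomial_risk(n, ps, a)     # constant over p => Bayes rule is minimax
```

Constant Bayes risk is exactly the certificate that the prior is least favorable and the resulting Bayes estimator is minimax.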
no code implementations • 4 Feb 2022 • L. P. Barnes, Alex Dytso, H. V. Poor
We consider information-theoretic bounds on expected generalization error for statistical learning problems in a networked setting.
no code implementations • 3 May 2021 • Luc Devroye, Alex Dytso
In particular, under the assumptions that the probability measure $\mu$ of the observation is atomic, and the map from $f$ to $\mu$ is bijective, it is shown that there exists an estimator $f_n$ such that for every density $f$, $\lim_{n\to\infty} \mathbb{E}\left[\int |f_n - f|\right] = 0$.
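The setting above (atomic $\mu$, bijective map) is specific, but the $L^1$ consistency criterion itself can be illustrated with a standard histogram estimator. The sketch below only shows the $\int |f_n - f|$ error metric shrinking with $n$ for an assumed standard-normal density; it is not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)

def l1_error_histogram(n):
    """Approximate L1 distance between a histogram density estimate and the
    true standard-normal density, evaluated at bin centers on [-4, 4]."""
    bins = max(10, int(n ** (1 / 3)))          # classic n^(1/3) bin scaling
    data = rng.standard_normal(n)
    hist, edges = np.histogram(data, bins=bins, range=(-4, 4), density=True)
    width = edges[1] - edges[0]
    centers = edges[:-1] + width / 2
    true = np.exp(-centers ** 2 / 2) / np.sqrt(2 * np.pi)
    return np.sum(np.abs(hist - true)) * width

errs = [l1_error_histogram(n) for n in (100, 10_000, 1_000_000)]
```

The approximate $L^1$ error decreases monotonically across the three sample sizes, illustrating consistency in the total-variation sense.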
no code implementations • 5 Apr 2021 • Alex Dytso, H. Vincent Poor, Shlomo Shamai
In the second part of the paper, via various choices of ${\bf U}$, the new identity is used to generalize many of the known identities and derive some new ones.
no code implementations • 7 May 2020 • Wei Cao, Alex Dytso, Michael Fauß, H. Vincent Poor, Gang Feng
First, an estimator proposed by Bhattacharya is revisited and improved convergence rates are derived.
no code implementations • 5 May 2020 • Semih Yagli, Alex Dytso, H. Vincent Poor
The second is the distributed setting, in which each device trains its own model and sends its model parameters to a central server, where these model parameters are aggregated to create one final model.
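A minimal sketch of this distributed setting, with hypothetical least-squares models fitted on each device and simple parameter averaging at the server (a FedAvg-style aggregation; all names and data here are placeholders, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def local_fit(X, y):
    """Each device fits its own least-squares model on local data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical data: 10 devices, each with 50 samples of a shared linear model.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(10):
    X = rng.standard_normal((50, 2))
    y = X @ true_w + 0.1 * rng.standard_normal(50)
    devices.append((X, y))

local_models = [local_fit(X, y) for X, y in devices]
global_model = np.mean(local_models, axis=0)   # server-side aggregation
```

Averaging the locally trained parameter vectors recovers the shared model closely, since each device's estimate is an unbiased, noisy version of it.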
no code implementations • 19 Mar 2020 • Alex Dytso, Michael Fauss, H. Vincent Poor
The first result shows that the only distribution that induces the linearity of the conditional mean estimator is a product gamma distribution.
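A classical scalar instance of a gamma distribution inducing a linear conditional mean estimator is the conjugate Poisson–gamma pair (offered here as an illustration; the snippet does not state the paper's exact channel model): if $X \sim \mathrm{Gamma}(k, \beta)$ and $Y \mid X \sim \mathrm{Poisson}(aX)$, the posterior is $\mathrm{Gamma}(k + y, \beta + a)$, so $E[X \mid Y = y] = (k + y)/(\beta + a)$, linear in $y$. A Monte Carlo check with placeholder parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
k, beta, a = 3.0, 2.0, 1.5            # Gamma shape/rate and channel gain (placeholders)
n = 500_000
x = rng.gamma(k, 1.0 / beta, n)       # X ~ Gamma(k, rate beta)
y = rng.poisson(a * x)                # Y | X ~ Poisson(a X)

# Conjugacy gives a linear conditional mean: E[X | Y = v] = (k + v) / (beta + a).
emp = np.array([x[y == v].mean() for v in range(5)])
theo = (k + np.arange(5)) / (beta + a)
```

The empirical conditional means match the linear formula, which is the scalar analogue of the product-gamma characterization above.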
no code implementations • 10 Jan 2019 • Alex Dytso, Semih Yagli, H. Vincent Poor, Shlomo Shamai
Finally, the third part provides bounds on the number of points for the case of $n=1$ with an additional power constraint.
Information Theory
1 code implementation • 23 Apr 2018 • Alex Dytso, H. Vincent Poor, Shlomo Shamai
This paper characterizes the necessary and sufficient conditions on the constraint $R$ such that the input distribution supported on a single sphere is optimal.
Information Theory
1 code implementation • 26 Feb 2018 • Thee Chanyaswad, Alex Dytso, H. Vincent Poor, Prateek Mittal
… noise to each element of the matrix, this method is often sub-optimal, as it forfeits an opportunity to exploit the structural characteristics typically associated with matrix analysis.
no code implementations • 2 Jan 2018 • Thee Chanyaswad, Alex Dytso, H. Vincent Poor, Prateek Mittal
To address this challenge, we propose a novel differential privacy mechanism called the Matrix-Variate Gaussian (MVG) mechanism, which adds a matrix-valued noise drawn from a matrix-variate Gaussian distribution, and we rigorously prove that the MVG mechanism preserves $(\epsilon,\delta)$-differential privacy.
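Sampling from a matrix-variate Gaussian reduces to an affine map of an i.i.d. Gaussian matrix: if $Z$ has i.i.d. $N(0,1)$ entries, then $AZB$ is matrix-normal with row covariance $AA^\top$ and column covariance $B^\top B$. A sketch of the noise-generation step (the covariance values are arbitrary placeholders; calibrating them to guarantee $(\epsilon,\delta)$-differential privacy is the substance of the MVG mechanism and is not shown here):

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_mvg(mean, row_cov, col_cov):
    """One draw from the matrix-variate Gaussian MN(mean, U, V), built as
    mean + A Z B with A A^T = U, B^T B = V, and Z i.i.d. standard normal."""
    A = np.linalg.cholesky(row_cov)
    B = np.linalg.cholesky(col_cov).T
    Z = rng.standard_normal(mean.shape)
    return mean + A @ Z @ B

# Hypothetical matrix-valued query answer to privatize (placeholder values).
answer = np.arange(6.0).reshape(2, 3)
U = np.array([[2.0, 0.5], [0.5, 1.0]])   # row covariance (placeholder)
V = np.eye(3)                             # column covariance (placeholder)
noisy = answer + sample_mvg(np.zeros((2, 3)), U, V)

# Empirical sanity check: for X ~ MN(0, U, V), E[X X^T] = trace(V) * U.
draws = np.stack([sample_mvg(np.zeros((2, 3)), U, V) for _ in range(20_000)])
second_moment = np.einsum('nij,nkj->ik', draws, draws) / len(draws)
```

The Cholesky factors give exactly the row and column covariance structure required, since $\mathrm{vec}(AZB)$ has covariance $(B^\top B) \otimes (AA^\top)$.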