no code implementations • 24 Aug 2023 • Puning Zhao, Fei Yu, Zhiguo Wan
Federated learning systems are susceptible to adversarial attacks.
no code implementations • 3 Aug 2023 • Puning Zhao, Lifeng Lai
A modification of the original $Q$-learning method was proposed in (Shah and Xie, 2018), which estimates $Q$ values using nearest neighbors.
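The core idea — estimating a $Q$ value by averaging over the nearest stored experiences — can be sketched as follows. This is an illustrative toy, not the exact estimator of Shah and Xie (2018); the buffer layout and function names are assumptions.

```python
import numpy as np

def knn_q_estimate(query_state, action, buffer, k=3):
    """Estimate Q(s, a) by averaging the stored one-step returns of the
    k nearest recorded states for the given action.
    `buffer` maps each action to a (states, returns) pair of arrays
    (a hypothetical layout chosen for this sketch)."""
    states, returns = buffer[action]
    dists = np.linalg.norm(states - query_state, axis=1)
    nearest = np.argsort(dists)[:k]
    return returns[nearest].mean()

# Toy buffer: one action, 1-D states, returns equal to the state value.
buffer = {0: (np.array([[0.0], [1.0], [2.0], [10.0]]),
              np.array([0.0, 1.0, 2.0, 10.0]))}
print(knn_q_estimate(np.array([1.0]), 0, buffer, k=3))  # averages 0, 1, 2 -> 1.0
```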
no code implementations • 25 Jul 2023 • Puning Zhao, Zhiguo Wan
In this paper, we design a new method that is suitable for high-dimensional problems, under an arbitrary number of Byzantine attackers.
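For context, a classical baseline for Byzantine-robust aggregation is the coordinate-wise median of worker gradients — shown below as a minimal sketch. This is a standard baseline, not the new method the paper proposes.

```python
import numpy as np

def coordinate_median(grads):
    """Coordinate-wise median of worker gradient vectors: a classical
    Byzantine-robust aggregator (baseline sketch, not the paper's method)."""
    return np.median(np.asarray(grads), axis=0)

honest = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.2])]
byzantine = [np.array([100.0, -100.0])]  # one adversarial worker
agg = coordinate_median(honest + byzantine)
print(agg)  # stays near [1, 2] despite the outlier
```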
no code implementations • 26 May 2023 • Puning Zhao, Zhiguo Wan
The final estimate is nearly minimax optimal for arbitrary $q$, up to a $\ln N$ factor.
no code implementations • 30 Mar 2021 • Puning Zhao, Lifeng Lai
In this paper, we analyze the continuous armed bandit problems for nonconvex cost functions under certain smoothness and sublevel set assumptions.
no code implementations • 30 Sep 2020 • Puning Zhao, Lifeng Lai
We show that kNN density estimation is minimax optimal under both $\ell_1$ and $\ell_\infty$ criteria, if the support set is known.
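The kNN density estimator referred to here sets $\hat f(x) = k / (N \cdot V(r_k))$, where $r_k$ is the distance from $x$ to its $k$-th nearest sample and $V(r)$ is the volume of a radius-$r$ ball. A minimal 1-D sketch (where $V(r) = 2r$):

```python
import numpy as np

def knn_density_1d(x, samples, k):
    """1-D kNN density estimate: f_hat(x) = k / (N * 2 * r_k),
    where r_k is the distance to the k-th nearest sample."""
    dists = np.sort(np.abs(samples - x))
    r_k = dists[k - 1]
    return k / (len(samples) * 2 * r_k)

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=10000)  # true density is 1 on [0, 1]
print(knn_density_1d(0.5, samples, k=100))   # should be close to 1
```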
no code implementations • 26 Feb 2020 • Puning Zhao, Lifeng Lai
Estimating Kullback-Leibler divergence from independent and identically distributed (i.i.d.) samples is an important problem in various domains.
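As a point of reference, the simplest approach is a plug-in estimate computed from empirical (histogram) distributions — sketched below. This is a baseline for illustration, not the estimator analyzed in the paper.

```python
import numpy as np

def kl_plugin(samples_p, samples_q, bins):
    """Plug-in KL estimate D(p || q) between two empirical distributions,
    discretized over a common set of histogram bins (baseline sketch)."""
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(samples_p, bins=edges)
    q, _ = np.histogram(samples_q, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    if np.any(q[mask] == 0):
        raise ValueError("q assigns zero mass where p is positive")
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

s = np.random.default_rng(1).normal(size=5000)
print(kl_plugin(s, s, bins=20))  # identical samples -> exactly 0
```

Histogram plug-in estimators are known to be badly biased for small samples or fine bins, which is one motivation for the finer analyses carried out in work like this.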
no code implementations • 22 Oct 2019 • Puning Zhao, Lifeng Lai
For both classification and regression problems, existing works have shown that if the feature vector has bounded support and its probability density is bounded away from zero on that support, then the standard kNN method, in which $k$ is the same for all test samples, achieves the minimax-optimal convergence rate.
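The fixed-$k$ scheme discussed above is just ordinary kNN regression: every query averages the labels of the same number of neighbors. A minimal sketch:

```python
import numpy as np

def knn_regress(x, train_x, train_y, k):
    """Standard kNN regression: average the labels of the k nearest
    training points, with the same k for every query point."""
    dists = np.abs(train_x - x)
    nearest = np.argsort(dists)[:k]
    return train_y[nearest].mean()

train_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
train_y = train_x ** 2
print(knn_regress(2.0, train_x, train_y, k=3))  # mean of 1, 4, 9
```

An adaptive method, by contrast, would vary $k$ with the local sample density around each query point.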
no code implementations • 27 Oct 2018 • Puning Zhao, Lifeng Lai
Existing work has analyzed the convergence rate of this estimator for random variables whose densities are bounded away from zero on their support.