no code implementations • 20 Feb 2025 • Puning Zhao, Chuan Ma, Li Shen, Shaowei Wang, Rongfei Fan
Our results demonstrate that under label local DP (LDP), the risk converges significantly faster than under full LDP, i.e., protecting both features and labels, indicating the advantage of relaxing the DP definition to focus solely on labels.
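As a concrete illustration of the label-LDP setting (a minimal sketch of classical randomized response, not the estimators analyzed in the paper), a binary label can be privatized as follows; the keep-probability $e^\varepsilon/(1+e^\varepsilon)$ is the standard $\varepsilon$-LDP calibration for a single bit.

```python
import numpy as np

def randomized_response(label: int, epsilon: float,
                        rng: np.random.Generator = np.random.default_rng()) -> int:
    """Privatize a binary label under epsilon-label-LDP via randomized response."""
    # Keep the true label w.p. e^eps / (1 + e^eps); otherwise flip it.
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return label if rng.random() < p_keep else 1 - label

# Example: privatize a batch of binary labels at epsilon = 1.
labels = [0, 1, 1, 0, 1]
private_labels = [randomized_response(y, 1.0) for y in labels]
```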
no code implementations • 19 Aug 2024 • Xingrun Yan, Shiyuan Zuo, Rongfei Fan, Han Hu, Li Shen, Puning Zhao, Yong Luo
In a real federated learning (FL) system, communication overhead for passing model parameters between the clients and the parameter server (PS) is often a bottleneck.
no code implementations • 19 Aug 2024 • Puning Zhao, Jiafei Wu, Zhe Liu, Chong Wang, Rongfei Fan, Qingming Li
The main obstacle is that existing gradient estimators have suboptimal tail properties, resulting in a superfluous factor of $d$ in the union bound.
no code implementations • 19 Aug 2024 • Puning Zhao, Jiafei Wu, Zhe Liu, Huiwen Wu
In this paper, we solve the nonparametric contextual bandit problem with unbounded contexts.
no code implementations • 18 Aug 2024 • Shiyuan Zuo, Xingrun Yan, Rongfei Fan, Li Shen, Puning Zhao, Jie Xu, Han Hu
When the loss function is strongly convex, the rate of convergence to a zero optimality gap improves to linear.
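For context, "linear" here means geometric decay of the optimality gap. As a classical benchmark (plain gradient descent, not the paper's federated algorithm), minimizing an $L$-smooth, $\mu$-strongly convex loss $f$ with step size $1/L$ gives
$$f(x_t) - f^\star \le \Big(1 - \frac{\mu}{L}\Big)^{t} \big(f(x_0) - f^\star\big).$$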
no code implementations • 27 May 2024 • Puning Zhao, Li Shen, Rongfei Fan, Qingming Li, Huiwen Wu, Jiafei Wu, Zhe Liu
Under the central model, user-level DP is strictly stronger than the item-level one.
no code implementations • 24 May 2024 • Puning Zhao, Rongfei Fan, Huiwen Wu, Qingming Li, Jiafei Wu, Zhe Liu
Label differential privacy (DP) is a framework that protects the privacy of labels in training datasets, while the feature vectors are public.
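For reference, the standard formulation: a randomized training mechanism $M$ satisfies $\varepsilon$-label DP if, for any two datasets $D, D'$ that agree on all feature vectors and differ in a single label, and any measurable set $S$ of outputs,
$$\Pr[M(D) \in S] \le e^{\varepsilon}\, \Pr[M(D') \in S].$$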
no code implementations • 22 May 2024 • Huiwen Wu, Xiaohan Li, Deyi Zhang, Xiaogang Xu, Jiafei Wu, Puning Zhao, Zhe Liu
The success of current Large Language Models (LLMs) hinges on extensive training data that is collected and stored centrally, a paradigm called Centralized Learning (CL).
no code implementations • 22 May 2024 • Qingming Li, Juzheng Miao, Puning Zhao, Li Zhou, Shouling Ji, BoWen Zhou, Furui Liu
In this study, we propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
no code implementations • 22 May 2024 • Puning Zhao, Lifeng Lai, Li Shen, Qingming Li, Jiafei Wu, Zhe Liu
We provide a theoretical analysis of our approach, which gives the noise strength needed for privacy protection, as well as a bound on the mean squared error.
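The paper's exact calibration is not reproduced in this excerpt; as a generic illustration of the noise-strength/error trade-off it quantifies, the Laplace mechanism releases a query of sensitivity $\Delta$ with noise scale $\Delta/\varepsilon$, incurring mean squared error $2\Delta^2/\varepsilon^2$:

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator = np.random.default_rng()) -> float:
    """Release `value` with Laplace noise calibrated to (sensitivity, epsilon)."""
    scale = sensitivity / epsilon
    # Var[Lap(b)] = 2 b^2, so the MSE of this release is 2 (sensitivity/epsilon)^2.
    return value + rng.laplace(loc=0.0, scale=scale)
```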
no code implementations • 3 May 2024 • Puning Zhao, Jintao Deng, Xu Cheng
In this paper, we propose soft label PU learning, in which unlabeled data are assigned soft labels according to their probabilities of being positive.
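A minimal sketch of the stated idea, with a placeholder scoring rule (the kNN-distance score and the logistic model below are illustrative assumptions, not the paper's procedure): each unlabeled point gets a soft label in $[0, 1]$ and enters the fit as a weighted mixture of a positive and a negative example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def soft_label_pu(X_pos, X_unl, k=5):
    """Fit a classifier on positive + soft-labeled unlabeled data."""
    # Placeholder soft-label rule: proximity to the known positives.
    dist, _ = NearestNeighbors(n_neighbors=k).fit(X_pos).kneighbors(X_unl)
    soft = np.exp(-dist.mean(axis=1))  # in (0, 1]; closer to positives => higher

    # Standard reduction: duplicate each unlabeled point as a positive with
    # weight `soft` and a negative with weight `1 - soft`.
    X = np.vstack([X_pos, X_unl, X_unl])
    y = np.concatenate([np.ones(len(X_pos)),
                        np.ones(len(X_unl)), np.zeros(len(X_unl))])
    w = np.concatenate([np.ones(len(X_pos)), soft, 1.0 - soft])
    return LogisticRegression().fit(X, y, sample_weight=w)
```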
1 code implementation • 2 Mar 2024 • Chenchen Tao, Xiaohao Peng, Chong Wang, Jiafei Wu, Puning Zhao, Jun Wang, Jiangbo Qian
Most models for weakly supervised video anomaly detection (WS-VAD) rely on multiple instance learning, aiming to distinguish normal and abnormal snippets without specifying the type of anomaly.
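For context, the MIL ranking objective used by many WS-VAD baselines (a common baseline loss, not necessarily this paper's) scores each video by its highest-scoring snippet and pushes abnormal bags above normal ones:

```python
import torch

def mil_ranking_loss(abn_scores: torch.Tensor, nrm_scores: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    """Hinge loss between bag scores; each input holds one video's snippet scores."""
    # Bag score = max snippet score; abnormal bags should exceed normal by `margin`.
    return torch.clamp(margin - abn_scores.max() + nrm_scores.max(), min=0.0)
```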
no code implementations • 24 Aug 2023 • Puning Zhao, Fei Yu, Zhiguo Wan
Federated learning systems are susceptible to adversarial attacks.
no code implementations • 3 Aug 2023 • Puning Zhao, Lifeng Lai
An interesting recent work (Shah and Xie, 2018) solves MDPs with bounded continuous state space by a nearest-neighbor $Q$-learning approach, which has a sample complexity of $\tilde{O}(\frac{1}{\epsilon^{d+3}(1-\gamma)^{d+7}})$ for $\epsilon$-accurate $Q$-function estimation with discount factor $\gamma$.
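To make the nearest-neighbor construction concrete (a generic sketch on a fixed anchor grid, not the cited algorithm's exact averaging scheme): the $Q$-function is stored at finitely many anchor states, and each observed transition updates the anchor nearest to the visited state.

```python
import numpy as np

def nn_q_update(anchors, Q, s, a, r, s_next, gamma=0.9, lr=0.1):
    """One nearest-neighbor Q-learning step.

    anchors: (m, d) array of anchor states; Q: (m, n_actions) value table.
    """
    i = np.argmin(np.linalg.norm(anchors - s, axis=1))       # nearest anchor to s
    j = np.argmin(np.linalg.norm(anchors - s_next, axis=1))  # nearest anchor to s'
    target = r + gamma * Q[j].max()
    Q[i, a] += lr * (target - Q[i, a])
    return Q
```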
no code implementations • 25 Jul 2023 • Puning Zhao, Zhiguo Wan
In this paper, we design a new method that is suitable for high-dimensional problems, under an arbitrary number of Byzantine attackers.
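The excerpt does not spell out the aggregation rule; as context, a standard Byzantine-robust baseline in this literature is the coordinate-wise trimmed mean, sketched below (a common baseline, not necessarily the paper's method):

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, trim_frac: float = 0.1) -> np.ndarray:
    """Coordinate-wise trimmed mean of client updates, shape (n_clients, d).

    Drops the trim_frac smallest and largest values in each coordinate,
    tolerating up to trim_frac * n_clients Byzantine clients.
    """
    n = updates.shape[0]
    t = int(np.floor(trim_frac * n))
    assert 2 * t < n, "trim fraction too large for the number of clients"
    sorted_vals = np.sort(updates, axis=0)  # sort each coordinate independently
    return sorted_vals[t:n - t].mean(axis=0)
```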
no code implementations • 26 May 2023 • Puning Zhao, Zhiguo Wan
The final estimate is nearly minimax optimal for arbitrary $q$, up to a $\ln N$ factor.
no code implementations • 30 Mar 2021 • Puning Zhao, Lifeng Lai
In this paper, we analyze continuous-armed bandit problems for nonconvex cost functions under certain smoothness and sublevel-set assumptions.
no code implementations • 30 Sep 2020 • Puning Zhao, Lifeng Lai
We show that kNN density estimation is minimax optimal under both $\ell_1$ and $\ell_\infty$ criteria, if the support set is known.
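For reference, the kNN density estimate studied in this line of work takes the standard form (up to the usual $k$ versus $k-1$ convention), where $\rho_k(x)$ is the distance from $x$ to its $k$-th nearest neighbor among the $N$ samples and $V_d$ is the volume of the unit ball in $\mathbb{R}^d$:
$$\hat f(x) = \frac{k}{N V_d \, \rho_k(x)^d}.$$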
no code implementations • 26 Feb 2020 • Puning Zhao, Lifeng Lai
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
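A widely used kNN-based estimator in this setting (the Wang-Kulkarni-Verdú construction, shown as context; the paper's analysis may concern a different estimator) is
$$\widehat D(p\|q) = \frac{d}{n} \sum_{i=1}^{n} \ln \frac{\nu_k(i)}{\rho_k(i)} + \ln \frac{m}{n-1},$$
where $\rho_k(i)$ is the distance from $X_i$ to its $k$-th nearest neighbor among the other $n-1$ samples from $p$, and $\nu_k(i)$ is its distance to the $k$-th nearest neighbor among the $m$ samples from $q$.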
no code implementations • 22 Oct 2019 • Puning Zhao, Lifeng Lai
For both classification and regression problems, existing works have shown that if the distribution of the feature vector has bounded support and its probability density function is bounded away from zero on that support, then the convergence rate of the standard kNN method, in which $k$ is the same for all test samples, is minimax optimal.
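A minimal sketch of the contrast: the fixed-$k$ rule uses one $k$ everywhere, while an adaptive rule picks $k$ per query point. The local-scale heuristic below is purely illustrative; the paper's selection rule may differ.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def adaptive_knn_classify(X_train, y_train, x, k_max=50, scale=1.0):
    """kNN majority vote with a query-dependent k (y_train: non-negative ints)."""
    nn = NearestNeighbors(n_neighbors=k_max).fit(X_train)
    dist, idx = nn.kneighbors(x.reshape(1, -1))
    # Illustrative adaptive choice: shrink k where neighbors are far (sparse
    # regions), enlarge it where they are close (dense regions).
    k = int(np.clip(scale / (np.median(dist) + 1e-12), 1, k_max))
    votes = y_train[idx[0, :k]]
    return np.bincount(votes).argmax()
```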
no code implementations • 27 Oct 2018 • Puning Zhao, Lifeng Lai
Existing work has analyzed the convergence rate of this estimator for random variables whose densities are bounded away from zero on their support.