Search Results for author: Puning Zhao

Found 21 papers, 1 paper with code

On Theoretical Limits of Learning with Label Differential Privacy

no code implementations20 Feb 2025 Puning Zhao, Chuan Ma, Li Shen, Shaowei Wang, Rongfei Fan

Our results demonstrate that under label local DP (LDP), the risk converges significantly faster than under full LDP, i.e., protecting both features and labels, which indicates the advantage of relaxing the DP definition to focus solely on labels.

Sequential Federated Learning in Hierarchical Architecture on Non-IID Datasets

no code implementations19 Aug 2024 Xingrun Yan, Shiyuan Zuo, Rongfei Fan, Han Hu, Li Shen, Puning Zhao, Yong Luo

In a real federated learning (FL) system, communication overhead for passing model parameters between the clients and the parameter server (PS) is often a bottleneck.

Federated Learning

Differential Private Stochastic Optimization with Heavy-tailed Data: Towards Optimal Rates

no code implementations19 Aug 2024 Puning Zhao, Jiafei Wu, Zhe Liu, Chong Wang, Rongfei Fan, Qingming Li

The main obstacle is that existing gradient estimators have suboptimal tail properties, resulting in a superfluous factor of $d$ in the union bound.

Stochastic Optimization

Contextual Bandits for Unbounded Context Distributions

no code implementations19 Aug 2024 Puning Zhao, Jiafei Wu, Zhe Liu, Huiwen Wu

In this paper, we solve the nonparametric contextual bandit problem with unbounded contexts.

Decision Making, Multi-Armed Bandits, +1

Byzantine-resilient Federated Learning Employing Normalized Gradients on Non-IID Datasets

no code implementations18 Aug 2024 Shiyuan Zuo, Xingrun Yan, Rongfei Fan, Li Shen, Puning Zhao, Jie Xu, Han Hu

In cases where the loss function is strongly convex, the rate at which the optimality gap converges to zero can be improved to linear.

Federated Learning

Enhancing Learning with Label Differential Privacy by Vector Approximation

no code implementations24 May 2024 Puning Zhao, Rongfei Fan, Huiwen Wu, Qingming Li, Jiafei Wu, Zhe Liu

Label differential privacy (DP) is a framework that protects the privacy of labels in training datasets, while the feature vectors are public.

CG-FedLLM: How to Compress Gradients in Federated Fine-tuning for Large Language Models

no code implementations22 May 2024 Huiwen Wu, Xiaohan Li, Deyi Zhang, Xiaogang Xu, Jiafei Wu, Puning Zhao, Zhe Liu

The success of current Large Language Models (LLMs) hinges on extensive training data that is collected and stored centrally, a paradigm called Centralized Learning (CL).

Decoder, Federated Learning

Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning

no code implementations22 May 2024 Qingming Li, Juzheng Miao, Puning Zhao, Li Zhou, Shouling Ji, BoWen Zhou, Furui Liu

In this study, we propose a novel client selection strategy designed to emulate the performance achieved with full client participation.

Fairness, Federated Learning

A Huber Loss Minimization Approach to Mean Estimation under User-level Differential Privacy

no code implementations22 May 2024 Puning Zhao, Lifeng Lai, Li Shen, Qingming Li, Jiafei Wu, Zhe Liu

We provide a theoretical analysis of our approach, which gives the noise strength needed for privacy protection, as well as the bound of mean squared error.
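The Huber-loss idea in this abstract can be illustrated with a small sketch: estimate a mean by minimizing a sum of Huber losses, whose clipped gradient bounds each contribution and hence the estimate's sensitivity. The `delta` threshold, step size, toy data, and noise scale below are illustrative assumptions, not the paper's calibrated user-level mechanism:

```python
import numpy as np

def huber_grad(r, delta):
    # Gradient of the Huber loss: linear inside [-delta, delta], clipped outside.
    return np.clip(r, -delta, delta)

def huber_mean(x, delta, steps=200, lr=0.5):
    """Mean estimate minimizing sum_i huber(x_i - mu); robust to heavy tails."""
    mu = np.median(x)  # robust initialization
    for _ in range(steps):
        mu += lr * huber_grad(x - mu, delta).mean()
    return mu

rng = np.random.default_rng(2)
x = rng.standard_t(df=2.5, size=2000) + 5.0   # heavy-tailed sample, true mean 5

mu_hat = huber_mean(x, delta=2.0)

# Each point's gradient contribution is bounded by delta, so the estimate has
# bounded sensitivity; adding noise of matching scale gives privacy. The noise
# level here is purely illustrative, not calibrated to any (epsilon, delta).
private_mu = mu_hat + rng.normal(0.0, 0.1)
```

Because the Huber gradient saturates at `delta`, a single outlying user can shift the estimate by at most `lr * delta / n` per step, which is what makes noise calibration possible.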

Soft Label PU Learning

no code implementations3 May 2024 Puning Zhao, Jintao Deng, Xu Cheng

In this paper, we propose soft label PU learning, in which unlabeled data are assigned soft labels according to their probabilities of being positive.

Common Sense Reasoning
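The idea described above, giving each unlabeled point a soft label equal to its estimated probability of being positive, can be sketched with a standard two-step PU recipe: fit a probabilistic classifier treating unlabeled data as provisional negatives, then read off predicted positive-class probabilities. The classifier choice and toy data are assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: labeled positives centered at +2; unlabeled mix of both classes.
X_pos = rng.normal(loc=2.0, scale=1.0, size=(100, 2))
X_unl = np.vstack([rng.normal(2.0, 1.0, size=(50, 2)),    # hidden positives
                   rng.normal(-2.0, 1.0, size=(150, 2))])  # hidden negatives

# Step 1: treat unlabeled points as provisional negatives and fit a
# probabilistic "non-traditional" PU classifier.
X = np.vstack([X_pos, X_unl])
y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))])
clf = LogisticRegression().fit(X, y)

# Step 2: each unlabeled point's predicted positive-class probability
# becomes its soft label for downstream training.
soft_labels = clf.predict_proba(X_unl)[:, 1]
```

Downstream, a model trained with these soft labels weights each unlabeled point by how likely it is to be positive, rather than forcing a hard 0/1 decision.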

Learn Suspected Anomalies from Event Prompts for Video Anomaly Detection

1 code implementation2 Mar 2024 Chenchen Tao, Xiaohao Peng, Chong Wang, Jiafei Wu, Puning Zhao, Jun Wang, Jiangbo Qian

Most models for weakly supervised video anomaly detection (WS-VAD) rely on multiple instance learning, aiming to distinguish normal and abnormal snippets without specifying the type of anomaly.

Anomaly Detection, Multiple Instance Learning

Minimax Optimal Q Learning with Nearest Neighbors

no code implementations3 Aug 2023 Puning Zhao, Lifeng Lai

A recent work \cite{shah2018q} solves MDPs with bounded continuous state spaces by a nearest-neighbor $Q$-learning approach, which has a sample complexity of $\tilde{O}(\frac{1}{\epsilon^{d+3}(1-\gamma)^{d+7}})$ for $\epsilon$-accurate $Q$-function estimation with discount factor $\gamma$.

Q-Learning

High Dimensional Distributed Gradient Descent with Arbitrary Number of Byzantine Attackers

no code implementations25 Jul 2023 Puning Zhao, Zhiguo Wan

In this paper, we design a new method that is suitable for high dimensional problems, under arbitrary number of Byzantine attackers.

Robust Nonparametric Regression under Poisoning Attack

no code implementations26 May 2023 Puning Zhao, Zhiguo Wan

The final estimate is nearly minimax optimal for arbitrary $q$, up to a $\ln N$ factor.

Regression

Optimal Stochastic Nonconvex Optimization with Bandit Feedback

no code implementations30 Mar 2021 Puning Zhao, Lifeng Lai

In this paper, we analyze continuous-armed bandit problems for nonconvex cost functions under certain smoothness and sublevel set assumptions.

Analysis of KNN Density Estimation

no code implementations30 Sep 2020 Puning Zhao, Lifeng Lai

We show that kNN density estimation is minimax optimal under both $\ell_1$ and $\ell_\infty$ criteria, if the support set is known.

Density Estimation
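The estimator analyzed here has a simple closed form: the density at a point is approximated by k divided by n times the volume of the ball reaching the k-th nearest sample. A minimal 1-D sketch, where the "ball" is an interval of length 2r; the choice of k and the toy data are illustrative assumptions:

```python
import numpy as np

def knn_density(x_query, samples, k):
    """kNN density estimate: f(x) ~ k / (n * volume of the ball whose radius
    is the distance from x to its k-th nearest sample). In 1-D the ball is
    an interval, so its 'volume' is 2 * r_k."""
    n = len(samples)
    dists = np.sort(np.abs(samples - x_query))
    r_k = dists[k - 1]          # distance to the k-th nearest neighbor
    return k / (n * 2.0 * r_k)  # 1-D ball volume is 2 * r_k

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=5000)

dense = knn_density(0.0, samples, k=50)   # near the mode of N(0, 1)
sparse = knn_density(3.0, samples, k=50)  # in the tail
```

The estimate adapts its bandwidth to the data: r_k shrinks where samples are dense and grows in the tails, which is what drives the minimax analysis in the paper.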

Minimax Optimal Estimation of KL Divergence for Continuous Distributions

no code implementations26 Feb 2020 Puning Zhao, Lifeng Lai

Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
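A minimal 1-D sketch of a nearest-neighbor KL estimator of the kind studied here: compare, for each sample from p, its k-th nearest-neighbor distance within the p-sample to that within the q-sample. The form follows the classical k-NN construction; the choice of k, sample sizes, and the toy Gaussians are illustrative assumptions:

```python
import numpy as np

def knn_kl_estimate(p_samples, q_samples, k=5):
    """1-D k-NN estimate of D(p || q): rho is the k-th NN distance of each
    p-sample within the p-sample, nu the k-th NN distance within the q-sample."""
    n, m = len(p_samples), len(q_samples)
    total = 0.0
    for x in p_samples:
        rho = np.partition(np.abs(p_samples - x), k)[k]       # index k skips self (distance 0)
        nu = np.partition(np.abs(q_samples - x), k - 1)[k - 1]
        total += np.log(nu / rho)
    return total / n + np.log(m / (n - 1))

rng = np.random.default_rng(3)
p = rng.normal(0.0, 1.0, size=4000)
q = rng.normal(1.0, 1.0, size=4000)

# True KL( N(0,1) || N(1,1) ) is 0.5; the estimate should be in that vicinity.
d_hat = knn_kl_estimate(p, q)
```

Intuitively, where q is sparse relative to p, the nu distances exceed the rho distances and the log-ratio terms push the estimate up.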

Minimax Rate Optimal Adaptive Nearest Neighbor Classification and Regression

no code implementations22 Oct 2019 Puning Zhao, Lifeng Lai

For both classification and regression problems, existing works have shown that, if the distribution of the feature vector has bounded support and the probability density function is bounded away from zero in its support, the convergence rate of the standard kNN method, in which k is the same for all test samples, is minimax optimal.

Classification, General Classification, +1

Analysis of KNN Information Estimators for Smooth Distributions

no code implementations27 Oct 2018 Puning Zhao, Lifeng Lai

Existing work has analyzed the convergence rate of this estimator for random variables whose densities are bounded away from zero in its support.
