Search Results for author: Da Yu

Found 16 papers, 10 papers with code

Differentially Private Synthetic Data via Foundation Model APIs 2: Text

1 code implementation • 4 Mar 2024 • Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin

Lin et al. (2024) recently introduced the Private Evolution (PE) algorithm to generate DP synthetic images with only API access to diffusion models.

Privacy Preserving
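
For context, the PE loop alternates between sampling candidates through a foundation model's APIs and privately selecting the candidates nearest to the real data via a noised nearest-neighbor histogram. The sketch below mimics that loop on toy vectors; api_random and api_variation are hypothetical placeholders standing in for the paper's RANDOM_API and VARIATION_API, and all scales are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
private = rng.normal(0.0, 1.0, size=(100, 8))   # stand-in private records

def api_random(n):
    # placeholder for the foundation model's RANDOM_API
    return rng.normal(0.0, 3.0, size=(n, 8))

def api_variation(x):
    # placeholder for the foundation model's VARIATION_API
    return x + 0.3 * rng.normal(size=x.shape)

pop = api_random(50)
for _ in range(10):
    # each private record votes for its nearest candidate; the Gaussian
    # noise on the vote counts is what makes the selection private
    dists = ((private[:, None, :] - pop[None, :, :]) ** 2).sum(axis=-1)
    votes = np.bincount(dists.argmin(axis=1), minlength=len(pop)).astype(float)
    votes += rng.normal(0.0, 1.0, size=votes.shape)
    survivors = pop[np.argsort(votes)[-10:]]          # keep the top-voted candidates
    pop = api_variation(np.repeat(survivors, 5, axis=0))
print(pop.shape)   # (50, 8)
```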

Privacy-Preserving Instructions for Aligning Large Language Models

no code implementations • 21 Feb 2024 • Da Yu, Peter Kairouz, Sewoong Oh, Zheng Xu

Service providers of large language model (LLM) applications collect user instructions in the wild and use them in further aligning LLMs with users' intentions.

Language Modelling • Large Language Model • +1

Selective Pre-training for Private Fine-tuning

1 code implementation • 23 May 2023 • Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zinan Lin, Saurabh Naik, Tomasz Lukasz Religa, Jian Yin, Huishuai Zhang

Besides performance improvements, our framework also shows that with careful pre-training and private fine-tuning, smaller models can match the performance of much larger models that do not have access to private data, highlighting the promise of private learning as a tool for model compression and efficiency.

Model Compression • Transfer Learning

Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping

no code implementations • 3 Dec 2022 • Jiyan He, Xuechen Li, Da Yu, Huishuai Zhang, Janardhan Kulkarni, Yin Tat Lee, Arturs Backurs, Nenghai Yu, Jiang Bian

To reduce the compute time overhead of private learning, we show that per-layer clipping, where the gradient of each neural network layer is clipped separately, allows clipping to be performed in conjunction with backpropagation in differentially private optimization.

Computational Efficiency
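
A minimal sketch of the idea, assuming a naive per-example loop and treating each parameter tensor as its own "layer"; the paper's actual speedup comes from fusing the clipping with backpropagation, which is omitted here:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))

C, sigma, lr = 1.0, 1.0, 0.1   # per-layer clip norm, noise multiplier, step size (assumed)
summed = [torch.zeros_like(p) for p in model.parameters()]

for i in range(len(x)):        # naive per-example loop, for clarity only
    model.zero_grad()
    loss_fn(model(x[i:i + 1]), y[i:i + 1]).backward()
    for acc, p in zip(summed, model.parameters()):
        # clip each parameter tensor's gradient to norm C *separately*,
        # instead of clipping the concatenated gradient to one global norm
        acc += p.grad / max(1.0, p.grad.norm().item() / C)

with torch.no_grad():
    for acc, p in zip(summed, model.parameters()):
        p -= lr * (acc + sigma * C * torch.randn_like(acc)) / len(x)
```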

Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks

no code implementations • 9 Jun 2022 • Huishuai Zhang, Da Yu, Yiping Lu, Di He

Adversarial examples, which are usually generated for specific inputs with a specific model, are ubiquitous for neural networks.

Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent

1 code implementation • 6 Jun 2022 • Da Yu, Gautam Kamath, Janardhan Kulkarni, Tie-Yan Liu, Jian Yin, Huishuai Zhang

Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for recent advances in private deep learning.
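
To illustrate the premise, the toy sketch below records each example's clipped gradient norm, which is the quantity individual accounting can charge instead of the worst-case threshold C (a minimal setup, not the paper's accountant):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
C = 1.0   # clipping threshold (assumed)

clipped_norms = []
for i in range(len(x)):
    model.zero_grad()
    loss_fn(model(x[i:i + 1]), y[i:i + 1]).backward()
    g = torch.cat([p.grad.flatten() for p in model.parameters()])
    clipped_norms.append(min(g.norm().item(), C))

# Standard DP-SGD accounting charges every example as if its gradient
# norm were exactly C; individual accounting instead uses each example's
# actual clipped norm, so examples with small gradients spend their
# privacy budget more slowly.
print(clipped_norms)
```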

Availability Attacks Create Shortcuts

1 code implementation • 1 Nov 2021 • Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu

We are the first to unveil an important population property of the perturbations of these attacks: they are almost linearly separable when assigned the target labels of the corresponding samples, and hence can serve as shortcuts for the learning objective.

Data Poisoning
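
A toy illustration of that claim (synthetic class-conditioned perturbations, not the attacks' actual outputs): perturbations that carry a per-class pattern can be fit almost perfectly by a plain linear classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, k = 1000, 64, 10
labels = rng.integers(0, k, size=n)
patterns = rng.normal(size=(k, d))                 # one pattern per target class
perturbations = 0.05 * patterns[labels] + 0.005 * rng.normal(size=(n, d))

clf = LogisticRegression(max_iter=1000).fit(perturbations, labels)
print(clf.score(perturbations, labels))            # close to 1.0
```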

Differentially Private Fine-tuning of Language Models

2 code implementations • ICLR 2022 • Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang

For example, on the MNLI dataset we achieve an accuracy of $87.8\%$ using RoBERTa-Large and $83.5\%$ using RoBERTa-Base with a privacy budget of $\epsilon = 6.7$.

Text Generation

Large Scale Private Learning via Low-rank Reparametrization

1 code implementation • 17 Jun 2021 • Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu

We propose a reparametrization scheme to address two challenges of applying differentially private SGD to large neural networks: 1) the huge memory cost of storing individual gradients, and 2) the added noise, whose magnitude suffers a notorious dependence on the model dimension.
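
A minimal sketch of one way such a reparametrization can look, with illustrative names and rank (not the paper's exact scheme); per-example gradients then only need to be stored for the small trainable factors:

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        # frozen full-rank weight plus trainable low-rank factors
        self.W0 = nn.Parameter(torch.randn(d_out, d_in) * 0.02, requires_grad=False)
        self.L = nn.Parameter(torch.randn(d_out, rank) * 0.02)
        self.R = nn.Parameter(torch.zeros(rank, d_in))

    def forward(self, x):
        return x @ (self.W0 + self.L @ self.R).t()

layer = LowRankLinear(128, 64)
out = layer(torch.randn(2, 128))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)   # far fewer trainable parameters than 128 * 64
```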

Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning

2 code implementations • ICLR 2021 • Da Yu, Huishuai Zhang, Wei Chen, Tie-Yan Liu

The privacy leakage of a model about its training data can be bounded via the differential privacy mechanism.
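
The gradient-embedding recipe can be sketched as: project per-example gradients onto a low-dimensional subspace, then clip and noise the projections so the noise lives in k dimensions rather than the full parameter dimension. In the sketch below the basis is random for illustration (the paper estimates it from auxiliary gradients), and the residual component is omitted:

```python
import torch

torch.manual_seed(0)
d, k, n = 1000, 20, 8                       # gradient dim, subspace dim, batch size
B, _ = torch.linalg.qr(torch.randn(d, k))   # orthonormal basis; GEP estimates this
                                            # from auxiliary/public gradients
grads = torch.randn(n, d)                   # stand-in per-example gradients

emb = grads @ B                             # (n, k) gradient embeddings
emb = emb / emb.norm(dim=1, keepdim=True).clamp(min=1.0)   # clip to norm <= 1
noisy_sum = emb.sum(dim=0) + 1.0 * torch.randn(k)          # noise in k dims, not d
update = (B @ noisy_sum) / n                # map back to parameter space
print(update.shape)                         # torch.Size([1000])
```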

On the Stability of Multi-branch Network

no code implementations • 1 Jan 2021 • Huishuai Zhang, Da Yu, Wei Chen, Tie-Yan Liu

More importantly, we propose a new design, "STAM aggregation", that is guaranteed to STAbilize the forward/backward process of Multi-branch networks irrespective of the number of branches.

How Does Data Augmentation Affect Privacy in Machine Learning?

1 code implementation • 21 Jul 2020 • Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu

Moreover, we show that the proposed approach can achieve higher MI attack success rates on models trained with certain data augmentations than existing methods achieve on models trained without any augmentation.

BIG-bench Machine Learning • Data Augmentation
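
A hedged sketch of the underlying intuition: query the model on several augmented copies of a candidate example and aggregate the losses into a membership score. The augmentations and the scoring rule here are stand-ins, not the paper's exact attack.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

def augmented_copies(x, n=8):
    # random horizontal flips plus small noise as stand-in augmentations
    copies = []
    for _ in range(n):
        x2 = torch.flip(x, dims=[-1]) if torch.rand(()) < 0.5 else x
        copies.append(x2 + 0.01 * torch.randn_like(x2))
    return torch.stack(copies)

def membership_score(x, y):
    # members tend to have low loss on *all* augmented copies they were
    # trained on, so the aggregate loss is a stronger membership signal
    with torch.no_grad():
        losses = [loss_fn(model(c.unsqueeze(0)), y.view(1)) for c in augmented_copies(x)]
    return -torch.stack(losses).mean()      # higher score => more likely a member

x, y = torch.randn(3, 32, 32), torch.tensor(3)
print(membership_score(x, y))
```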

Gradient Perturbation is Underrated for Differentially Private Convex Optimization

no code implementations • 26 Nov 2019 • Da Yu, Huishuai Zhang, Wei Chen, Tie-Yan Liu, Jian Yin

By using the expected curvature, we show that gradient perturbation can achieve a significantly improved utility guarantee that can theoretically justify the advantage of gradient perturbation over other perturbation methods.
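
For reference, gradient perturbation in its simplest form adds Gaussian noise to each full gradient of a convex objective before the update, as in this toy least-squares example (scales are assumed and there is no formal privacy calibration here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

w, lr, sigma = np.zeros(5), 0.05, 0.1       # step size and noise scale (assumed)
for _ in range(300):
    grad = X.T @ (X @ w - y) / len(y)       # least-squares gradient (convex)
    w -= lr * (grad + sigma * rng.normal(size=5))   # perturb the gradient itself
print(np.linalg.norm(w - w_true))           # small despite the injected noise
```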

Stability and Convergence Theory for Learning ResNet: A Full Characterization

no code implementations • 25 Sep 2019 • Huishuai Zhang, Da Yu, Mingyang Yi, Wei Chen, Tie-Yan Liu

We show that for standard initialization used in practice, $\tau = 1/\Omega(\sqrt{L})$ is a sharp value in characterizing the stability of the forward/backward process of ResNet, where $L$ is the number of residual blocks.
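
A toy rendering of the scaled residual recursion $x_{l+1} = x_l + \tau f(x_l)$ with $\tau = 1/\sqrt{L}$, just to show the forward pass staying well behaved as depth grows (block design and sizes are illustrative):

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
L, d = 64, 32
tau = 1.0 / math.sqrt(L)                    # the sharp scale from the analysis
blocks = nn.ModuleList([nn.Linear(d, d) for _ in range(L)])

x = torch.randn(1, d)
for f in blocks:
    x = x + tau * torch.relu(f(x))          # scaled residual branch
print(x.norm())                             # stays moderate rather than exploding
```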

Stabilize Deep ResNet with a Sharp Scaling Factor $\tau$

1 code implementation • 17 Mar 2019 • Huishuai Zhang, Da Yu, Mingyang Yi, Wei Chen, Tie-Yan Liu

Moreover, for ResNets with normalization layer, adding such a factor $\tau$ also stabilizes the training and obtains significant performance gain for deep ResNet.
