Search Results for author: Janardhan Kulkarni

Found 25 papers, 9 papers with code

Privately Aligning Language Models with Reinforcement Learning

no code implementations • 25 Oct 2023 • Fan Wu, Huseyin A. Inan, Arturs Backurs, Varun Chandrasekaran, Janardhan Kulkarni, Robert Sim

Positioned between pre-training and user deployment, aligning large language models (LLMs) through reinforcement learning (RL) has emerged as a prevailing strategy for training instruction-following models such as ChatGPT.

Instruction Following • Privacy Preserving • +3

Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks

no code implementations • 20 Oct 2023 • Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A Inan, Janardhan Kulkarni, Xia Hu

Large language models have revolutionized the field of NLP by achieving state-of-the-art performance on various tasks.

text similarity

Differentially Private Synthetic Data via Foundation Model APIs 1: Images

1 code implementation • 24 May 2023 • Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin

We further demonstrate the promise of applying PE on large foundation models such as Stable Diffusion to tackle challenging private datasets with a small number of high-resolution images.

Selective Pre-training for Private Fine-tuning

1 code implementation • 23 May 2023 • Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zinan Lin, Saurabh Naik, Tomasz Lukasz Religa, Jian Yin, Huishuai Zhang

How should we pre-train a fixed-size model $M$ on $D_\text{pub}$ and fine-tune it on $D_\text{priv}$ such that performance of $M$ with respect to $T$ is maximized and $M$ satisfies differential privacy with respect to $D_\text{priv}$?

Model Compression • Transfer Learning

Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping

no code implementations • 3 Dec 2022 • Jiyan He, Xuechen Li, Da Yu, Huishuai Zhang, Janardhan Kulkarni, Yin Tat Lee, Arturs Backurs, Nenghai Yu, Jiang Bian

To reduce the compute time overhead of private learning, we show that \emph{per-layer clipping}, where the gradient of each neural network layer is clipped separately, allows clipping to be performed in conjunction with backpropagation in differentially private optimization.
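The per-layer clipping described in the snippet can be sketched in a few lines (a toy NumPy illustration; the function and parameter names are assumptions — the paper's point is that, because each layer's gradient is clipped separately, the clipping can be fused with backpropagation instead of requiring a second pass over all layers):

```python
import numpy as np

def per_layer_clip(per_example_grads, clip=1.0):
    """Clip each layer's per-example gradient to norm `clip` separately
    (per-layer clipping), instead of clipping the full concatenated
    gradient (flat clipping), then sum over examples.

    per_example_grads: list over examples of {layer_name: grad_array}.
    """
    clipped_sum = {}
    for grads in per_example_grads:
        for name, arr in grads.items():
            scale = min(1.0, clip / (np.linalg.norm(arr) + 1e-12))
            clipped_sum[name] = clipped_sum.get(name, 0) + arr * scale
    return clipped_sum
```

Because each layer's clipping threshold depends only on that layer's own gradient norm, the clip can be applied the moment backprop produces the layer's gradient, which is the source of the compute savings the snippet mentions.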

Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent

1 code implementation • 6 Jun 2022 • Da Yu, Gautam Kamath, Janardhan Kulkarni, Tie-Yan Liu, Jian Yin, Huishuai Zhang

Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for recent advances in private deep learning.
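For context, a single DP-SGD step (clip each per-example gradient, sum, add Gaussian noise, average) can be sketched as follows; this is an illustrative linear-regression version with assumed names and hyperparameters, not code from any of the listed papers:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0,
                rng=np.random.default_rng(0)):
    """One DP-SGD step on squared loss for a linear model.

    Each example's gradient is clipped to L2 norm `clip`; Gaussian noise
    with std sigma*clip is added to the sum before averaging.
    """
    grads = []
    for xi, yi in zip(X, y):
        g = 2.0 * (xi @ w - yi) * xi                         # per-example grad
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip
        grads.append(g)
    noisy = (np.sum(grads, axis=0)
             + sigma * clip * rng.standard_normal(w.shape)) / len(X)
    return w - lr * noisy
```

The per-example clipping loop is exactly the overhead that work like the group-wise clipping paper above seeks to reduce.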

Differentially Private Model Compression

no code implementations • 3 Jun 2022 • FatemehSadat Mireshghallah, Arturs Backurs, Huseyin A Inan, Lukas Wutschitz, Janardhan Kulkarni

Recent papers have shown that large pre-trained language models (LLMs) such as BERT and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private models on many downstream Natural Language Processing (NLP) tasks while simultaneously guaranteeing differential privacy.

Model Compression

Private Non-smooth ERM and SCO in Subquadratic Steps

no code implementations • NeurIPS 2021 • Janardhan Kulkarni, Yin Tat Lee, Daogao Liu

We study the differentially private Empirical Risk Minimization (ERM) and Stochastic Convex Optimization (SCO) problems for non-smooth convex functions.

Differentially Private Fine-tuning of Language Models

2 code implementations • ICLR 2022 • Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang

For example, on the MNLI dataset we achieve an accuracy of $87.8\%$ using RoBERTa-Large and $83.5\%$ using RoBERTa-Base with a privacy budget of $\epsilon = 6.7$.

Text Generation

Synergy: Resource Sensitive DNN Scheduling in Multi-Tenant Clusters

no code implementations • 12 Oct 2021 • Jayashree Mohan, Amar Phanishayee, Janardhan Kulkarni, Vijay Chidambaram

Unfortunately, these schedulers do not account for a job's sensitivity to its allocation of CPU, memory, and storage resources.


Accuracy, Interpretability, and Differential Privacy via Explainable Boosting

1 code implementation • 17 Jun 2021 • Harsha Nori, Rich Caruana, Zhiqi Bu, Judy Hanwen Shen, Janardhan Kulkarni

We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy.


Private Non-smooth Empirical Risk Minimization and Stochastic Convex Optimization in Subquadratic Steps

no code implementations • 29 Mar 2021 • Janardhan Kulkarni, Yin Tat Lee, Daogao Liu

More precisely, our differentially private algorithm requires $O(\frac{N^{3/2}}{d^{1/8}}+ \frac{N^2}{d})$ gradient queries for optimal excess empirical risk, which is achieved with the help of subsampling and smoothing the function via convolution.

Differentially Private Correlation Clustering

no code implementations • 17 Feb 2021 • Mark Bun, Marek Eliáš, Janardhan Kulkarni

Correlation clustering is a widely used technique in unsupervised machine learning.

BIG-bench Machine Learning • Clustering

Fast and Memory Efficient Differentially Private-SGD via JL Projections

no code implementations • NeurIPS 2021 • Zhiqi Bu, Sivakanth Gopi, Janardhan Kulkarni, Yin Tat Lee, Judy Hanwen Shen, Uthaipon Tantipongpipat

Unlike previous attempts to make DP-SGD faster, which work only on a subset of network architectures or rely on compiler techniques, we propose an algorithmic solution that works for any network in a black-box manner, which is the main contribution of this paper.
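The title's core idea of using Johnson-Lindenstrauss (JL) projections to cheaply approximate per-example gradient norms can be sketched as follows (a minimal NumPy illustration; the function name and dimensions are assumptions, not the paper's implementation):

```python
import numpy as np

def jl_norm_estimates(G, k=64, rng=np.random.default_rng(0)):
    """Estimate the per-row L2 norms of G (an n x d matrix of per-example
    gradients) by projecting to k dimensions with a random Gaussian map
    scaled by 1/sqrt(k), so row norms are approximately preserved.
    """
    d = G.shape[1]
    P = rng.standard_normal((d, k)) / np.sqrt(k)  # JL projection matrix
    return np.linalg.norm(G @ P, axis=1)
```

Working with the k-dimensional projections instead of the full d-dimensional per-example gradients is what makes the approach memory efficient when d is large.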


no code implementations • 1 Jan 2021 • Zhiqi Bu, Sivakanth Gopi, Janardhan Kulkarni, Yin Tat Lee, Uthaipon Tantipongpipat

Differentially Private-SGD (DP-SGD) of Abadi et al. (2016) and its variations are the only known algorithms for private training of large scale neural networks.

Consistent $k$-Median: Simpler, Better and Robust

1 code implementation • 13 Aug 2020 • Xiangyu Guo, Janardhan Kulkarni, Shi Li, Jiayi Xian

In this paper we introduce and study the online consistent $k$-clustering with outliers problem, generalizing the non-outlier version of the problem studied in [Lattanzi-Vassilvitskii, ICML17].


Differentially Private Set Union

1 code implementation • ICML 2020 • Sivakanth Gopi, Pankaj Gulhane, Janardhan Kulkarni, Judy Hanwen Shen, Milad Shokouhi, Sergey Yekhanin

Known algorithms for this problem proceed by collecting a subset of items from each user, taking the union of such subsets, and disclosing the items whose noisy counts fall above a certain threshold.
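The known baseline the snippet describes — bound each user's contribution, count items, add noise, and release items above a threshold — can be sketched as follows (an illustrative sketch; the contribution cap, Laplace mechanism, and threshold are assumptions, and the paper's contribution is a better item-selection policy than this one):

```python
import numpy as np

def dp_set_union(user_items, max_contrib=3, eps=1.0, threshold=10.0,
                 rng=np.random.default_rng(0)):
    """Release items whose noisy counts exceed `threshold`.

    Each user contributes at most `max_contrib` items, bounding the
    sensitivity of every count; Laplace noise calibrated to that
    sensitivity is added before thresholding.
    """
    counts = {}
    for items in user_items:
        for item in list(items)[:max_contrib]:  # cap each user's contribution
            counts[item] = counts.get(item, 0) + 1
    released = set()
    for item, c in counts.items():
        if c + rng.laplace(0.0, max_contrib / eps) > threshold:
            released.add(item)
    return released
```

Items held by many users comfortably clear the noisy threshold, while rare items (which would identify their contributors) almost never do.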

Locally Private Hypothesis Selection

no code implementations • 21 Feb 2020 • Sivakanth Gopi, Gautam Kamath, Janardhan Kulkarni, Aleksandar Nikolov, Zhiwei Steven Wu, Huanyu Zhang

Absent privacy constraints, this problem requires $O(\log k)$ samples from $p$, and it was recently shown that the same complexity is achievable under (central) differential privacy.

Two-sample testing

Privately Learning Markov Random Fields

no code implementations • ICML 2020 • Huanyu Zhang, Gautam Kamath, Janardhan Kulkarni, Zhiwei Steven Wu

We consider the problem of learning Markov Random Fields (including the prototypical example, the Ising model) under the constraint of differential privacy.

Locally Private Gaussian Estimation

no code implementations • NeurIPS 2019 • Matthew Joseph, Janardhan Kulkarni, Jieming Mao, Zhiwei Steven Wu

We study a basic private estimation problem: each of $n$ users draws a single i.i.d.

Collecting Telemetry Data Privately

no code implementations • NeurIPS 2017 • Bolin Ding, Janardhan Kulkarni, Sergey Yekhanin

In particular, existing LDP algorithms are not suitable for repeated collection of counter data such as daily app usage statistics.

Truth and Regret in Online Scheduling

no code implementations • 1 Mar 2017 • Shuchi Chawla, Nikhil Devanur, Janardhan Kulkarni, Rad Niazadeh

The service provider's goal is to implement a truthful online mechanism for scheduling jobs so as to maximize the social welfare of the schedule.

