no code implementations • 25 Apr 2024 • Krishnamurthy Dvijotham, H. Brendan McMahan, Krishna Pillutla, Thomas Steinke, Abhradeep Thakurta
Existing algorithms for differentially private continual counting are either inefficient in terms of their space usage or add an excessive amount of noise, inducing suboptimal utility.
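For context, the baseline this line of work improves on is the classic binary-tree mechanism for continual counting, where each dyadic interval of the stream is noised once and every prefix sum is assembled from O(log T) noisy intervals. A minimal sketch of that baseline (our illustration with Gaussian noise, not the paper's space-efficient algorithm):

```python
import numpy as np

def private_prefix_sums(stream, sigma, seed=0):
    """Binary-tree mechanism sketch: noise each dyadic interval once,
    then read each prefix sum off O(log T) noisy intervals."""
    rng = np.random.default_rng(seed)
    T = len(stream)
    cache = {}  # (level, index) -> noisy sum of stream[j*2^l : (j+1)*2^l]

    def noisy(l, j):
        if (l, j) not in cache:
            lo = j << l
            cache[(l, j)] = sum(stream[lo:lo + (1 << l)]) + rng.normal(0.0, sigma)
        return cache[(l, j)]

    out = []
    for t in range(1, T + 1):
        s, start = 0.0, 0
        # decompose [0, t) into dyadic intervals given by t's binary digits
        for l in range(T.bit_length() - 1, -1, -1):
            if (t >> l) & 1:
                s += noisy(l, start >> l)
                start += 1 << l
        out.append(s)
    return out
```

With `sigma=0` this reduces to exact prefix sums, which makes the dyadic decomposition easy to check.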
no code implementations • 21 Oct 2023 • Ronak Mehta, Vincent Roulet, Krishna Pillutla, Zaid Harchaoui
We consider the distributionally robust optimization (DRO) problem with spectral risk-based uncertainty set and $f$-divergence penalty.
no code implementations • 13 Oct 2023 • Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu
Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications.
no code implementations • 10 Oct 2023 • Christopher A. Choquette-Choo, Krishnamurthy Dvijotham, Krishna Pillutla, Arun Ganesh, Thomas Steinke, Abhradeep Thakurta
We characterize the asymptotic learning utility for any choice of the correlation function, with precise analytical bounds for linear regression and, for general convex functions, a characterization as the solution to a convex program.
no code implementations • 18 May 2023 • Krishna Pillutla, Vincent Roulet, Sham Kakade, Zaid Harchaoui
Gauss-Newton methods and their stochastic versions have been widely used in machine learning and signal processing.
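For readers unfamiliar with the method, a Gauss-Newton iteration for nonlinear least squares linearizes the residual at each step and solves the resulting least-squares subproblem, avoiding second derivatives of the residual. An illustrative implementation (not the paper's algorithm):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Gauss-Newton for min_x 0.5 * ||r(x)||^2: repeatedly solve the
    linearized subproblem J(x) dx ~= -r(x) and update x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x
```

A typical use is fitting a nonlinear model such as `y = exp(-a*t)` from samples, supplying the residual and its Jacobian in the parameter.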
1 code implementation • 30 Dec 2022 • Krishna Pillutla, Lang Liu, John Thickstun, Sean Welleck, Swabha Swayamdipta, Rowan Zellers, Sewoong Oh, Yejin Choi, Zaid Harchaoui
We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images.
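The divergence frontier underlying such comparison measures can be illustrated on small discrete distributions: sweep mixtures of the two distributions and record the KL divergence to each. This is a toy sketch of the frontier only; the actual measure operates on quantized embeddings and summarizes the curve (e.g. by an area after exponential scaling):

```python
import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions; q must be > 0 wherever p > 0."""
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def divergence_frontier(p, q, grid=99):
    """Trace the KL-divergence frontier between p and q by sweeping
    mixtures r = lam*p + (1-lam)*q over lam in (0, 1)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    pts = []
    for lam in np.linspace(0.01, 0.99, grid):
        r = lam * p + (1 - lam) * q
        pts.append((kl(q, r), kl(p, r)))
    return np.array(pts)
```

When the two distributions coincide, the frontier collapses to zero divergence everywhere; distinct distributions trace a curve of strictly positive trade-offs.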
1 code implementation • 10 Dec 2022 • Ronak Mehta, Vincent Roulet, Krishna Pillutla, Lang Liu, Zaid Harchaoui
Spectral risk objectives (also called $L$-risks) allow learning systems to interpolate between optimizing average-case performance (as in empirical risk minimization) and worst-case performance on a task.
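On a finite sample, a spectral risk is simply a weighted sum of the sorted losses, with nondecreasing nonnegative weights summing to one so that larger weights fall on larger losses. A minimal sketch, including the superquantile (CVaR) weights as a special case (names are ours):

```python
import numpy as np

def spectral_risk(losses, weights):
    """Spectral risk of a sample: dot product of losses sorted in
    ascending order with nondecreasing weights summing to one."""
    losses = np.sort(np.asarray(losses, dtype=float))
    weights = np.asarray(weights, dtype=float)
    assert np.all(np.diff(weights) >= -1e-12) and np.isclose(weights.sum(), 1.0)
    return float(losses @ weights)

def cvar_weights(n, alpha):
    """Weights recovering the superquantile (CVaR) at level alpha:
    uniform mass on the worst ceil(n * (1 - alpha)) losses."""
    k = int(np.ceil(n * (1 - alpha)))
    w = np.zeros(n)
    w[-k:] = 1.0 / k
    return w
```

Uniform weights recover empirical risk minimization; putting all mass on the last entry recovers the worst-case loss.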
1 code implementation • 8 Dec 2022 • Jillian Fisher, Lang Liu, Krishna Pillutla, Yejin Choi, Zaid Harchaoui
Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and in applied AI domains.
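The influence-function idea can be sketched concretely for ridge regression, where removing a training point shifts the parameters by approximately the inverse Hessian times that point's loss gradient, scaled by 1/n. An illustrative example (the function names are ours, not the paper's):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression: minimize (1/2n) sum (x_i . theta - y_i)^2 + (lam/2) ||theta||^2."""
    n, d = X.shape
    H = X.T @ X / n + lam * np.eye(d)  # Hessian of the objective
    return np.linalg.solve(H, X.T @ y / n), H

def influence_shifts(X, y, lam=1e-2):
    """Influence-function estimate of leave-one-out parameter shifts:
    theta_{-i} - theta ~= (1/n) H^{-1} grad_i, one row per training point."""
    n, _ = X.shape
    theta, H = ridge_fit(X, y, lam)
    grads = (X @ theta - y)[:, None] * X  # per-point loss gradients
    return np.linalg.solve(H, grads.T).T / n
```

The approximation avoids n retrainings; on a small example it flags the same most-influential point as exact leave-one-out refitting.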
2 code implementations • 8 Apr 2022 • Krishna Pillutla, Kshitiz Malik, Abdelrahman Mohamed, Michael Rabbat, Maziar Sanjabi, Lin Xiao
We consider two federated learning algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on the devices.
1 code implementation • 17 Dec 2021 • Krishna Pillutla, Yassine Laguel, Jérôme Malick, Zaid Harchaoui
We present a federated learning framework that is designed to robustly deliver good predictive performance across individual clients with heterogeneous data.
1 code implementation • NeurIPS 2021 • Lang Liu, Krishna Pillutla, Sean Welleck, Sewoong Oh, Yejin Choi, Zaid Harchaoui
The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance.
1 code implementation • NeurIPS 2021 • Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi
We further quantitatively measure the quality of our codes by applying them to efficient image retrieval and out-of-distribution (OOD) detection problems.
3 code implementations • NeurIPS 2021 • Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, Zaid Harchaoui
As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical open problem.
1 code implementation • arXiv preprint 2020 • Yassine Laguel, Krishna Pillutla, Jérôme Malick, Zaid Harchaoui
We propose a federated learning framework to handle heterogeneous client devices that do not conform to the population data distribution.
2 code implementations • arXiv preprint 2019 • Krishna Pillutla, Sham M. Kakade, Zaid Harchaoui
We present a robust aggregation approach that makes federated learning robust to settings in which a fraction of the devices may send corrupted updates to the server.
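The core idea of robust aggregation in this line of work is to replace the mean of client updates with an approximate geometric median, computed by Weiszfeld-style reweighting. A minimal sketch of that general idea (our illustration, not the paper's exact smoothed algorithm):

```python
import numpy as np

def geometric_median(points, weights=None, iters=100, eps=1e-8):
    """Approximate geometric median via Weiszfeld iterations: each step
    reweights points inversely to their distance from the current
    estimate, downweighting outlying (possibly corrupted) updates."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    alphas = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    z = np.average(points, axis=0, weights=alphas)  # start at the weighted mean
    for _ in range(iters):
        dists = np.maximum(np.linalg.norm(points - z, axis=1), eps)  # smoothing floor
        betas = alphas / dists
        z = (betas[:, None] * points).sum(axis=0) / betas.sum()
    return z
```

Unlike the mean, the result stays near the majority of the updates even when a single client submits a wildly corrupted vector.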
1 code implementation • NeurIPS 2018 • Krishna Pillutla, Vincent Roulet, Sham M. Kakade, Zaid Harchaoui
We present a framework to train a structured prediction model by performing smoothing on the inference algorithm it builds upon.