no code implementations • 11 Feb 2024 • Karan Chadha, John Duchi, Rohith Kuditipudi

We consider the task of constructing confidence intervals with differential privacy.

no code implementations • 21 Jul 2023 • Karan Chadha, Junye Chen, John Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar

In this work, we study practical heuristics to improve the performance of prefix-tree based algorithms for differentially private heavy hitter detection.
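A generic form of the prefix-tree approach can be sketched as follows: candidate prefixes are extended one symbol at a time, and only prefixes whose noisy count clears a threshold survive to the next round. The function name `dp_heavy_hitters`, the binary alphabet, and the per-level Laplace thresholding below are illustrative assumptions — this is the textbook skeleton the paper's heuristics build on, not the paper's algorithm.

```python
import random


def laplace(scale):
    # A Laplace sample is the difference of two exponentials.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)


def dp_heavy_hitters(strings, length, eps_per_level, threshold):
    """Find frequent binary strings via a private prefix tree.

    At each level, every surviving prefix is extended by one symbol;
    each user's string matches exactly one extension per level, so the
    per-level count query has sensitivity 1 and Laplace(1/eps) noise
    suffices for eps-DP at that level.
    """
    candidates = [""]
    for _ in range(length):
        counts = {}
        for prefix in candidates:
            for symbol in "01":
                ext = prefix + symbol
                counts[ext] = sum(s.startswith(ext) for s in strings)
        # Keep only extensions whose noisy count clears the threshold.
        candidates = [p for p, n in counts.items()
                      if n + laplace(1.0 / eps_per_level) > threshold]
    return candidates
```

With a clear frequency gap, the true heavy hitter survives every level while rare strings are pruned early, which keeps the candidate set (and hence the noise cost) small.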

no code implementations • 17 Jan 2023 • John Duchi, Saminul Haque, Rohith Kuditipudi

We design an $(\varepsilon, \delta)$-differentially private algorithm to estimate the mean of a $d$-variate distribution, with unknown covariance $\Sigma$, that is adaptive to $\Sigma$.
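For context, the standard non-adaptive baseline is the Gaussian mechanism: clip each point, average, and add spherical noise calibrated to the sensitivity of the mean. The sketch below shows that baseline (function name and parameters are illustrative); the paper's contribution is an estimator whose error adapts to the unknown $\Sigma$ instead of paying for a worst-case spherical bound.

```python
import numpy as np


def gaussian_mech_mean(data, clip, eps, delta, rng):
    """(eps, delta)-DP mean estimate via the Gaussian mechanism.

    Each point is clipped to L2 norm `clip`; replacing one point then
    changes the mean by at most 2*clip/n in L2, and noise with standard
    deviation sens * sqrt(2 ln(1.25/delta)) / eps suffices.
    """
    n, d = data.shape
    norms = np.linalg.norm(data, axis=1, keepdims=True)
    clipped = data * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    sens = 2.0 * clip / n  # replace-one L2 sensitivity of the mean
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, size=d)
```

The spherical noise here is isotropic regardless of the data's shape, which is exactly the inefficiency an $\Sigma$-adaptive estimator avoids.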

no code implementations • 31 Oct 2022 • Hilal Asi, Karan Chadha, Gary Cheng, John Duchi

In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems -- problems where there exists a solution that simultaneously minimizes all of the sample losses -- than on non-interpolating ones; we show that generally similar improvements are impossible in the private setting.

no code implementations • 24 Oct 2022 • John Duchi, Vitaly Feldman, Lunjia Hu, Kunal Talwar

Our goal is to recover the linear subspace shared by $\mu_1,\ldots,\mu_n$ using the data points from all users, where every data point from user $i$ is formed by adding an independent mean-zero noise vector to $\mu_i$.
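Absent privacy constraints, a natural baseline for this recovery problem is spectral: average each user's points to suppress the mean-zero noise, stack the per-user averages, and take the top singular directions. The sketch below shows that baseline only (the function name `shared_subspace` is an assumption), not the algorithm analyzed in the paper.

```python
import numpy as np


def shared_subspace(user_data, k):
    """Estimate the k-dim subspace spanned by the user means.

    Averaging within each user shrinks the independent noise by
    1/sqrt(points per user); the top right singular vectors of the
    stacked averages then approximate the shared subspace.
    """
    M = np.stack([np.mean(points, axis=0) for points in user_data])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[:k]  # rows form an orthonormal basis estimate
```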

no code implementations • 24 Jun 2022 • Chen Cheng, Hilal Asi, John Duchi

The construction of most supervised learning datasets revolves around collecting multiple labels for each instance, then aggregating the labels to form a type of "gold-standard".

no code implementations • 15 Jun 2022 • Maxime Cauchois, John Duchi

The cost and scarcity of fully supervised labels in statistical machine learning encourage using partially labeled data for model validation as a cheaper and more accessible alternative.

no code implementations • 20 Feb 2022 • Chen Cheng, John Duchi, Rohith Kuditipudi

We examine the necessity of interpolation in overparameterized models, that is, when achieving optimal predictive risk in machine learning problems requires (nearly) interpolating the training data.

no code implementations • 20 Jan 2022 • Maxime Cauchois, Suyash Gupta, Alnur Ali, John Duchi

The expense of acquiring labels in large-scale statistical machine learning makes partially and weakly-labeled data attractive, though it is not always apparent how to leverage such data for model fitting or validation.

no code implementations • 16 Aug 2021 • Gary Cheng, Karan Chadha, John Duchi

We propose an asymptotic framework to analyze the performance of (personalized) federated learning algorithms.

no code implementations • NeurIPS 2021 • Hilal Asi, Daniel Levy, John Duchi

We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize.

no code implementations • 25 Jun 2021 • Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar

We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.
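The basic recipe such methods combine is: clip each sampled gradient, add Gaussian noise, and set per-coordinate stepsizes from the accumulated squared (noisy) gradients, AdaGrad-style. The sketch below is a minimal illustration of that recipe — the function name, hyperparameters, and the absence of any privacy accounting are assumptions, and it is not the paper's exact algorithm or analysis.

```python
import numpy as np


def dp_adagrad(grad_fn, x0, data, steps, clip, sigma, lr, rng):
    """DP-SGD with AdaGrad-style adaptive stepsizes (sketch).

    Each sampled gradient is clipped to L2 norm `clip` (bounding
    sensitivity), Gaussian noise with std sigma*clip is added, and
    coordinates are scaled by the accumulated squared noisy gradients.
    """
    x = x0.copy()
    accum = np.zeros_like(x)
    for _ in range(steps):
        i = rng.integers(len(data))
        g = grad_fn(x, data[i])
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        g = g + rng.normal(0.0, sigma * clip, size=g.shape)
        accum += g ** 2
        x = x - lr * g / (np.sqrt(accum) + 1e-8)
    return x
```

Because the adaptive scaling is computed from the already-noised gradients, the stepsizes themselves leak no additional information about the data.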

no code implementations • 13 Jan 2021 • Annie Marsden, John Duchi, Gregory Valiant

We study probabilistic prediction games when the underlying model is misspecified, investigating the consequences of predicting using an incorrect parametric model.

no code implementations • NeurIPS 2020 • Aman Sinha, Matthew O'Kelly, Russ Tedrake, John Duchi

Learning-based methodologies increasingly find applications in safety-critical domains like autonomous driving and medical robotics.

1 code implementation • 28 Jul 2020 • John Duchi, Tatsunori Hashimoto, Hongseok Namkoong

While modern large-scale datasets often consist of heterogeneous subpopulations -- for example, multiple demographic groups or multiple text corpora -- the standard practice of minimizing average loss fails to guarantee uniformly low losses across all subpopulations.

no code implementations • 21 Apr 2020 • Maxime Cauchois, Suyash Gupta, John Duchi

We develop conformal prediction methods for constructing valid predictive confidence sets in multiclass and multilabel problems without assumptions on the data generating distribution.
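The split-conformal recipe underlying such methods can be sketched in a few lines: score each calibration example by one minus the model's probability of its true label, take an empirical quantile of the scores, and include in each prediction set every label whose score is within that threshold. This is the generic recipe, with illustrative function names, not the specific methods developed in the paper.

```python
import numpy as np


def conformal_threshold(cal_probs, cal_labels, alpha):
    """Split-conformal calibration for multiclass prediction sets.

    Score = 1 - model probability of the true label; the threshold is
    the ceil((n+1)(1-alpha))/n empirical quantile of the scores, which
    yields marginal coverage >= 1 - alpha under exchangeability.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]


def prediction_set(probs, qhat):
    # Include every class whose score 1 - p_y is within the threshold.
    return np.where(1.0 - probs <= qhat)[0]
```

Note the guarantee is distribution-free but only marginal: coverage averages over test points rather than holding conditionally on each input.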

1 code implementation • ICML 2020 • Aman Sinha, Matthew O'Kelly, Hongrui Zheng, Rahul Mangharam, John Duchi, Russ Tedrake

Balancing performance and safety is crucial to deploying autonomous vehicles in multi-agent environments.

1 code implementation • ICML 2020 • Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, Percy Liang

In this work, we precisely characterize the effect of augmentation on the standard error in linear regression when the optimal linear predictor has zero standard and robust error.

no code implementations • 5 Dec 2019 • Hilal Asi, John Duchi, Omid Javidbakht

Differential Privacy (DP) provides strong guarantees on the risk of compromising a user's data in statistical learning applications, though these strong protections make learning challenging and may be too stringent for some use cases.

no code implementations • 1 Mar 2019 • Yu Bai, John Duchi, Song Mei

We study a family of (potentially non-convex) constrained optimization problems with convex composite structure.

no code implementations • 3 Dec 2018 • Abhishek Bhowmick, John Duchi, Julien Freudiger, Gaurav Kapoor, Ryan Rogers

In large-scale statistical learning, data collection and model fitting are moving increasingly toward peripheral devices---phones, watches, fitness trackers---away from centralized data collection.

1 code implementation • NeurIPS 2018 • Matthew O'Kelly, Aman Sinha, Hongseok Namkoong, John Duchi, Russ Tedrake

While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing.

no code implementations • 20 Oct 2018 • John Duchi, Hongseok Namkoong

A common goal in statistics and machine learning is to learn models that can perform well against distributional shifts, such as latent heterogeneous subpopulations, unknown covariate shifts, or unmodeled temporal effects.

2 code implementations • NeurIPS 2018 • Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John Duchi, Vittorio Murino, Silvio Savarese

Only using training data from a single source distribution, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model.

1 code implementation • ICLR 2018 • Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John Duchi

Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms.
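The key computational step in the distributionally robust alternative is an inner maximization: perturb each input to increase the loss, minus a quadratic penalty on the perturbation, which is strongly concave (hence tractable) for a large enough penalty parameter. The sketch below shows that inner step for a user-supplied loss gradient; the function name is an assumption and the full robust training loop is omitted.

```python
import numpy as np


def wrm_perturb(x, grad_loss, gamma, steps, lr):
    """Inner maximization of max_z loss(z) - (gamma/2) * ||z - x||^2.

    For gamma larger than the smoothness constant of the loss, the
    objective is strongly concave in z, and plain gradient ascent
    converges to the unique maximizer.
    """
    z = x.copy()
    for _ in range(steps):
        z = z + lr * (grad_loss(z) - gamma * (z - x))
    return z
```

For a linear loss with gradient c, the maximizer is x + c / gamma in closed form, which makes a convenient sanity check.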

no code implementations • 16 Dec 2016 • John Duchi, Feng Ruan

We study local complexity measures for stochastic convex optimization problems, providing a local minimax theory analogous to that of Hájek and Le Cam for classical statistical problems.

no code implementations • 11 Oct 2016 • John Duchi, Peter Glynn, Hongseok Namkoong

We study statistical inference and distributionally robust solution methods for stochastic optimization problems, focusing on confidence intervals for optimal values and solutions that achieve exact coverage asymptotically.

1 code implementation • NeurIPS 2017 • John Duchi, Hongseok Namkoong

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error.

1 code implementation • 10 Aug 2016 • Aditi Raghunathan, Roy Frostig, John Duchi, Percy Liang

In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation.

no code implementations • NeurIPS 2016 • Yuancheng Zhu, Sabyasachi Chatterjee, John Duchi, John Lafferty

The bounds are expressed in terms of a localized and computational analogue of the modulus of continuity that is central to statistical minimax analysis.

no code implementations • NeurIPS 2013 • John Duchi, Michael I. Jordan, Brendan McMahan

We study stochastic optimization problems when the \emph{data} is sparse, which is in a sense dual to the current understanding of high-dimensional statistical learning and optimization.

no code implementations • NeurIPS 2013 • John Duchi, Martin J. Wainwright, Michael I. Jordan

We provide a detailed study of the estimation of probability distributions---discrete and continuous---in a stringent setting in which data is kept private even from the statistician.

no code implementations • NeurIPS 2013 • Yuchen Zhang, John Duchi, Michael I. Jordan, Martin J. Wainwright

We establish minimax risk lower bounds for distributed statistical estimation given a budget $B$ of the total number of bits that may be communicated.

no code implementations • 12 May 2010 • John Duchi, Alekh Agarwal, Martin Wainwright

The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication.
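A minimal instance of this setting is decentralized (sub)gradient descent: each node averages its neighbors' iterates through a doubly stochastic mixing matrix, then takes a local gradient step. The sketch below is that textbook scheme with an illustrative function name — a baseline for the setting, not the dual-averaging method the paper develops.

```python
import numpy as np


def decentralized_subgradient(grads, W, x0, steps, stepsize):
    """Decentralized (sub)gradient descent over a network.

    `W` is a doubly stochastic mixing matrix (row i holds node i's
    neighbor weights); each node mixes neighbors' iterates, then steps
    along its own local (sub)gradient. Returns the network average.
    """
    X = np.tile(x0, (W.shape[0], 1)).astype(float)
    for _ in range(steps):
        G = np.array([g(x) for g, x in zip(grads, X)])
        X = W @ X - stepsize * G
    return X.mean(axis=0)
```

With a constant stepsize the individual iterates only reach an O(stepsize) neighborhood of consensus, but for smooth local objectives the network average still lands at the minimizer of the summed objective.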

Papers With Code is a free resource with all data licensed under CC-BY-SA.