Search Results for author: John C. Duchi

Found 41 papers, 8 papers with code

Efficient Learning using Forward-Backward Splitting

no code implementations NeurIPS 2009 Yoram Singer, John C. Duchi

We derive concrete and very simple algorithms for minimization of loss functions with $\ell_1$, $\ell_2$, $\ell_2^2$, and $\ell_\infty$ regularization.
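
The $\ell_1$ case illustrates the forward-backward structure: a gradient step on the loss followed by the regularizer's proximal map, which for $\ell_1$ is coordinate-wise soft-thresholding. A minimal NumPy sketch (illustrative names, stepsizes, and toy problem, not the paper's code):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1: coordinate-wise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fobos_l1_step(w, grad, eta, lam):
    """One forward-backward step: gradient step on the loss, then the l1 prox."""
    w_half = w - eta * grad                    # forward (gradient) step
    return soft_threshold(w_half, eta * lam)   # backward (proximal) step

# toy usage: l1-regularized least squares
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 20)), rng.normal(size=100)
w = np.zeros(20)
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)
    w = fobos_l1_step(w, grad, eta=0.1, lam=0.05)
```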

Distributed Dual Averaging In Networks

no code implementations NeurIPS 2010 Alekh Agarwal, Martin J. Wainwright, John C. Duchi

The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication.
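
A rough sketch of one synchronous round of the dual-averaging update on a network, assuming a doubly stochastic mixing matrix P and the quadratic prox-function $\psi(x) = \tfrac{1}{2}\|x\|_2^2$ (an illustrative, unconstrained simplification, not the paper's code):

```python
import numpy as np

def dda_round(Z, grads, P, alpha):
    """Distributed dual averaging: each node mixes its neighbors' dual variables,
    adds its local subgradient, and maps back to the primal via a prox step."""
    Z_new = P @ Z + grads        # (m, d): gossip on dual variables + local subgradients
    X_new = -alpha * Z_new       # prox step for psi(x) = 0.5 * ||x||^2, unconstrained
    return Z_new, X_new
```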

Distributed Delayed Stochastic Optimization

no code implementations NeurIPS 2011 Alekh Agarwal, John C. Duchi

We analyze the convergence of gradient-based optimization algorithms whose updates depend on delayed stochastic gradient information.

Distributed Optimization

The asymptotics of ranking algorithms

no code implementations 7 Apr 2012 John C. Duchi, Lester Mackey, Michael I. Jordan

With these negative results as motivation, we present a new approach to supervised ranking based on aggregation of partial preferences, and we develop $U$-statistic-based empirical risk minimization procedures.

Communication-Efficient Algorithms for Statistical Optimization

no code implementations 19 Sep 2012 Yuchen Zhang, John C. Duchi, Martin Wainwright

We analyze two communication-efficient algorithms for distributed statistical optimization on large-scale data sets.

regression

Privacy Aware Learning

no code implementations NeurIPS 2012 John C. Duchi, Michael I. Jordan, Martin J. Wainwright

We study statistical risk minimization problems under a privacy model in which the data is kept confidential even from the learner.

Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

no code implementations NeurIPS 2012 Andre Wibisono, Martin J. Wainwright, Michael I. Jordan, John C. Duchi

We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates.

Stochastic Optimization

Communication-Efficient Algorithms for Statistical Optimization

no code implementations NeurIPS 2012 Yuchen Zhang, Martin J. Wainwright, John C. Duchi

The first algorithm is an averaging method that distributes the $N$ data samples evenly to $m$ machines, performs separate minimization on each subset, and then averages the estimates.

Learning-To-Rank
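
A schematic of the averaging method described above (my own toy illustration with a least-squares local solver; the paper's analysis and its bias-corrected variant are not shown):

```python
import numpy as np

def average_mixture(X, y, m, local_fit, rng=None):
    """Split the N samples evenly across m machines, fit locally, average the estimates."""
    rng = rng or np.random.default_rng(0)
    parts = np.array_split(rng.permutation(len(y)), m)
    return np.mean([local_fit(X[p], y[p]) for p in parts], axis=0)

# toy usage: distributed least squares
ls = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ np.arange(5.0) + 0.1 * rng.normal(size=1000)
w_hat = average_mixture(X, y, m=10, local_fit=ls)
```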

Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates

no code implementations 22 May 2013 Yuchen Zhang, John C. Duchi, Martin J. Wainwright

We establish optimal convergence rates for a decomposition-based scalable approach to kernel ridge regression.

regression

Optimal rates for zero-order convex optimization: the power of two function evaluations

no code implementations 7 Dec 2013 John C. Duchi, Michael I. Jordan, Martin J. Wainwright, Andre Wibisono

We consider derivative-free algorithms for stochastic and non-stochastic convex optimization problems that use only function values rather than gradients.
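
The two-evaluation gradient estimator at the heart of such methods can be sketched in a few lines (illustrative smoothing parameter and stepsize, not the paper's code):

```python
import numpy as np

def two_point_grad(f, x, delta, rng):
    """Gradient estimate from two function values; its expectation approaches
    grad f(x) as delta -> 0 when u is uniform on the unit sphere."""
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

# toy usage: zero-order gradient descent on a quadratic
rng = np.random.default_rng(0)
f = lambda z: np.sum((z - 1.0) ** 2)
x = np.zeros(10)
for _ in range(2000):
    x -= 0.05 * two_point_grad(f, x, delta=1e-4, rng=rng)
```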

Optimality guarantees for distributed statistical estimation

no code implementations 5 May 2014 John C. Duchi, Michael I. Jordan, Martin J. Wainwright, Yuchen Zhang

Large data sets often require performing distributed statistical estimation, with a full data set split across multiple machines and limited communication between machines.

Asynchronous stochastic convex optimization

1 code implementation 4 Aug 2015 John C. Duchi, Sorathan Chaturapruek, Christopher Ré

We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures.

Stochastic Optimization

Asynchronous stochastic convex optimization: the noise is in the noise and SGD don't care

no code implementations NeurIPS 2015 Sorathan Chaturapruek, John C. Duchi, Christopher Ré

We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures.

Stochastic Optimization

Stochastic Gradient Methods for Distributionally Robust Optimization with f-divergences

no code implementations NeurIPS 2016 Hongseok Namkoong, John C. Duchi

We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance.

Learning Kernels with Random Features

1 code implementation NeurIPS 2016 Aman Sinha, John C. Duchi

We extend the randomized-feature approach to the task of learning a kernel (via its associated random features).

Generalization Bounds
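
For context, the standard random Fourier feature map that the paper builds on looks as follows (a sketch with illustrative parameters; the paper's contribution, optimizing over the random features to learn the kernel, is not shown):

```python
import numpy as np

def rff(X, W, b):
    """Random Fourier features: z(x) = sqrt(2/D) cos(Wx + b) approximates an RBF kernel."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, D, gamma = 5, 500, 1.0
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))  # frequencies for exp(-gamma ||x - y||^2)
b = rng.uniform(0.0, 2 * np.pi, size=D)
X = rng.normal(size=(50, d))
K_approx = rff(X, W, b) @ rff(X, W, b).T               # approximate Gram matrix
```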

Adaptive Sampling Probabilities for Non-Smooth Optimization

no code implementations ICML 2017 Hongseok Namkoong, Aman Sinha, Steve Yadlowsky, John C. Duchi

Standard forms of coordinate and stochastic gradient methods do not adapt to structure in data; their good behavior under random sampling is predicated on uniformity in data.

“Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions

no code implementations ICML 2017 Yair Carmon, John C. Duchi, Oliver Hinder, Aaron Sidford

We develop and analyze a variant of Nesterov’s accelerated gradient descent (AGD) for minimization of smooth non-convex functions.

Mean Estimation from Adaptive One-bit Measurements

no code implementations 2 Aug 2017 Alon Kipnis, John C. Duchi

We study the squared error risk in this estimation as a function of the number of samples and one-bit measurements $n$.

Statistics Theory

Unsupervised Transformation Learning via Convex Relaxations

1 code implementation NeurIPS 2017 Tatsunori B. Hashimoto, John C. Duchi, Percy Liang

Our goal is to extract meaningful transformations from raw images, such as varying the thickness of lines in handwriting or the lighting in a portrait.

Derivative free optimization via repeated classification

1 code implementation 11 Apr 2018 Tatsunori B. Hashimoto, Steve Yadlowsky, John C. Duchi

We develop an algorithm for minimizing a function using $n$ batched function value measurements at each of $T$ rounds by using classifiers to identify a function's sublevel set.

Active Learning, Classification +1
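
A loose sketch of the round structure described above (my own simplification, not the authors' algorithm: label the below-median evaluations, fit a classifier, and draw the next center from points the classifier places in the estimated sublevel set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_to_minimize(f, x0, n=50, T=20, radius=2.0, seed=0):
    """Minimize f using only batched function values and a per-round classifier."""
    rng = np.random.default_rng(seed)
    center, best = np.asarray(x0, float), np.inf
    for _ in range(T):
        X = center + radius * rng.normal(size=(n, center.size))
        vals = np.array([f(x) for x in X])
        best = min(best, vals.min())
        labels = (vals <= np.median(vals)).astype(int)   # 1 = below-median ("sublevel") points
        clf = LogisticRegression().fit(X, labels)
        cand = center + radius * rng.normal(size=(10 * n, center.size))
        keep = cand[clf.predict(cand) == 1]              # candidates predicted to be in the sublevel set
        if len(keep):
            center = keep.mean(axis=0)
        radius *= 0.9
    return best

best = classify_to_minimize(lambda z: np.sum(z ** 2), x0=np.ones(3))
```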

Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity

no code implementations 12 Oct 2018 Hilal Asi, John C. Duchi

We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods.
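
For intuition, the simplest nontrivial member of the family, the truncated-model step for a nonnegative loss, has a closed form: an SGD step whose length is capped so the linearized model never predicts a negative loss (a sketch with an illustrative numerical guard, not the paper's code):

```python
import numpy as np

def aprox_truncated_step(x, loss_val, grad, alpha):
    """aProx step for the model max{f(x) + <g, y - x>, 0}: a gradient step of
    length min(alpha, f(x)/||g||^2), i.e. SGD with a Polyak-type cap."""
    step = min(alpha, loss_val / (np.dot(grad, grad) + 1e-12))
    return x - step * grad
```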

Analysis of Krylov Subspace Solutions of Regularized Non-Convex Quadratic Problems

no code implementations NeurIPS 2018 Yair Carmon, John C. Duchi

We provide convergence rates for Krylov subspace solutions to the trust-region and cubic-regularized (nonconvex) quadratic problems.

Mean Estimation from One-Bit Measurements

no code implementations 10 Jan 2019 Alon Kipnis, John C. Duchi

We consider the problem of estimating the mean of a symmetric log-concave distribution under the constraint that only a single bit per sample from this distribution is available to the estimator.

Quantization

A Rank-1 Sketch for Matrix Multiplicative Weights

no code implementations 7 Mar 2019 Yair Carmon, John C. Duchi, Aaron Sidford, Kevin Tian

We show that a simple randomized sketch of the matrix multiplicative weight (MMW) update enjoys (in expectation) the same regret bounds as MMW, up to a small constant factor.

The importance of better models in stochastic optimization

1 code implementation 20 Mar 2019 Hilal Asi, John C. Duchi

Standard stochastic optimization methods are brittle, sensitive to stepsize choices and other algorithmic parameters, and they exhibit instability outside of well-behaved families of objectives.

Stochastic Optimization

Unlabeled Data Improves Adversarial Robustness

4 code implementations NeurIPS 2019 Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, John C. Duchi

We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semisupervised learning.

Adversarial Robustness, Robust classification

Adversarial Training Can Hurt Generalization

no code implementations ICML 2019 Deep Phenomena Workshop Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang

While adversarial training can improve robust accuracy (against an adversary), it sometimes hurts standard accuracy (when there is no adversary).

Necessary and Sufficient Geometries for Gradient Methods

no code implementations NeurIPS 2019 Daniel Levy, John C. Duchi

We study the impact of the constraint set and gradient geometry on the convergence of online and stochastic methods for convex optimization, providing a characterization of the geometries for which stochastic gradient and adaptive gradient methods are (minimax) optimal.

When Covariate-shifted Data Augmentation Increases Test Error And How to Fix It

no code implementations 25 Sep 2019 Sang Michael Xie*, Aditi Raghunathan*, Fanny Yang, John C. Duchi, Percy Liang

Empirically, data augmentation sometimes improves and sometimes hurts test error, even when only adding points with labels from the true conditional distribution that the hypothesis class is expressive enough to fit.

Data Augmentation, regression

Lower Bounds for Non-Convex Stochastic Optimization

no code implementations 5 Dec 2019 Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth

We lower bound the complexity of finding $\epsilon$-stationary points (with gradient norm at most $\epsilon$) using stochastic first-order methods.

Stochastic Optimization

Near Instance-Optimality in Differential Privacy

no code implementations 16 May 2020 Hilal Asi, John C. Duchi

We develop two notions of instance optimality in differential privacy, inspired by classical statistical theory: one by defining a local minimax risk, the other by considering unbiased mechanisms in analogy with the Cramér-Rao bound. We show that the local modulus of continuity of the estimand of interest completely determines these quantities.

regression

Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations

no code implementations 24 Jun 2020 Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan

We design an algorithm which finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\|\le \epsilon$) using $O(\epsilon^{-3})$ stochastic gradient and Hessian-vector products, matching guarantees that were previously available only under a stronger assumption of access to multiple queries with the same random seed.

Second-order methods, Stochastic Optimization

Robust Validation: Confident Predictions Even When Distributions Shift

no code implementations 10 Aug 2020 Maxime Cauchois, Suyash Gupta, Alnur Ali, John C. Duchi

One strategy -- coming from robust statistics and optimization -- is thus to build a model robust to distributional perturbations.

Large-Scale Methods for Distributionally Robust Optimization

1 code implementation NeurIPS 2020 Daniel Levy, Yair Carmon, John C. Duchi, Aaron Sidford

We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and $\chi^2$ divergence uncertainty sets.
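
On a minibatch, the CVaR objective and a subgradient take a particularly simple form, which is roughly what a stochastic method for this problem works with (a sketch; the $\chi^2$ case and the paper's efficient algorithms are not shown):

```python
import numpy as np

def cvar(losses, alpha):
    """CVaR at level alpha: the average of the worst alpha-fraction of the losses."""
    k = max(1, int(np.ceil(alpha * losses.size)))
    return np.sort(losses)[-k:].mean()

def cvar_subgradient(per_example_grads, losses, alpha):
    """A subgradient of minibatch CVaR: average the gradients of the worst-k examples."""
    k = max(1, int(np.ceil(alpha * losses.size)))
    worst = np.argsort(losses)[-k:]
    return per_example_grads[worst].mean(axis=0)
```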

Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms

no code implementations NeurIPS 2020 Hilal Asi, John C. Duchi

We study and provide instance-optimal algorithms in differential privacy by extending and approximating the inverse sensitivity mechanism.

Minibatch Stochastic Approximate Proximal Point Methods

no code implementations NeurIPS 2020 Hilal Asi, Karan Chadha, Gary Cheng, John C. Duchi

In contrast to standard stochastic gradient methods, these methods may have linear speedup in the minibatch setting even for non-smooth functions.

Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization

no code implementations 7 Jan 2021 Karan Chadha, Gary Cheng, John C. Duchi

We extend the Approximate-Proximal Point (aProx) family of model-based methods for solving stochastic convex optimization problems, including stochastic subgradient, proximal point, and bundle methods, to the minibatch and accelerated setting.

Stochastic Optimization

The Lifecycle of a Statistical Model: Model Failure Detection, Identification, and Refitting

no code implementations 8 Feb 2022 Alnur Ali, Maxime Cauchois, John C. Duchi

The statistical machine learning community has demonstrated considerable resourcefulness over the years in developing highly expressive tools for estimation, prediction, and inference.

Time Series, Time Series Analysis

PPI++: Efficient Prediction-Powered Inference

1 code implementation 2 Nov 2023 Anastasios N. Angelopoulos, John C. Duchi, Tijana Zrnic

We present PPI++: a computationally lightweight methodology for estimation and inference based on a small labeled dataset and a typically much larger dataset of machine-learning predictions.
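
The basic prediction-powered point estimate of a mean is easy to write down (a sketch; lam = 1 gives plain PPI, and PPI++'s contribution, tuning lam and building the corresponding confidence intervals, is not shown):

```python
import numpy as np

def ppi_mean(y_lab, pred_lab, pred_unlab, lam=1.0):
    """Prediction-powered estimate of E[Y]: model predictions on the large unlabeled
    set, debiased by residuals on the small labeled set. Unbiased for any fixed lam."""
    return lam * np.mean(pred_unlab) + np.mean(y_lab - lam * pred_lab)
```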

Predictive Inference in Multi-environment Scenarios

no code implementations 25 Mar 2024 John C. Duchi, Suyash Gupta, Kuanhao Jiang, Pragya Sur

We address the challenge of constructing valid confidence intervals and sets in problems of prediction across multiple environments.
