Search Results for author: Martin J. Wainwright

Found 89 papers, 12 papers with code

Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning

no code implementations 19 Aug 2021 Andrea Zanette, Martin J. Wainwright, Emma Brunskill

Actor-critic methods are widely used in offline reinforcement learning practice, but are not so well-understood theoretically.

Near-optimal inference in adaptive linear regression

no code implementations 5 Jul 2021 Koulik Khamaru, Yash Deshpande, Lester Mackey, Martin J. Wainwright

We establish an asymptotic normality property for our proposed online debiasing estimator under mild conditions on the data collection process, and provide asymptotically exact confidence intervals.

Active Learning · Time Series

Instance-optimality in optimal value estimation: Adaptivity via variance-reduced Q-learning

no code implementations 28 Jun 2021 Koulik Khamaru, Eric Xia, Martin J. Wainwright, Michael I. Jordan

Various algorithms in reinforcement learning exhibit dramatic variability in their convergence rates and ultimate accuracy as a function of the problem structure.

Q-Learning

Preference learning along multiple criteria: A game-theoretic perspective

no code implementations NeurIPS 2020 Kush Bhatia, Ashwin Pananjady, Peter L. Bartlett, Anca D. Dragan, Martin J. Wainwright

Finally, we showcase the practical utility of our framework in a user study on autonomous driving, where we find that the Blackwell winner outperforms the von Neumann winner for the overall preferences.

Autonomous Driving

Minimax Off-Policy Evaluation for Multi-Armed Bandits

no code implementations 19 Jan 2021 Cong Ma, Banghua Zhu, Jiantao Jiao, Martin J. Wainwright

Second, when the behavior policy is unknown, we analyze performance in terms of the competitive ratio, thereby revealing a fundamental gap between the settings of known and unknown behavior policies.

Multi-Armed Bandits

Optimal oracle inequalities for solving projected fixed-point equations

no code implementations 9 Dec 2020 Wenlong Mou, Ashwin Pananjady, Martin J. Wainwright

Linear fixed point equations in Hilbert spaces arise in a variety of settings, including reinforcement learning, and computational methods for solving differential and integral equations.

ROOT-SGD: Sharp Nonasymptotics and Asymptotic Efficiency in a Single Algorithm

no code implementations 28 Aug 2020 Chris Junchi Li, Wenlong Mou, Martin J. Wainwright, Michael I. Jordan

The theory and practice of stochastic optimization has focused on stochastic gradient descent (SGD) in recent years, retaining the basic first-order stochastic nature of SGD while aiming to improve it via mechanisms such as averaging, momentum, and variance reduction.

Stochastic Optimization

Revisiting complexity and the bias-variance tradeoff

1 code implementation 17 Jun 2020 Raaz Dwivedi, Chandan Singh, Bin Yu, Martin J. Wainwright

We derive closed-form expressions for MDL-COMP and show that for a dataset with $n$ observations and $d$ parameters it is \emph{not always} equal to $d/n$, and is a function of the singular values of the design matrix and the signal-to-noise ratio.

Instability, Computational Efficiency and Statistical Accuracy

no code implementations 22 May 2020 Nhat Ho, Koulik Khamaru, Raaz Dwivedi, Martin J. Wainwright, Michael I. Jordan, Bin Yu

Many statistical estimators are defined as the fixed point of a data-dependent operator, with estimators based on minimizing a cost function being an important special case.

FedSplit: An algorithmic framework for fast federated optimization

no code implementations NeurIPS 2020 Reese Pathak, Martin J. Wainwright

Motivated by federated learning, we consider the hub-and-spoke model of distributed optimization in which a central authority coordinates the computation of a solution among many agents while limiting communication.

Distributed Optimization · Federated Learning

Lower bounds in multiple testing: A framework based on derandomized proxies

no code implementations 7 May 2020 Max Rabinovich, Michael I. Jordan, Martin J. Wainwright

A line of more recent work in multiple testing has begun to investigate the tradeoffs between the FDR and FNR and to provide lower bounds on the performance of procedures that depend on the model structure.

On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration

no code implementations 9 Apr 2020 Wenlong Mou, Chris Junchi Li, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan

When the matrix $\bar{A}$ is Hurwitz, we prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
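The averaged-iterate scheme behind this result can be sketched in a toy setting (this is only an illustration, not the paper's analysis; the matrix, noise level, and step size below are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stochastic approximation: solve A_bar @ theta = b from noisy
# observations A_t, b_t, with a FIXED step size, then Polyak-Ruppert average.
A_bar = np.array([[2.0, 0.5], [0.5, 1.5]])   # eigenvalues have positive real part
b = np.array([1.0, -1.0])
theta_star = np.linalg.solve(A_bar, b)       # target fixed point

eta = 0.05                                   # fixed step size
theta = np.zeros(2)
running_sum = np.zeros(2)
T = 20000
for t in range(T):
    A_t = A_bar + 0.1 * rng.standard_normal((2, 2))   # noisy observation of A_bar
    b_t = b + 0.1 * rng.standard_normal(2)            # noisy observation of b
    theta = theta - eta * (A_t @ theta - b_t)         # raw fixed-step iterate
    running_sum += theta
theta_avg = running_sum / T                           # Polyak-Ruppert average

print(np.linalg.norm(theta_avg - theta_star))
```

The raw iterate keeps fluctuating at a scale set by the step size, while the average concentrates around the fixed point, which is the regime the CLT above describes.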

Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis

no code implementations 16 Mar 2020 Koulik Khamaru, Ashwin Pananjady, Feng Ruan, Martin J. Wainwright, Michael I. Jordan

We address the problem of policy evaluation in discounted Markov decision processes, and provide instance-dependent guarantees on the $\ell_\infty$-error under a generative model.

Sampling for Bayesian Mixture Models: MCMC with Polynomial-Time Mixing

no code implementations 11 Dec 2019 Wenlong Mou, Nhat Ho, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan

We study the problem of sampling from the power posterior distribution in Bayesian Gaussian mixture models, a robust version of the classical posterior.

An Efficient Sampling Algorithm for Non-smooth Composite Potentials

no code implementations 1 Oct 2019 Wenlong Mou, Nicolas Flammarion, Martin J. Wainwright, Peter L. Bartlett

We consider the problem of sampling from a density of the form $p(x) \propto \exp(-f(x)- g(x))$, where $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is a smooth and strongly convex function and $g: \mathbb{R}^d \rightarrow \mathbb{R}$ is a convex and Lipschitz function.
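To make the target concrete: a minimal sketch of such a composite density, sampled here with a plain random-walk Metropolis chain (not the paper's algorithm; the specific $f$, $g$, and proposal scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Composite potential p(x) ∝ exp(-f(x) - g(x)) with a toy 1-D choice:
def f(x):
    return 0.5 * x**2          # smooth and strongly convex

def g(x):
    return abs(x)              # convex, 1-Lipschitz, non-smooth at 0

def potential(x):
    return f(x) + g(x)

# Generic random-walk Metropolis targeting p (only needs potential values).
x, samples = 0.0, []
for _ in range(50000):
    prop = x + 0.8 * rng.standard_normal()
    if np.log(rng.uniform()) < potential(x) - potential(prop):
        x = prop               # accept the move
    samples.append(x)
samples = np.array(samples[5000:])   # discard burn-in
print(samples.mean())
```

The density is symmetric about zero, so the empirical mean of the chain should be close to 0.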

Instance-dependent $\ell_\infty$-bounds for policy evaluation in tabular reinforcement learning

no code implementations 19 Sep 2019 Ashwin Pananjady, Martin J. Wainwright

Markov reward processes (MRPs) are used to model stochastic phenomena arising in operations research, control engineering, robotics, and artificial intelligence, as well as communication and transportation networks.

High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm

no code implementations 28 Aug 2019 Wenlong Mou, Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan

We propose a Markov chain Monte Carlo (MCMC) algorithm based on third-order Langevin dynamics for sampling from distributions with log-concave and smooth densities.

Variance-reduced $Q$-learning is minimax optimal

no code implementations 11 Jun 2019 Martin J. Wainwright

We introduce and analyze a form of variance-reduced $Q$-learning.

Q-Learning

Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients

1 code implementation 29 May 2019 Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu

This bound gives a precise quantification of the faster convergence of Metropolized HMC relative to simpler MCMC algorithms such as the Metropolized random walk, or Metropolized Langevin algorithm.

Stochastic approximation with cone-contractive operators: Sharp $\ell_\infty$-bounds for $Q$-learning

no code implementations 15 May 2019 Martin J. Wainwright

Motivated by the study of $Q$-learning algorithms in reinforcement learning, we study a class of stochastic approximation procedures based on operators that satisfy monotonicity and quasi-contractivity conditions with respect to an underlying cone.

Q-Learning

HopSkipJumpAttack: A Query-Efficient Decision-Based Attack

3 code implementations 3 Apr 2019 Jianbo Chen, Michael I. Jordan, Martin J. Wainwright

We develop HopSkipJumpAttack, a family of algorithms based on a novel estimate of the gradient direction using binary information at the decision boundary.

Adversarial Attack

Sharp Analysis of Expectation-Maximization for Weakly Identifiable Models

no code implementations 1 Feb 2019 Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan, Bin Yu

We study a class of weakly identifiable location-scale mixture models for which the maximum likelihood estimates based on $n$ i.i.d.

Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems

no code implementations 20 Dec 2018 Dhruv Malik, Ashwin Pananjady, Kush Bhatia, Koulik Khamaru, Peter L. Bartlett, Martin J. Wainwright

We focus on characterizing the convergence rate of these methods when applied to linear-quadratic systems, and study various settings of driving noise and reward feedback.

Theoretical guarantees for EM under misspecified Gaussian mixture models

no code implementations NeurIPS 2018 Raaz Dwivedi, Nhật Hồ, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan

We provide two classes of theoretical guarantees: first, we characterize the bias introduced due to the misspecification; and second, we prove that population EM converges at a geometric rate to the model projection under a suitable initialization condition.

Singularity, Misspecification, and the Convergence Rate of EM

no code implementations 1 Oct 2018 Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Michael I. Jordan, Martin J. Wainwright, Bin Yu

A line of recent work has analyzed the behavior of the Expectation-Maximization (EM) algorithm in the well-specified setting, in which the population likelihood is locally strongly concave around its maximizing argument.

Towards Optimal Estimation of Bivariate Isotonic Matrices with Unknown Permutations

no code implementations 25 Jun 2018 Cheng Mao, Ashwin Pananjady, Martin J. Wainwright

Many applications, including rank aggregation, crowd-labeling, and graphon estimation, can be modeled in terms of a bivariate isotonic matrix with unknown permutations acting on its rows and/or columns.

Graphon Estimation

Convergence guarantees for a class of non-convex and non-smooth optimization problems

no code implementations ICML 2018 Koulik Khamaru, Martin J. Wainwright

We also show that our algorithms can escape strict saddle points for a class of non-smooth functions, thereby generalizing known results for smooth functions.

Density Estimation

Breaking the $1/\sqrt{n}$ Barrier: Faster Rates for Permutation-based Models in Polynomial Time

no code implementations 27 Feb 2018 Cheng Mao, Ashwin Pananjady, Martin J. Wainwright

Many applications, including rank aggregation and crowd-labeling, can be modeled in terms of a bivariate isotonic matrix with unknown permutations acting on its rows and columns.

Log-concave sampling: Metropolis-Hastings algorithms are fast

1 code implementation 8 Jan 2018 Raaz Dwivedi, Yuansi Chen, Martin J. Wainwright, Bin Yu

Relative to known guarantees for the unadjusted Langevin algorithm (ULA), our bounds show that the use of an accept-reject step in MALA leads to an exponentially improved dependence on the error-tolerance.
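The accept-reject step that distinguishes MALA from ULA can be sketched on a standard Gaussian target (a toy instance of the log-concave setting; the step size and chain length are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# MALA on a 1-D standard Gaussian: Langevin proposal + Metropolis correction.
def log_p(x):
    return -0.5 * x**2          # log-density up to a constant

def grad_log_p(x):
    return -x

h = 0.5                          # step size

def log_q(x_to, x_from):
    # Log-density of the Langevin proposal N(x_from + h*grad, 2h) at x_to
    mean = x_from + h * grad_log_p(x_from)
    return -((x_to - mean) ** 2) / (4 * h)

x, samples = 3.0, []
for _ in range(20000):
    prop = x + h * grad_log_p(x) + np.sqrt(2 * h) * rng.standard_normal()
    # The accept-reject step that ULA omits:
    log_accept = (log_p(prop) + log_q(x, prop)) - (log_p(x) + log_q(prop, x))
    if np.log(rng.uniform()) < log_accept:
        x = prop
    samples.append(x)
samples = np.array(samples[2000:])
print(samples.mean(), samples.var())
```

Dropping the `log_accept` test recovers ULA, which is biased at any fixed step size; with the correction, the chain targets the Gaussian exactly.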

Approximate Ranking from Pairwise Comparisons

no code implementations 4 Jan 2018 Reinhard Heckel, Max Simchowitz, Kannan Ramchandran, Martin J. Wainwright

Accordingly, we study the problem of finding approximate rankings from pairwise comparisons.

Fast MCMC sampling algorithms on polytopes

2 code implementations 23 Oct 2017 Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, Bin Yu

We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and the John walk, for generating samples from the uniform distribution over a polytope.

Online control of the false discovery rate with decaying memory

1 code implementation NeurIPS 2017 Aaditya Ramdas, Fanny Yang, Martin J. Wainwright, Michael I. Jordan

In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed.


DAGGER: A sequential algorithm for FDR control on DAGs

1 code implementation 29 Sep 2017 Aaditya Ramdas, Jianbo Chen, Martin J. Wainwright, Michael I. Jordan

We propose a linear-time, single-pass, top-down algorithm for multiple testing on directed acyclic graphs (DAGs), where nodes represent hypotheses and edges specify a partial ordering in which hypotheses must be tested.

Model Selection

Low Permutation-rank Matrices: Structural Properties and Noisy Completion

no code implementations 1 Sep 2017 Nihar B. Shah, Sivaraman Balakrishnan, Martin J. Wainwright

We consider the problem of noisy matrix completion, in which the goal is to reconstruct a structured matrix whose entries are partially observed in noise.

Matrix Completion

Worst-case vs Average-case Design for Estimation from Fixed Pairwise Comparisons

no code implementations 19 Jul 2017 Ashwin Pananjady, Cheng Mao, Vidya Muthukumar, Martin J. Wainwright, Thomas A. Courtade

We show that when the assignment of items to the topology is arbitrary, these permutation-based models, unlike their parametric counterparts, do not admit consistent estimation for most comparison topologies used in practice.

Early stopping for kernel boosting algorithms: A general analysis with localized complexities

no code implementations NeurIPS 2017 Yuting Wei, Fanny Yang, Martin J. Wainwright

Early stopping of iterative algorithms is a widely-used form of regularization in statistics, commonly used in conjunction with boosting and related gradient-type algorithms.

Kernel Feature Selection via Conditional Covariance Minimization

1 code implementation NeurIPS 2017 Jianbo Chen, Mitchell Stern, Martin J. Wainwright, Michael I. Jordan

We propose a method for feature selection that employs kernel-based measures of independence to find a subset of covariates that is maximally predictive of the response.

Dimensionality Reduction · Feature Selection

A framework for Multi-A(rmed)/B(andit) testing with online FDR control

1 code implementation NeurIPS 2017 Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright

We propose an alternative framework to existing setups for controlling false alarms when multiple A/B tests are run over time.

Denoising Linear Models with Permuted Data

no code implementations 24 Apr 2017 Ashwin Pananjady, Martin J. Wainwright, Thomas A. Courtade

The multivariate linear regression model with shuffled data and additive Gaussian noise arises in various correspondence estimation and matching problems.

Denoising

A unified treatment of multiple testing with prior knowledge using the p-filter

no code implementations 18 Mar 2017 Aaditya Ramdas, Rina Foygel Barber, Martin J. Wainwright, Michael I. Jordan

There is a significant literature on methods for incorporating knowledge into multiple testing procedures so as to improve their power and precision.

Convexified Convolutional Neural Networks

1 code implementation ICML 2017 Yuchen Zhang, Percy Liang, Martin J. Wainwright

For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN.

Denoising

Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences

no code implementations NeurIPS 2016 Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael Jordan

Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians.

Linear Regression with an Unknown Permutation: Statistical and Computational Limits

no code implementations 9 Aug 2016 Ashwin Pananjady, Martin J. Wainwright, Thomas A. Courtade

Consider a noisy linear observation model with an unknown permutation, based on observing $y = \Pi^* A x^* + w$, where $x^* \in \mathbb{R}^d$ is an unknown vector, $\Pi^*$ is an unknown $n \times n$ permutation matrix, and $w \in \mathbb{R}^n$ is additive Gaussian noise.
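The observation model $y = \Pi^* A x^* + w$ can be simulated directly (the dimensions and noise level below are hypothetical, chosen only to make the model concrete):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate y = Pi* A x* + w: unknown vector x*, unknown permutation Pi*,
# Gaussian design A, additive Gaussian noise w.
n, d, sigma = 8, 3, 0.1
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
perm = rng.permutation(n)
Pi_star = np.eye(n)[perm]              # unknown n x n permutation matrix
w = sigma * rng.standard_normal(n)     # additive Gaussian noise
y = Pi_star @ (A @ x_star) + w

# Pi* only reorders the noiseless responses:
print(np.allclose(y - w, (A @ x_star)[perm]))
```

The statistical difficulty comes entirely from the unknown row order: with $\Pi^*$ known, this is ordinary linear regression.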

A Permutation-based Model for Crowd Labeling: Optimal Estimation and Robustness

no code implementations 30 Jun 2016 Nihar B. Shah, Sivaraman Balakrishnan, Martin J. Wainwright

The task of aggregating and denoising crowd-labeled data has gained increased significance with the advent of crowdsourcing platforms and massive datasets.

Denoising

Active Ranking from Pairwise Comparisons and when Parametric Assumptions Don't Help

no code implementations 28 Jun 2016 Reinhard Heckel, Nihar B. Shah, Kannan Ramchandran, Martin J. Wainwright

We first analyze a sequential ranking algorithm that counts the number of comparisons won, and uses these counts to decide whether to stop, or to compare another pair of items, chosen based on confidence intervals specified by the data collected up to that point.
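The win-counting idea can be sketched in a few lines (a simplified version without the stopping rule or confidence intervals; the item "skills" and comparison model below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Rank items by the number of pairwise comparisons won.
n_items = 5
p = np.array([0.9, 0.75, 0.6, 0.45, 0.3])   # hypothetical item skills

wins = np.zeros(n_items)
for _ in range(2000):
    i, j = rng.choice(n_items, size=2, replace=False)   # random pair
    # item i beats item j with probability p[i] / (p[i] + p[j])
    if rng.uniform() < p[i] / (p[i] + p[j]):
        wins[i] += 1
    else:
        wins[j] += 1

ranking = np.argsort(-wins)    # items sorted by comparisons won
print(ranking)
```

With enough comparisons the win counts separate items whose skills differ, which is what the sequential algorithm's confidence intervals quantify.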

Function-Specific Mixing Times and Concentration Away from Equilibrium

no code implementations 6 May 2016 Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, Martin J. Wainwright

These results show that it is possible for empirical expectations of functions to concentrate long before the underlying chain has mixed in the classical sense, and we show that the concentration rates we achieve are optimal up to constants.

On kernel methods for covariates that are rankings

no code implementations 25 Mar 2016 Horia Mania, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan, Benjamin Recht

This paper studies the use of reproducing kernel Hilbert space methods for learning from permutation-valued features.

Feeling the Bern: Adaptive Estimators for Bernoulli Probabilities of Pairwise Comparisons

no code implementations 22 Mar 2016 Nihar B. Shah, Sivaraman Balakrishnan, Martin J. Wainwright

Second, we show that a regularized least squares estimator can achieve a poly-logarithmic adaptivity index, thereby demonstrating a $\sqrt{n}$-gap between optimal and computationally achievable adaptivity.

Asymptotic behavior of $\ell_p$-based Laplacian regularization in semi-supervised learning

no code implementations 2 Mar 2016 Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan

Together, these properties show that $p = d+1$ is an optimal choice, yielding a function estimate $\hat{f}$ that is both smooth and non-degenerate, while remaining maximally sensitive to $P$.

Simple, Robust and Optimal Ranking from Pairwise Comparisons

no code implementations 30 Dec 2015 Nihar B. Shah, Martin J. Wainwright

We consider data in the form of pairwise comparisons of n items, with the goal of precisely identifying the top k items for some value of k < n, or alternatively, recovering a ranking of all the items.

Statistical and Computational Guarantees for the Baum-Welch Algorithm

no code implementations 27 Dec 2015 Fanny Yang, Sivaraman Balakrishnan, Martin J. Wainwright

By exploiting this characterization, we provide non-asymptotic finite sample guarantees on the Baum-Welch updates, guaranteeing geometric convergence to a small ball of radius on the order of the minimax rate around a global optimum.

Speech Recognition · Time Series

Learning Halfspaces and Neural Networks with Random Initialization

no code implementations 25 Nov 2015 Yuchen Zhang, Jason D. Lee, Martin J. Wainwright, Michael I. Jordan

For loss functions that are $L$-Lipschitz continuous, we present algorithms to learn halfspaces and multi-layer neural networks that achieve arbitrarily small excess risk $\epsilon>0$.

Stochastically Transitive Models for Pairwise Comparisons: Statistical and Computational Issues

no code implementations 19 Oct 2015 Nihar B. Shah, Sivaraman Balakrishnan, Adityanand Guntuboyina, Martin J. Wainwright

On the other hand, unlike in the BTL and Thurstone models, computing the minimax-optimal estimator in the stochastically transitive model is non-trivial, and we explore various computationally tractable alternatives.

Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees

no code implementations 10 Sep 2015 Yudong Chen, Martin J. Wainwright

We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution.

Graph Clustering · Matrix Completion

On the Computational Complexity of High-Dimensional Bayesian Variable Selection

no code implementations 29 May 2015 Yun Yang, Martin J. Wainwright, Michael I. Jordan

We study the computational complexity of Markov chain Monte Carlo (MCMC) methods for high-dimensional Bayesian linear regression under sparsity constraints.

Variable Selection

Newton Sketch: A Linear-time Optimization Algorithm with Linear-Quadratic Convergence

no code implementations 9 May 2015 Mert Pilanci, Martin J. Wainwright

We also describe extensions of our methods to programs involving convex constraints that are equipped with self-concordant barriers.

Estimation from Pairwise Comparisons: Sharp Minimax Bounds with Topology Dependence

no code implementations 6 May 2015 Nihar B. Shah, Sivaraman Balakrishnan, Joseph Bradley, Abhay Parekh, Kannan Ramchandran, Martin J. Wainwright

Data in the form of pairwise comparisons arises in many domains, including preference elicitation, sporting competitions, and peer grading among others.

Distributed Estimation of Generalized Matrix Rank: Efficient Algorithms and Lower Bounds

no code implementations 5 Feb 2015 Yuchen Zhang, Martin J. Wainwright, Michael I. Jordan

We study the following generalized matrix rank estimation problem: given an $n \times n$ matrix and a constant $c \geq 0$, estimate the number of eigenvalues that are greater than $c$.

Randomized sketches for kernels: Fast and optimal non-parametric regression

no code implementations 25 Jan 2015 Yun Yang, Mert Pilanci, Martin J. Wainwright

Kernel ridge regression (KRR) is a standard method for performing non-parametric regression over reproducing kernel Hilbert spaces.

Support recovery without incoherence: A case for nonconvex regularization

no code implementations 17 Dec 2014 Po-Ling Loh, Martin J. Wainwright

We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and $\ell_\infty$-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex.

Variable Selection

Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares

no code implementations 3 Nov 2014 Mert Pilanci, Martin J. Wainwright

We study randomized sketching methods for approximately solving least-squares problem with a general convex constraint.

Statistical guarantees for the EM algorithm: From population to sample-based analysis

no code implementations 9 Aug 2014 Sivaraman Balakrishnan, Martin J. Wainwright, Bin Yu

Leveraging this characterization, we then provide non-asymptotic guarantees on the EM and gradient EM algorithms when applied to a finite set of samples.
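A standard toy model for this kind of analysis is the symmetric two-component Gaussian mixture, where sample-based EM has a one-line update (an illustration of the setting, not the paper's general result; the true mean, sample size, and initialization are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Data from 0.5*N(theta*, 1) + 0.5*N(-theta*, 1) with known unit variances.
theta_star = 2.0
n = 5000
signs = rng.choice([-1.0, 1.0], size=n)
x = signs * theta_star + rng.standard_normal(n)

theta = 0.5                    # initialization with the correct sign
for _ in range(50):
    # E-step: posterior weight of the +theta component gives
    # E[sign | x] = tanh(theta * x); M-step: re-estimate the mean.
    theta = np.mean(np.tanh(theta * x) * x)
print(theta)
```

For well-separated components the iterates contract geometrically toward a point within the statistical error of the true mean, which is the population-to-sample argument the abstract refers to.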

Optimality guarantees for distributed statistical estimation

no code implementations 5 May 2014 John C. Duchi, Michael I. Jordan, Martin J. Wainwright, Yuchen Zhang

Large data sets often require performing distributed statistical estimation, with a full data set split across multiple machines and limited communication between machines.

Randomized Sketches of Convex Programs with Sharp Guarantees

no code implementations 29 Apr 2014 Mert Pilanci, Martin J. Wainwright

We analyze RP-based approximations of convex programs, in which the original optimization problem is approximated by the solution of a lower-dimensional problem.

Dimensionality Reduction

The geometry of kernelized spectral clustering

no code implementations 29 Apr 2014 Geoffrey Schiebinger, Martin J. Wainwright, Bin Yu

As a corollary we control the fraction of samples mislabeled by spectral clustering under finite mixtures with nonparametric components.

Optimal rates for zero-order convex optimization: the power of two function evaluations

no code implementations 7 Dec 2013 John C. Duchi, Michael I. Jordan, Martin J. Wainwright, Andre Wibisono

We consider derivative-free algorithms for stochastic and non-stochastic convex optimization problems that use only function values rather than gradients.

Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation

no code implementations NeurIPS 2013 John Duchi, Martin J. Wainwright, Michael I. Jordan

We provide a detailed study of the estimation of probability distributions---discrete and continuous---in a stringent setting in which data is kept private even from the statistician.

Survey Sampling

Information-theoretic lower bounds for distributed statistical estimation with communication constraints

no code implementations NeurIPS 2013 Yuchen Zhang, John Duchi, Michael I. Jordan, Martin J. Wainwright

We establish minimax risk lower bounds for distributed statistical estimation given a budget $B$ of the total number of bits that may be communicated.

General Classification

Early stopping and non-parametric regression: An optimal data-dependent stopping rule

no code implementations 15 Jun 2013 Garvesh Raskutti, Martin J. Wainwright, Bin Yu

The strategy of early stopping is a regularization technique based on choosing a stopping time for an iterative algorithm.

Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates

no code implementations 22 May 2013 Yuchen Zhang, John C. Duchi, Martin J. Wainwright

We establish optimal convergence rates for a decomposition-based scalable approach to kernel ridge regression.

Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima

no code implementations NeurIPS 2013 Po-Ling Loh, Martin J. Wainwright

We provide novel theoretical results regarding local optima of regularized $M$-estimators, allowing for nonconvexity in both loss and penalty functions.

Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses

no code implementations NeurIPS 2012 Po-Ling Loh, Martin J. Wainwright

We show that for certain graph structures, the support of the inverse covariance matrix of indicator variables on the vertices of a graph reflects the conditional independence structure of the graph.

Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

no code implementations NeurIPS 2012 Andre Wibisono, Martin J. Wainwright, Michael I. Jordan, John C. Duchi

We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates.

Stochastic Optimization

Communication-Efficient Algorithms for Statistical Optimization

no code implementations NeurIPS 2012 Yuchen Zhang, Martin J. Wainwright, John C. Duchi

The first algorithm is an averaging method that distributes the $N$ data samples evenly to $m$ machines, performs separate minimization on each subset, and then averages the estimates.
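The averaging method described above can be sketched for least squares (an illustration with hypothetical dimensions and noise level, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(6)

# Split N samples over m machines, solve least squares locally, average.
N, m, d = 6000, 10, 5
beta_star = rng.standard_normal(d)
X = rng.standard_normal((N, d))
y = X @ beta_star + 0.5 * rng.standard_normal(N)

local = []
for Xs, ys in zip(np.array_split(X, m), np.array_split(y, m)):
    local.append(np.linalg.lstsq(Xs, ys, rcond=None)[0])  # local minimization
beta_avg = np.mean(local, axis=0)                          # average the estimates

print(np.linalg.norm(beta_avg - beta_star))
```

Each machine only sees $N/m$ samples, and a single round of communication (the averaging step) combines the local estimates.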

Learning-To-Rank

Stochastic optimization and sparse statistical recovery: Optimal algorithms for high dimensions

no code implementations NeurIPS 2012 Alekh Agarwal, Sahand Negahban, Martin J. Wainwright

We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse.

Stochastic Optimization

Privacy Aware Learning

no code implementations NeurIPS 2012 John C. Duchi, Michael I. Jordan, Martin J. Wainwright

We study statistical risk minimization problems under a privacy model in which the data is kept confidential even from the learner.

High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity

no code implementations NeurIPS 2011 Po-Ling Loh, Martin J. Wainwright

On the statistical side, we provide non-asymptotic bounds that hold with high probability for the cases of noisy, missing, and/or dependent data.

A More Powerful Two-Sample Test in High Dimensions using Random Projection

no code implementations NeurIPS 2011 Miles E. Lopes, Laurent J. Jacob, Martin J. Wainwright

We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing the data dimension $p$ to exceed the sample size $n$. Specifically, we propose a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling $T^2$ statistic.

Two-sample testing

Fast global convergence rates of gradient methods for high-dimensional statistical recovery

no code implementations NeurIPS 2010 Alekh Agarwal, Sahand Negahban, Martin J. Wainwright

Many statistical $M$-estimators are based on convex optimization problems formed by the weighted sum of a loss function with a norm-based regularizer.

Distributed Dual Averaging In Networks

no code implementations NeurIPS 2010 Alekh Agarwal, Martin J. Wainwright, John C. Duchi

The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication.

A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers

no code implementations NeurIPS 2009 Sahand Negahban, Bin Yu, Martin J. Wainwright, Pradeep K. Ravikumar

The estimation of high-dimensional parametric models requires imposing some structure on the models, for instance that they be sparse, or that matrix structured parameters have low rank.

Lower bounds on minimax rates for nonparametric regression with additive sparsity and smoothness

no code implementations NeurIPS 2009 Garvesh Raskutti, Bin Yu, Martin J. Wainwright

components from some distribution $\mathbb{P}$, we determine tight lower bounds on the minimax rate for estimating the regression function with respect to squared $L^2(\mathbb{P})$ error.

Information-theoretic lower bounds on the oracle complexity of convex optimization

no code implementations NeurIPS 2009 Alekh Agarwal, Martin J. Wainwright, Peter L. Bartlett, Pradeep K. Ravikumar

The extensive use of convex optimization in machine learning and statistics makes such an understanding critical to understand fundamental computational limits of learning and estimation.

Phase transitions for high-dimensional joint support recovery

no code implementations NeurIPS 2008 Sahand Negahban, Martin J. Wainwright

We consider the following instance of transfer learning: given a pair of regression problems, suppose that the regression coefficients share a partially common support, parameterized by the overlap fraction between the two supports.

Transfer Learning

Loop Series and Bethe Variational Bounds in Attractive Graphical Models

no code implementations NeurIPS 2007 Alan S. Willsky, Erik B. Sudderth, Martin J. Wainwright

Variational methods are frequently used to approximate or bound the partition or likelihood function of a Markov random field.
