1 code implementation • 14 Jan 2025 • Anastasios N. Angelopoulos, Michael I. Jordan, Ryan J. Tibshirani
We present a new perspective on online learning that we refer to as gradient equilibrium: a sequence of iterates achieves gradient equilibrium if the average of gradients of losses along the sequence converges to zero.
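In symbols, with $\ell_t$ the loss and $\theta_t$ the iterate at round $t$, gradient equilibrium requires
$$\frac{1}{T} \sum_{t=1}^{T} \nabla \ell_t(\theta_t) \;\to\; 0 \quad \text{as } T \to \infty,$$
a direct transcription of the definition above.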
1 code implementation • 2 Oct 2024 • Pratik Patil, Jin-Hong Du, Ryan J. Tibshirani
Common practice in modern machine learning involves fitting a large number of parameters relative to the number of observations.
1 code implementation • 1 Apr 2024 • Pratik Patil, Jin-Hong Du, Ryan J. Tibshirani
We study the behavior of optimal ridge regularization and optimal ridge risk for out-of-distribution prediction, where the test distribution deviates arbitrarily from the train distribution.
no code implementations • 26 Feb 2024 • Pratik Patil, Yuchen Wu, Ryan J. Tibshirani
We analyze the statistical properties of generalized cross-validation (GCV) and leave-one-out cross-validation (LOOCV) applied to early-stopped gradient descent (GD) in high-dimensional least squares regression.
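For reference, the classical GCV criterion for a linear smoother $\hat{y} = Sy$ (the early-stopped GD iterate in least squares, with zero initialization, is of this form) is
$$\mathrm{GCV}(S) \;=\; \frac{\frac{1}{n}\|y - Sy\|_2^2}{\big(1 - \tfrac{1}{n}\operatorname{tr}(S)\big)^2},$$
which the paper analyzes, together with its LOOCV analogue, along the GD path.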
no code implementations • 5 Sep 2023 • Seunghoon Paik, Michael Celentano, Alden Green, Ryan J. Tibshirani
Integral probability metrics (IPMs) constitute a general class of nonparametric two-sample tests that are based on maximizing the mean difference between samples from one distribution $P$ versus another $Q$, over all choices of data transformations $f$ living in some function space $\mathcal{F}$.
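Concretely, the IPM indexed by $\mathcal{F}$ is
$$\mathrm{IPM}_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \Big| \mathbb{E}_{X \sim P} f(X) - \mathbb{E}_{Y \sim Q} f(Y) \Big|,$$
with the expectations replaced by sample means in the two-sample test.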
1 code implementation • 31 Jul 2023 • Anastasios N. Angelopoulos, Emmanuel J. Candès, Ryan J. Tibshirani
We study the problem of uncertainty quantification for time series prediction, with the goal of providing easy-to-use algorithms with formal guarantees.
1 code implementation • NeurIPS 2023 • Tiffany Ding, Anastasios N. Angelopoulos, Stephen Bates, Michael I. Jordan, Ryan J. Tibshirani
Standard conformal prediction methods provide a marginal coverage guarantee, which means that for a random test point, the conformal prediction set contains the true label with a user-specified probability.
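As a point of reference, here is a minimal sketch of standard split conformal prediction, the marginal-coverage baseline described above (synthetic scores; the absolute-residual score is an illustrative choice, and this is not the paper's class-conditional method):

    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha = 1000, 0.1
    scores = np.abs(rng.normal(size=n))  # nonconformity scores on a calibration set
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")
    # Prediction set for a new point x: {y : score(x, y) <= qhat}.
    # Marginally over calibration and test draws, P(Y_test in set) >= 1 - alpha.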
no code implementations • 30 Dec 2022 • Addison J. Hu, Alden Green, Ryan J. Tibshirani
We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i), f_0(x_j)$) at all neighboring cells $i, j$ in the Voronoi diagram.
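In symbols, a plausible transcription of this estimator (with edge weights $w_{ij}$ as defined in the paper) is
$$\hat{\theta} \;=\; \operatorname*{argmin}_{\theta \in \mathbb{R}^n} \; \frac{1}{2} \sum_{i=1}^n (y_i - \theta_i)^2 + \lambda \sum_{(i,j)} w_{ij} |\theta_i - \theta_j|,$$
where the penalty sums over pairs of neighboring cells $i, j$ in the Voronoi diagram.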
no code implementations • 29 Dec 2021 • Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Addison J. Hu, Ryan J. Tibshirani
We study a multivariate version of trend filtering, called Kronecker trend filtering or KTF, for the case in which the design points form a lattice in $d$ dimensions.
no code implementations • 12 Dec 2021 • Aaron Rumack, Ryan J. Tibshirani, Roni Rosenfeld
We apply this recalibration method to the 27 influenza forecasters in the FluSight Network and show that recalibration reliably improves forecast accuracy and calibration.
no code implementations • 14 Nov 2021 • Alden Green, Sivaraman Balakrishnan, Ryan J. Tibshirani
We also show that PCR-LE is \emph{manifold adaptive}: that is, we consider the situation where the design is supported on a manifold of small intrinsic dimension $m$, and give upper bounds establishing that PCR-LE achieves the faster minimax estimation ($n^{-2s/(2s + m)}$) and testing ($n^{-4s/(4s + m)}$) rates of convergence.
no code implementations • 3 Jun 2021 • Alden Green, Sivaraman Balakrishnan, Ryan J. Tibshirani
In this paper we study the statistical properties of Laplacian smoothing, a graph-based approach to nonparametric regression.
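Laplacian smoothing admits a simple closed form: with $L$ the Laplacian of the graph built over the design points,
$$\hat{\theta} \;=\; \operatorname*{argmin}_{\theta \in \mathbb{R}^n} \; \|y - \theta\|_2^2 + \lambda\, \theta^\top L \theta \;=\; (I + \lambda L)^{-1} y.$$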
1 code implementation • 26 Feb 2021 • Rasool Fakoor, Taesup Kim, Jonas Mueller, Alexander J. Smola, Ryan J. Tibshirani
Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions, or to model a diverse population without being overly reductive.
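The objective underlying quantile regression is the pinball (check) loss; a minimal sketch of its standard definition (not specific to this paper's method):

    import numpy as np

    def pinball_loss(y, yhat, tau):
        """Pinball loss at level tau in (0, 1): the quantile regression
        objective, whose population minimizer is the tau-quantile of Y | X."""
        r = y - yhat
        return np.mean(np.maximum(tau * r, (tau - 1) * r))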
no code implementations • ICML 2020 • Alnur Ali, Edgar Dobriban, Ryan J. Tibshirani
We study the implicit regularization of mini-batch stochastic gradient descent, when applied to the fundamental problem of least squares regression.
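A minimal sketch of this setting, mini-batch SGD on least squares (synthetic data; the step size and batch size are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, batch, lr = 500, 50, 25, 0.01
    X = rng.normal(size=(n, p))
    beta_star = rng.normal(size=p)
    y = X @ beta_star + rng.normal(size=n)
    beta = np.zeros(p)
    for step in range(2000):
        idx = rng.choice(n, size=batch, replace=False)      # sample a mini-batch
        grad = X[idx].T @ (X[idx] @ beta - y[idx]) / batch  # stochastic gradient
        beta -= lr * grad                                   # SGD update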
no code implementations • 9 Mar 2020 • Ryan J. Tibshirani
This paper reviews a class of univariate piecewise polynomial functions known as discrete splines, which share properties analogous to the better-known class of spline functions, but where continuity in derivatives is replaced by (a suitable notion of) continuity in divided differences.
Statistics Theory Numerical Analysis Methodology
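For reference, the divided differences in which discrete splines are (suitably) continuous are defined recursively by
$$f[x_0] = f(x_0), \qquad f[x_0, \dots, x_k] \;=\; \frac{f[x_1, \dots, x_k] - f[x_0, \dots, x_{k-1}]}{x_k - x_0}.$$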
no code implementations • 28 Feb 2020 • Benjamin G. Stokell, Rajen D. Shah, Ryan J. Tibshirani
We provide an algorithm for exact and efficient computation of the global minimum of the resulting nonconvex objective in the case of a single variable with potentially many levels, and use this within a block coordinate descent procedure in the multivariate case.
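Schematically, the multivariate procedure cycles through the variables, solving each single-variable problem exactly while the others are held fixed; a minimal skeleton (the helper solve_block is hypothetical, standing in for the paper's exact single-variable solver):

    def block_coordinate_descent(blocks, solve_block, n_iters=100):
        # Cycle over blocks (one per categorical variable), exactly
        # minimizing over each block with the others held fixed.
        for _ in range(n_iters):
            for j in range(len(blocks)):
                blocks[j] = solve_block(j, blocks)
        return blocks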
no code implementations • 8 May 2019 • Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, Ryan J. Tibshirani
This paper introduces the jackknife+, a novel method for constructing predictive confidence intervals.
Methodology
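As a sketch of the construction (hedged from memory; see the paper for the precise quantile definitions): with leave-one-out fits $\hat{\mu}_{-i}$ and residuals $R_i = |y_i - \hat{\mu}_{-i}(x_i)|$, the jackknife+ interval at a point $x$ takes the form
$$\Big[\, \hat{q}^{\,-}_{n,\alpha}\big\{\hat{\mu}_{-i}(x) - R_i\big\}, \;\; \hat{q}^{\,+}_{n,\alpha}\big\{\hat{\mu}_{-i}(x) + R_i\big\} \,\Big],$$
built from lower and upper empirical quantiles of the $n$ leave-one-out values, and carries a distribution-free coverage guarantee of at least $1 - 2\alpha$.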
1 code implementation • NeurIPS 2019 • Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, Ryan J. Tibshirani
We extend conformal prediction methodology beyond the case of exchangeable data.
Methodology
no code implementations • 24 Mar 2019 • Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Aaditya Ramdas, Ryan J. Tibshirani
We present an extension of the Kolmogorov-Smirnov (KS) two-sample test, which can be more sensitive to differences in the tails.
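For reference, the classical two-sample KS statistic being extended is
$$\mathrm{KS} \;=\; \sup_{x \in \mathbb{R}} \big| F_m(x) - G_n(x) \big|,$$
where $F_m$ and $G_n$ are the two empirical CDFs; its relative insensitivity to tail differences is what the extension addresses.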
no code implementations • 19 Mar 2019 • Trevor Hastie, Andrea Montanari, Saharon Rosset, Ryan J. Tibshirani
Interpolators -- estimators that achieve zero training error -- have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type.
no code implementations • 12 Mar 2019 • Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, Ryan J. Tibshirani
We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally.
Statistics Theory
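In symbols, marginal coverage asks $\mathbb{P}\{Y \in \hat{C}(X)\} \geq 1 - \alpha$, averaging over the test point, while conditional coverage asks $\mathbb{P}\{Y \in \hat{C}(X) \mid X = x\} \geq 1 - \alpha$ for (almost) every $x$; the paper examines to what extent the latter, stronger guarantee is achievable without distributional assumptions.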
no code implementations • 23 Oct 2018 • Alnur Ali, J. Zico Kolter, Ryan J. Tibshirani
Our primary focus is to compare the risk of gradient flow to that of ridge regression.
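For concreteness, under zero initialization the two estimators in question have closed forms
$$\hat{\beta}^{\mathrm{gf}}(t) \;=\; (X^\top X)^{+}\big(I - e^{-t X^\top X / n}\big) X^\top y, \qquad \hat{\beta}^{\mathrm{ridge}}(\lambda) \;=\; (X^\top X + n\lambda I)^{-1} X^\top y,$$
and a natural calibration for comparing them is $\lambda = 1/t$, with heavier regularization corresponding to earlier stopping.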
no code implementations • NeurIPS 2017 • Veeranjaneyulu Sadhanala, Yu-Xiang Wang, James L. Sharpnack, Ryan J. Tibshirani
To move past this, we define two new higher-order TV classes, based on two ways of compiling the discrete derivatives of a parameter across the nodes.
no code implementations • NeurIPS 2017 • Kevin Lin, James L. Sharpnack, Alessandro Rinaldo, Ryan J. Tibshirani
In the 1-dimensional multiple changepoint detection problem, we derive a new fast error rate for the fused lasso estimator, under the assumption that the mean vector has a sparse number of changepoints.
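The estimator in question is the 1-dimensional fused lasso (total variation denoising):
$$\hat{\theta} \;=\; \operatorname*{argmin}_{\theta \in \mathbb{R}^n} \; \frac{1}{2} \sum_{i=1}^n (y_i - \theta_i)^2 + \lambda \sum_{i=1}^{n-1} |\theta_{i+1} - \theta_i|.$$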
1 code implementation • 27 Jul 2017 • Trevor Hastie, Robert Tibshirani, Ryan J. Tibshirani
In exciting new work, Bertsimas et al. (2016) showed that the classical best subset selection problem in regression modeling can be formulated as a mixed integer optimization (MIO) problem.
Methodology Computation
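The classical problem referred to here is $\ell_0$-constrained least squares,
$$\min_{\beta \in \mathbb{R}^p} \; \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \leq k,$$
where $\|\beta\|_0$ counts the nonzero entries of $\beta$; this is the problem that Bertsimas et al. (2016) cast as an MIO.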
no code implementations • 16 Feb 2017 • Veeranjaneyulu Sadhanala, Ryan J. Tibshirani
We study additive models built with trend filtering, i.e., additive models whose components are each regularized by the (discrete) total variation of their $k$th (discrete) derivative, for a chosen integer $k \geq 0$.
1 code implementation • 11 Jun 2016 • Sangwon Hyun, Max G'Sell, Ryan J. Tibshirani
Leveraging a sequential characterization of this path from Tibshirani & Taylor (2011), and recent advances in post-selection inference from Lee et al. (2016) and Tibshirani et al. (2016), we develop exact hypothesis tests and confidence intervals for linear contrasts of the underlying mean vector, conditioned on any model selection event along the generalized lasso path (assuming Gaussian errors in the observations).
Methodology
5 code implementations • 14 Apr 2016 • Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J. Tibshirani, Larry Wasserman
In the spirit of reproducibility, all of our empirical results can also be easily (re)generated using this package.
no code implementations • 29 Jan 2015 • Samrachana Adhikari, Fabrizio Lecci, James T. Becker, Brian W. Junker, Lewis H. Kuller, Oscar L. Lopez, Ryan J. Tibshirani
We study regularized estimation in high-dimensional longitudinal classification problems, using the lasso and fused lasso regularizers.
no code implementations • 4 Dec 2014 • Yen-Chi Chen, Christopher R. Genovese, Ryan J. Tibshirani, Larry Wasserman
Modal regression estimates the local modes of the distribution of $Y$ given $X=x$, instead of the mean as in usual regression, and can hence reveal important structure missed by standard regression methods.
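In symbols, assuming a smooth conditional density $p(y \mid x)$, the modal regression target is the set of local maxima
$$M(x) \;=\; \Big\{\, y : \tfrac{\partial}{\partial y} p(y \mid x) = 0, \; \tfrac{\partial^2}{\partial y^2} p(y \mid x) < 0 \,\Big\},$$
in contrast to the conditional mean $\mathbb{E}(Y \mid X = x)$ targeted by usual regression.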
no code implementations • 28 Oct 2014 • Yu-Xiang Wang, James Sharpnack, Alex Smola, Ryan J. Tibshirani
We introduce a family of adaptive estimators on graphs, based on penalizing the $\ell_1$ norm of discrete graph differences.
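A representative member of this family solves
$$\hat{\theta} \;=\; \operatorname*{argmin}_{\theta \in \mathbb{R}^n} \; \frac{1}{2} \|y - \theta\|_2^2 + \lambda \|\Delta \theta\|_1,$$
where $\Delta$ is a discrete graph difference operator; for first differences, $\Delta$ is the edge incidence matrix of the graph, and higher-order members of the family use higher-order difference operators.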
no code implementations • 25 Aug 2014 • Ryan J. Tibshirani
Forward stagewise regression follows a very simple strategy for constructing a sequence of sparse regression estimates: it starts with all coefficients equal to zero, and iteratively updates the coefficient (by a small amount $\epsilon$) of the variable that achieves the maximal absolute inner product with the current residual.
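The strategy translates directly into code; a minimal sketch on synthetic data ($\epsilon$ and the number of steps are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(2)
    n, p, eps, n_steps = 200, 20, 0.01, 500
    X = rng.normal(size=(n, p))
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)
    beta = np.zeros(p)
    for _ in range(n_steps):
        r = y - X @ beta                # current residual
        c = X.T @ r                     # inner products with residual
        j = np.argmax(np.abs(c))        # variable with maximal |inner product|
        beta[j] += eps * np.sign(c[j])  # small update to that coefficient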
4 code implementations • 9 Jun 2014 • Aaditya Ramdas, Ryan J. Tibshirani
This paper presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool.
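For a chosen integer $k \geq 0$, trend filtering solves
$$\hat{\beta} \;=\; \operatorname*{argmin}_{\beta \in \mathbb{R}^n} \; \frac{1}{2} \|y - \beta\|_2^2 + \lambda \big\| D^{(k+1)} \beta \big\|_1,$$
where $D^{(k+1)}$ is the discrete difference operator of order $k+1$; the algorithm in this paper is a specialized ADMM approach to this problem.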
no code implementations • 3 May 2014 • Yu-Xiang Wang, Alex Smola, Ryan J. Tibshirani
We study a novel spline-like basis, which we name the "falling factorial basis", bearing many similarities to the classic truncated power basis.
1 code implementation • 16 Jan 2014 • Ryan J. Tibshirani, Jonathan Taylor, Richard Lockhart, Robert Tibshirani
We propose new inference tools for forward stepwise regression, least angle regression, and the lasso.
Methodology 62F03, 62G15
1 code implementation • 10 Apr 2013 • Ryan J. Tibshirani
We also provide theoretical support for these empirical findings; most notably, we prove that (with the right choice of tuning parameter) the trend filtering estimate converges to the true underlying function at the minimax rate for functions whose $k$th derivative is of bounded variation.
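For functions whose $k$th derivative is of bounded variation, the minimax rate referenced here is $n^{-(2k+2)/(2k+3)}$ in squared error; for example, $k = 0$ recovers the familiar $n^{-2/3}$ rate for estimating bounded variation functions.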
no code implementations • 30 Jan 2013 • Richard Lockhart, Jonathan Taylor, Ryan J. Tibshirani, Robert Tibshirani
We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an $\operatorname{Exp}(1)$ asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model).
Statistics Theory Methodology
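As a sketch of the statistic (hedged from memory; see the paper for the precise definition): with $\lambda_{k+1}$ the next knot in the lasso path, $A$ the active set just before it, and $\tilde{\beta}_A(\lambda_{k+1})$ the lasso solution fit on the variables in $A$ alone, the covariance test statistic takes the form
$$T_k \;=\; \big( \langle y, X\hat{\beta}(\lambda_{k+1}) \rangle - \langle y, X_A \tilde{\beta}_A(\lambda_{k+1}) \rangle \big) \big/ \sigma^2,$$
which is asymptotically $\operatorname{Exp}(1)$ under the null described above.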