Search Results for author: Ryan J. Tibshirani

Found 35 papers, 11 papers with code

Optimal Ridge Regularization for Out-of-Distribution Prediction

1 code implementation • 1 Apr 2024 • Pratik Patil, Jin-Hong Du, Ryan J. Tibshirani

We study the behavior of optimal ridge regularization and optimal ridge risk for out-of-distribution prediction, where the test distribution deviates arbitrarily from the train distribution.
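For reference, a minimal sketch of the ridge estimator whose regularization level the paper optimizes; the out-of-distribution risk formulas and the behavior of the optimal $\lambda$ are the paper's contribution and are not reproduced here. All names and data are illustrative.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator: solve (X'X/n + lam*I) beta = X'y/n."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta = rng.standard_normal(50) / np.sqrt(50)
y = X @ beta + rng.standard_normal(200)
print(ridge(X, y, lam=0.5)[:3])
```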

regression

Failures and Successes of Cross-Validation for Early-Stopped Gradient Descent

no code implementations • 26 Feb 2024 • Pratik Patil, Yuchen Wu, Ryan J. Tibshirani

We analyze the statistical properties of generalized cross-validation (GCV) and leave-one-out cross-validation (LOOCV) applied to early-stopped gradient descent (GD) in high-dimensional least squares regression.
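Early-stopped GD on least squares is a linear smoother, $\hat{y}_t = S_t y$ with $S_t = I - (I - (\eta/n) X X^\top)^t$, so the textbook GCV and LOOCV criteria can be evaluated along the iteration path. The sketch below shows those classical criteria only, not the corrected estimators the paper develops for the high-dimensional regime; step size and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, eta = 100, 20, 0.1
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

A = np.eye(n) - (eta / n) * (X @ X.T)    # one GD step, acting on fitted values
S = np.eye(n) - A                        # smoother matrix S_t at t = 1
for t in range(1, 51):
    resid = y - S @ y
    loocv = np.mean((resid / (1 - np.diag(S))) ** 2)
    gcv = np.mean(resid ** 2) / (1 - np.trace(S) / n) ** 2
    if t % 10 == 0:
        print(f"t={t:3d}  LOOCV={loocv:.3f}  GCV={gcv:.3f}")
    S = np.eye(n) - A @ (np.eye(n) - S)  # S_{t+1} = I - A (I - S_t)
```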

Prediction Intervals • regression

Maximum Mean Discrepancy Meets Neural Networks: The Radon-Kolmogorov-Smirnov Test

no code implementations • 5 Sep 2023 • Seunghoon Paik, Michael Celentano, Alden Green, Ryan J. Tibshirani

Maximum mean discrepancy (MMD) refers to a general class of nonparametric two-sample tests that are based on maximizing the mean difference over samples from one distribution $P$ versus another $Q$, over all choices of data transformations $f$ living in some function space $\mathcal{F}$.
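When $\mathcal{F}$ is the unit ball of an RKHS, the MMD has the familiar closed form sketched below; the paper instead takes $\mathcal{F}$ to be a ball in Radon bounded variation space, realized by neural networks, which does not reduce to this formula. A minimal kernel-MMD sketch with a Gaussian kernel, for orientation only:

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between samples x ~ P and y ~ Q."""
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(300, 2))
y = rng.normal(0.5, 1.0, size=(300, 2))
print(mmd2(x, y))
```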

Conformal PID Control for Time Series Prediction

1 code implementation • 31 Jul 2023 • Anastasios N. Angelopoulos, Emmanuel J. Candes, Ryan J. Tibshirani

We study the problem of uncertainty quantification for time series prediction, with the goal of providing easy-to-use algorithms with formal guarantees.
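The simplest member of this family is the proportional ("P") part: an online quantile tracker that nudges the conformal threshold up after a miscoverage and down otherwise. A hedged sketch of that update alone; the paper's full method adds integral and derivative terms plus a forecasting module, and all parameter values here are illustrative.

```python
import numpy as np

def quantile_tracker(scores, alpha=0.1, eta=0.05):
    """Online threshold q_t aiming for 1 - alpha coverage of the scores."""
    q, qs = 0.0, []
    for s in scores:
        qs.append(q)
        err = float(s > q)          # 1 if the interval would have missed
        q += eta * (err - alpha)    # P-control update on the threshold
    return np.array(qs)

rng = np.random.default_rng(0)
scores = np.abs(rng.standard_normal(2000))   # e.g. absolute residuals
q = quantile_tracker(scores, alpha=0.1)
print("empirical coverage:", np.mean(scores <= q))
```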

Conformal Prediction • Time Series • +2

Class-Conditional Conformal Prediction with Many Classes

1 code implementation • NeurIPS 2023 • Tiffany Ding, Anastasios N. Angelopoulos, Stephen Bates, Michael I. Jordan, Ryan J. Tibshirani

Standard conformal prediction methods provide a marginal coverage guarantee, which means that for a random test point, the conformal prediction set contains the true label with a user-specified probability.
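For concreteness, the marginal guarantee is what split conformal delivers: with exchangeable data the sets below contain the true label with probability at least $1 - \alpha$ on average over test points, but not necessarily within each class, which is the gap the paper targets. A minimal sketch with simulated softmax scores (all names illustrative):

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Prediction sets from softmax scores; marginal, not class-conditional."""
    n = len(cal_labels)
    scores = 1 - cal_probs[np.arange(n), cal_labels]   # conformity scores
    k = int(np.ceil((n + 1) * (1 - alpha)))
    qhat = np.sort(scores)[k - 1]
    return [np.where(1 - p <= qhat)[0] for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=500)
cal_labels = np.array([rng.choice(5, p=p) for p in cal_probs])
test_probs = rng.dirichlet(np.ones(5), size=3)
print(split_conformal_sets(cal_probs, cal_labels, test_probs))
```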

Conformal Prediction

The Voronoigram: Minimax Estimation of Bounded Variation Functions From Scattered Data

no code implementations • 30 Dec 2022 • Addison J. Hu, Alden Green, Ryan J. Tibshirani

We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i), f_0(x_j)$) at all neighboring cells $i, j$ in the Voronoi diagram.
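Neighboring Voronoi cells correspond to edges of the Delaunay triangulation, so the discrete TV penalty can be assembled from scipy's Delaunay output. A sketch of the penalty only, with unit edge weights standing in for the boundary-length weights in the paper; the estimator itself additionally requires solving the resulting fused-lasso-type program.

```python
import numpy as np
from scipy.spatial import Delaunay

def voronoi_tv(theta, points):
    """Sum of |theta_i - theta_j| over pairs of neighboring Voronoi cells."""
    tri = Delaunay(points)
    edges = set()
    for s in tri.simplices:          # Delaunay edges = Voronoi neighbor pairs
        for a in range(3):
            for b in range(a + 1, 3):
                edges.add((min(s[a], s[b]), max(s[a], s[b])))
    return sum(abs(theta[i] - theta[j]) for i, j in edges)

rng = np.random.default_rng(0)
pts = rng.uniform(size=(50, 2))
theta = np.sin(4 * pts[:, 0])
print(voronoi_tv(theta, pts))
```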

Multivariate Trend Filtering for Lattice Data

no code implementations • 29 Dec 2021 • Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Addison J. Hu, Ryan J. Tibshirani

We study a multivariate version of trend filtering, called Kronecker trend filtering or KTF, for the case in which the design points form a lattice in $d$ dimensions.
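On a 2-d lattice the KTF penalty applies a $(k+1)$st discrete difference separately along each coordinate and sums absolute values. A sketch of the penalty via np.diff, assuming unit grid spacing; the estimator minimizes squared error plus $\lambda$ times this penalty, which is not solved here.

```python
import numpy as np

def ktf_penalty(theta_grid, k=1):
    """Kronecker trend filtering penalty: l1 norm of (k+1)st discrete
    differences taken along each axis of the lattice."""
    pen = 0.0
    for axis in range(theta_grid.ndim):
        pen += np.abs(np.diff(theta_grid, n=k + 1, axis=axis)).sum()
    return pen

x = np.linspace(0, 1, 30)
theta = np.add.outer(x ** 2, np.sin(3 * x))   # smooth surface on a 30x30 grid
print(ktf_penalty(theta, k=1))
```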

Recalibrating probabilistic forecasts of epidemics

no code implementations • 12 Dec 2021 • Aaron Rumack, Ryan J. Tibshirani, Roni Rosenfeld

We apply this recalibration method to the 27 influenza forecasters in the FluSight Network and show that recalibration reliably improves forecast accuracy and calibration.

Minimax Optimal Regression over Sobolev Spaces via Laplacian Eigenmaps on Neighborhood Graphs

no code implementations • 14 Nov 2021 • Alden Green, Sivaraman Balakrishnan, Ryan J. Tibshirani

We also show that PCR-LE is \emph{manifold adaptive}: that is, we consider the situation where the design is supported on a manifold of small intrinsic dimension $m$, and give upper bounds establishing that PCR-LE achieves the faster minimax estimation ($n^{-2s/(2s + m)}$) and testing ($n^{-4s/(4s + m)}$) rates of convergence.
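PCR-LE (principal components regression with Laplacian eigenmaps) projects the responses onto the bottom eigenvectors of a neighborhood-graph Laplacian. A hedged sketch with a kNN graph; the choices of graph, number of neighbors, and number K of eigenvectors all matter for the rates in the paper and are illustrative here.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def pcr_le(X, y, n_neighbors=10, K=15):
    W = kneighbors_graph(X, n_neighbors, mode="connectivity")
    W = (0.5 * (W + W.T)).toarray()          # symmetrized adjacency
    L = np.diag(W.sum(1)) - W                # unnormalized graph Laplacian
    evals, evecs = np.linalg.eigh(L)
    V = evecs[:, :K]                         # bottom K eigenvectors (smoothest)
    return V @ (V.T @ y)                     # least squares projection

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))
y = np.sin(4 * X[:, 0]) + 0.3 * rng.standard_normal(300)
print(np.mean((pcr_le(X, y) - np.sin(4 * X[:, 0])) ** 2))
```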

regression

Minimax Optimal Regression over Sobolev Spaces via Laplacian Regularization on Neighborhood Graphs

no code implementations • 3 Jun 2021 • Alden Green, Sivaraman Balakrishnan, Ryan J. Tibshirani

In this paper we study the statistical properties of Laplacian smoothing, a graph-based approach to nonparametric regression.
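Laplacian smoothing solves a ridge-type problem on the graph: $\hat\theta = \mathrm{argmin}\ \|y - \theta\|_2^2 + \lambda\, \theta^\top L \theta = (I + \lambda L)^{-1} y$. A minimal sketch on a chain graph; in the paper's setting the graph would instead be built from the design points (e.g. a kNN graph).

```python
import numpy as np

def laplacian_smooth(y, L, lam):
    """Solve (I + lam * L) theta = y, the Laplacian smoothing estimate."""
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)

n = 200
# Laplacian of the chain graph 1 - 2 - ... - n
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1

rng = np.random.default_rng(0)
f0 = np.sin(np.linspace(0, 3 * np.pi, n))
y = f0 + 0.5 * rng.standard_normal(n)
theta = laplacian_smooth(y, L, lam=25.0)
print(np.mean((theta - f0) ** 2), np.mean((y - f0) ** 2))
```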

regression

Flexible Model Aggregation for Quantile Regression

1 code implementation • 26 Feb 2021 • Rasool Fakoor, Taesup Kim, Jonas Mueller, Alexander J. Smola, Ryan J. Tibshirani

Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions, or to model a diverse population without being overly reductive.
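Quantile regression minimizes the pinball loss, and aggregation combines several models' quantile predictions. A minimal sketch of the loss and a plain average of two stand-in base predictions; the paper studies far more flexible, learned aggregation schemes.

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Average pinball (quantile) loss at level tau."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(0)
y = rng.standard_normal(1000)
# Two stand-in "base models" for the 0.9-quantile, plus their average:
q1, q2 = 1.1, 1.5
for name, q in [("model1", q1), ("model2", q2), ("average", (q1 + q2) / 2)]:
    print(name, pinball_loss(y, q, tau=0.9))
```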

Econometrics • Prediction Intervals • +1

The Implicit Regularization of Stochastic Gradient Flow for Least Squares

no code implementations • ICML 2020 • Alnur Ali, Edgar Dobriban, Ryan J. Tibshirani

We study the implicit regularization of mini-batch stochastic gradient descent, when applied to the fundamental problem of least squares regression.

regression

Divided Differences, Falling Factorials, and Discrete Splines: Another Look at Trend Filtering and Related Problems

no code implementations • 9 Mar 2020 • Ryan J. Tibshirani

This paper reviews a class of univariate piecewise polynomial functions known as discrete splines, which share properties analogous to the better-known class of spline functions, but where continuity in derivatives is replaced by (a suitable notion of) continuity in divided differences.
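Divided differences, the discrete analogue of derivatives used throughout the paper, satisfy the recursion $f[x_1,\dots,x_k] = (f[x_2,\dots,x_k] - f[x_1,\dots,x_{k-1}])/(x_k - x_1)$. A minimal sketch:

```python
def divided_difference(xs, fs):
    """Divided difference f[x_0, ..., x_k] via the standard recursion."""
    if len(xs) == 1:
        return fs[0]
    left = divided_difference(xs[:-1], fs[:-1])
    right = divided_difference(xs[1:], fs[1:])
    return (right - left) / (xs[-1] - xs[0])

# For f(x) = x^2, every second-order divided difference equals 1.
xs = [0.0, 0.7, 1.5]
print(divided_difference(xs, [x ** 2 for x in xs]))  # -> 1.0
```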

Statistics Theory • Numerical Analysis • Methodology

Modelling High-Dimensional Categorical Data Using Nonconvex Fusion Penalties

no code implementations • 28 Feb 2020 • Benjamin G. Stokell, Rajen D. Shah, Ryan J. Tibshirani

We provide an algorithm for exact and efficient computation of the global minimum of the resulting nonconvex objective in the case with a single variable with potentially many levels, and use this within a block coordinate descent procedure in the multivariate case.

Clustering

Predictive inference with the jackknife+

no code implementations • 8 May 2019 • Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani

This paper introduces the jackknife+, which is a novel method for constructing predictive confidence intervals.
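The jackknife+ interval at a test point combines leave-one-out predictions and leave-one-out residuals. A sketch following that construction, with a linear least squares base learner chosen for concreteness (the method applies to any regression algorithm):

```python
import numpy as np

def jackknife_plus(X, y, x_test, alpha=0.1):
    """Jackknife+ prediction interval with a linear base learner."""
    n = len(y)
    lo, hi = np.empty(n), np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        r_i = abs(y[i] - X[i] @ beta)          # leave-one-out residual
        mu_i = x_test @ beta                   # leave-one-out prediction
        lo[i], hi[i] = mu_i - r_i, mu_i + r_i
    k = int(np.ceil((1 - alpha) * (n + 1)))
    return np.sort(lo)[n - k], np.sort(hi)[k - 1]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(100)
print(jackknife_plus(X, y, x_test=np.array([0.2, 0.1, -0.3])))
```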

Methodology

Conformal Prediction Under Covariate Shift

1 code implementation • NeurIPS 2019 • Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani

We extend conformal prediction methodology beyond the case of exchangeable data.
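The leading example is covariate shift: when test covariates come from a tilted distribution with known (or estimated) likelihood ratio $w(x)$, the conformal quantile is taken under a weighted empirical distribution. A hedged sketch of the weighted split-conformal threshold, with simulated scores and stand-in likelihood ratios:

```python
import numpy as np

def weighted_conformal_qhat(scores, weights, w_test, alpha=0.1):
    """1-alpha quantile of scores under likelihood-ratio weights, with the
    test point's weight attached to an atom at +infinity."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    p = np.concatenate([w, [w_test]])
    p = p / p.sum()                      # normalized weights incl. test point
    cum = np.cumsum(p[:-1])
    idx = np.searchsorted(cum, 1 - alpha)
    return np.inf if idx == len(s) else s[idx]

rng = np.random.default_rng(0)
scores = np.abs(rng.standard_normal(500))
weights = np.exp(0.3 * rng.standard_normal(500))  # stand-in likelihood ratios
print(weighted_conformal_qhat(scores, weights, w_test=1.0))
```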

Methodology

A Higher-Order Kolmogorov-Smirnov Test

no code implementations • 24 Mar 2019 • Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Aaditya Ramdas, Ryan J. Tibshirani

We present an extension of the Kolmogorov-Smirnov (KS) two-sample test, which can be more sensitive to differences in the tails.
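For reference, the classic (zeroth-order) KS statistic is the sup-norm gap between the two empirical CDFs; the paper's extension replaces the indicators behind this statistic with higher-order, integrated counterparts to gain tail sensitivity. Baseline sketch only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = stats.t(df=3).rvs(500, random_state=rng)   # heavier tails than normal

# Classic KS: sup-norm distance between the two empirical CDFs.
stat, pval = stats.ks_2samp(x, y)
print(f"KS stat = {stat:.3f}, p = {pval:.3f}")
```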

Surprises in High-Dimensional Ridgeless Least Squares Interpolation

no code implementations • 19 Mar 2019 • Trevor Hastie, Andrea Montanari, Saharon Rosset, Ryan J. Tibshirani

Interpolators -- estimators that achieve zero training error -- have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type.
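The paper's central object is the minimum-$\ell_2$-norm ("ridgeless") least squares interpolator, the $\lambda \to 0^+$ limit of ridge, computable via a pseudoinverse. A minimal sketch in the overparametrized regime $p > n$, where it fits the training data exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                              # overparametrized: p > n
X = rng.standard_normal((n, p))
y = X @ (rng.standard_normal(p) / np.sqrt(p)) + rng.standard_normal(n)

beta = np.linalg.pinv(X) @ y                # min-norm least squares solution
print("train error:", np.max(np.abs(X @ beta - y)))   # ~ 0: interpolation
```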

The limits of distribution-free conditional predictive inference

no code implementations • 12 Mar 2019 • Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, Ryan J. Tibshirani

We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally.

Statistics Theory

A Continuous-Time View of Early Stopping for Least Squares

no code implementations • 23 Oct 2018 • Alnur Ali, J. Zico Kolter, Ryan J. Tibshirani

Our primary focus is to compare the risk of gradient flow to that of ridge regression.
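Gradient flow on least squares has the closed form $\beta(t) = (X^\top X)^{+}(I - e^{-t X^\top X / n}) X^\top y$, and the paper calibrates it against ridge with $\lambda = 1/t$. A sketch of that comparison; the precise risk inequalities are the paper's result and are not verified here.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, p, t = 200, 30, 5.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

G = X.T @ X / n
beta_gf = np.linalg.pinv(G) @ (np.eye(p) - expm(-t * G)) @ (X.T @ y / n)
beta_ridge = np.linalg.solve(G + (1 / t) * np.eye(p), X.T @ y / n)
print(np.linalg.norm(beta_gf - beta_ridge) / np.linalg.norm(beta_ridge))
```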

regression

A Sharp Error Analysis for the Fused Lasso, with Application to Approximate Changepoint Screening

no code implementations • NeurIPS 2017 • Kevin Lin, James L. Sharpnack, Alessandro Rinaldo, Ryan J. Tibshirani

In the 1-dimensional multiple changepoint detection problem, we derive a new fast error rate for the fused lasso estimator, under the assumption that the mean vector has a sparse number of changepoints.

Higher-Order Total Variation Classes on Grids: Minimax Theory and Trend Filtering Methods

no code implementations • NeurIPS 2017 • Veeranjaneyulu Sadhanala, Yu-Xiang Wang, James L. Sharpnack, Ryan J. Tibshirani

To move past this, we define two new higher-order TV classes, based on two ways of compiling the discrete derivatives of a parameter across the nodes.

Extended Comparisons of Best Subset Selection, Forward Stepwise Selection, and the Lasso

1 code implementation • 27 Jul 2017 • Trevor Hastie, Robert Tibshirani, Ryan J. Tibshirani

In exciting new work, Bertsimas et al. (2016) showed that the classical best subset selection problem in regression modeling can be formulated as a mixed integer optimization (MIO) problem.

Methodology • Computation

Additive Models with Trend Filtering

no code implementations • 16 Feb 2017 • Veeranjaneyulu Sadhanala, Ryan J. Tibshirani

We study additive models built with trend filtering, i.e., additive models whose components are each regularized by the (discrete) total variation of their $k$th (discrete) derivative, for a chosen integer $k \geq 0$.

Additive models

Exact Post-Selection Inference for Changepoint Detection and Other Generalized Lasso Problems

1 code implementation • 11 Jun 2016 • Sangwon Hyun, Max G'Sell, Ryan J. Tibshirani

Leveraging a sequential characterization of this path from Tibshirani & Taylor (2011), and recent advances in post-selection inference from Lee et al. (2016) and Tibshirani et al. (2016), we develop exact hypothesis tests and confidence intervals for linear contrasts of the underlying mean vector, conditioned on any model selection event along the generalized lasso path (assuming Gaussian errors in the observations).

Methodology

Distribution-Free Predictive Inference For Regression

5 code implementations • 14 Apr 2016 • Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J. Tibshirani, Larry Wasserman

In the spirit of reproducibility, all of our empirical results can also be easily (re)generated using the accompanying software package.

Computational Efficiency • Prediction Intervals • +2

Nonparametric modal regression

no code implementations • 4 Dec 2014 • Yen-Chi Chen, Christopher R. Genovese, Ryan J. Tibshirani, Larry Wasserman

Modal regression estimates the local modes of the distribution of $Y$ given $X=x$, instead of the mean, as in the usual regression sense, and can hence reveal important structure missed by usual regression methods.

regression

Trend Filtering on Graphs

no code implementations • 28 Oct 2014 • Yu-Xiang Wang, James Sharpnack, Alex Smola, Ryan J. Tibshirani

We introduce a family of adaptive estimators on graphs, based on penalizing the $\ell_1$ norm of discrete graph differences.

regression

A General Framework for Fast Stagewise Algorithms

no code implementations • 25 Aug 2014 • Ryan J. Tibshirani

Forward stagewise regression follows a very simple strategy for constructing a sequence of sparse regression estimates: it starts with all coefficients equal to zero, and iteratively updates the coefficient (by a small amount $\epsilon$) of the variable that achieves the maximal absolute inner product with the current residual.
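The strategy described translates directly into a few lines of code. A sketch of forward stagewise with step size $\epsilon$; the paper's framework generalizes this pattern well beyond sparse regression, which is not shown here.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=2000):
    """Forward stagewise regression: repeatedly nudge the coefficient most
    correlated with the current residual by +/- eps."""
    beta = np.zeros(X.shape[1])
    r = y.copy()
    for _ in range(n_steps):
        corr = X.T @ r
        j = np.argmax(np.abs(corr))          # max |inner product| with residual
        beta[j] += eps * np.sign(corr[j])
        r -= eps * np.sign(corr[j]) * X[:, j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * rng.standard_normal(100)
print(np.round(forward_stagewise(X, y), 2))
```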

Image Denoising • Matrix Completion • +1

Fast and Flexible ADMM Algorithms for Trend Filtering

4 code implementations • 9 Jun 2014 • Aaditya Ramdas, Ryan J. Tibshirani

This paper presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool.
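One standard ADMM splitting for trend filtering minimizes $\frac{1}{2}\|y - \theta\|_2^2 + \lambda \|D^{(k+1)}\theta\|_1$ by alternating a linear solve in $\theta$ with soft-thresholding of an auxiliary variable. The generic splitting below is a sketch for orientation; the paper's fast variant instead splits so that the $\theta$-subproblem is a fused lasso solved by dynamic programming.

```python
import numpy as np

def diff_op(n, order):
    """Discrete difference operator D^(order) as an (n-order) x n matrix."""
    D = np.eye(n)
    for _ in range(order):
        D = D[1:] - D[:-1]
    return D

def trend_filter_admm(y, lam, k=1, rho=1.0, n_iter=500):
    n = len(y)
    D = diff_op(n, k + 1)
    theta, z, u = y.copy(), D @ y, np.zeros(n - k - 1)
    M = np.linalg.inv(np.eye(n) + rho * D.T @ D)   # cache the theta-solve
    for _ in range(n_iter):
        theta = M @ (y + rho * D.T @ (z - u))
        Dt = D @ theta
        z = np.sign(Dt + u) * np.maximum(np.abs(Dt + u) - lam / rho, 0)
        u += Dt - z                                 # scaled dual update
    return theta

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.abs(x - 0.5) + 0.05 * rng.standard_normal(200)  # piecewise linear + noise
print(trend_filter_admm(y, lam=1.0, k=1)[:5])
```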

The Falling Factorial Basis and Its Statistical Applications

no code implementations • 3 May 2014 • Yu-Xiang Wang, Alex Smola, Ryan J. Tibshirani

We study a novel spline-like basis, which we name the "falling factorial basis", bearing many similarities to the classic truncated power basis.
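A hedged sketch of one common convention for this basis: $k+1$ polynomial-like functions $h_j(x) = \prod_{l=1}^{j-1}(x - x_l)$ followed by truncated pieces $\prod_{l=j+1}^{j+k}(x - x_l)\,1\{x > x_{j+k}\}$; the paper's definition may differ by normalization constants, so treat this as illustrative.

```python
import numpy as np

def falling_factorial_basis(x_eval, knots, k):
    """Falling factorial basis of degree k over sorted knots (one common
    convention; constant-factor normalizations vary across papers)."""
    n = len(knots)
    H = np.ones((len(x_eval), n))
    for j in range(2, k + 2):                  # polynomial-like functions
        H[:, j - 1] = np.prod([x_eval - knots[l] for l in range(j - 1)], axis=0)
    for j in range(1, n - k):                  # truncated pieces
        piece = np.ones(len(x_eval))
        for l in range(j, j + k):
            piece = piece * (x_eval - knots[l])
        H[:, k + j] = piece * (x_eval > knots[j + k - 1])
    return H

knots = np.linspace(0, 1, 9)
print(falling_factorial_basis(np.linspace(0, 1, 5), knots, k=1).shape)  # (5, 9)
```

For k = 0 and k = 1 this reduces to the familiar truncated power basis; the interesting divergence between the two bases begins at k = 2.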

Exact Post-Selection Inference for Sequential Regression Procedures

1 code implementation • 16 Jan 2014 • Ryan J. Tibshirani, Jonathan Taylor, Richard Lockhart, Robert Tibshirani

We propose new inference tools for forward stepwise regression, least angle regression, and the lasso.

Methodology • 62F03, 62G15

Adaptive piecewise polynomial estimation via trend filtering

1 code implementation • 10 Apr 2013 • Ryan J. Tibshirani

We also provide theoretical support for these empirical findings; most notably, we prove that (with the right choice of tuning parameter) the trend filtering estimate converges to the true underlying function at the minimax rate for functions whose $k$th derivative is of bounded variation.

regression

A significance test for the lasso

no code implementations • 30 Jan 2013 • Richard Lockhart, Jonathan Taylor, Ryan J. Tibshirani, Robert Tibshirani

We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an $\operatorname {Exp}(1)$ asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model).
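A hedged sketch of the statistic at the first knot of the lasso path, where it reduces to $T_1 = \langle y, X\hat\beta(\lambda_2) \rangle / \sigma^2$, using sklearn's LARS implementation to locate the knots. This ignores the general-step form of the statistic and assumes the paper's setup (standardized design, known $\sigma$); all data here are simulated under the global null.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p, sigma = 100, 10, 1.0
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)              # unit-norm columns
y = sigma * rng.standard_normal(n)          # global null: no active variables

# Lasso path via LARS; coefs[:, i] is the solution at the i-th knot.
alphas, _, coefs = lars_path(X, y, method="lasso")
T1 = y @ (X @ coefs[:, 1]) / sigma ** 2     # <y, X beta_hat(lambda_2)> / sigma^2
print(T1)   # approximately Exp(1) under the null, per the paper
```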

Statistics Theory • Methodology
