Search Results for author: Robert Tibshirani

Found 27 papers, 13 papers with code

Semi-supervised Cooperative Learning for Multiomics Data Fusion

no code implementations · 2 Aug 2023 · Daisy Yi Ding, Xiaotao Shen, Michael Snyder, Robert Tibshirani

Multiomics data fusion integrates diverse data modalities, ranging from transcriptomics to proteomics, to gain a comprehensive understanding of biological systems and enhance predictions on outcomes of interest related to disease phenotypes and treatment responses.

FastCPH: Efficient Survival Analysis for Neural Networks

2 code implementations · 21 Aug 2022 · Xuelin Yang, Louis Abraham, Sejin Kim, Petr Smirnov, Feng Ruan, Benjamin Haibe-Kains, Robert Tibshirani

The Cox proportional hazards model is a canonical method in survival analysis for prediction of the life expectancy of a patient given clinical or genetic covariates -- it is a linear model in its original form.

Survival Analysis
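
The Cox partial likelihood that FastCPH accelerates is compact enough to write out directly. Below is a minimal NumPy sketch of the negative log partial likelihood with Breslow-style handling of ties; it illustrates the standard quantity only and is not the authors' implementation (function and variable names are our own).

```python
import numpy as np

def cox_neg_log_partial_likelihood(beta, X, time, event):
    """Negative log partial likelihood of the Cox model (Breslow ties).

    X: (n, p) covariates; time: (n,) observed times; event: (n,) 1 if failure observed.
    """
    order = np.argsort(-time)             # sort by descending time
    X, event = X[order], event[order]
    eta = X @ beta                        # linear predictor
    # running log-sum-exp over the risk set: all subjects with time >= t_i
    log_risk = np.logaddexp.accumulate(eta)
    return -np.sum(event * (eta - log_risk))

# toy data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
time = rng.exponential(size=100)
event = rng.integers(0, 2, size=100)
val = cox_neg_log_partial_likelihood(np.zeros(3), X, time, event)
```

Sorting by descending time lets a single cumulative log-sum-exp compute every risk-set denominator in O(n) after the sort.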

Confidence intervals for the Cox model test error from cross-validation

1 code implementation · 26 Jan 2022 · Min Woo Sun, Robert Tibshirani

Cross-validation (CV) is one of the most widely used techniques in statistical learning for estimating the test error of a model, but its behavior is not yet fully understood.
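
The CV point estimate itself is routine; the paper's subject is the behavior of intervals built around it. A minimal sketch of k-fold CV for ordinary least squares, reporting the naive standard error whose coverage is the question at issue (illustrative only, not the authors' code):

```python
import numpy as np

def kfold_cv_mse(X, y, k=5, seed=0):
    """Estimate test MSE of OLS by k-fold cross-validation.

    Returns the point estimate and the naive standard error
    (std of fold errors / sqrt(k)), whose coverage is debatable.
    """
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return np.mean(errs), np.std(errs) / np.sqrt(k)

# toy data: noise variance 1, so test MSE should be near 1
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1., 0., 0., 2., 0.]) + rng.normal(size=200)
est, se = kfold_cv_mse(X, y)
```

The fold errors share training data, so they are correlated; that correlation is exactly why the naive standard error above can understate the true variability.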

Cooperative learning for multiview analysis

no code implementations · 23 Dec 2021 · Daisy Yi Ding, Shuangning Li, Balasubramanian Narasimhan, Robert Tibshirani

Leveraging aligned signals and allowing flexible fitting mechanisms for different modalities, cooperative learning offers a powerful approach to multiomics data fusion.

Cross-validation: what does it estimate and how well does it do it?

2 code implementations · 1 Apr 2021 · Stephen Bates, Trevor Hastie, Robert Tibshirani

Cross-validation is a widely-used technique to estimate prediction error, but its behavior is complex and not fully understood.

Feature-weighted elastic net: using "features of features" for better prediction

1 code implementation · 2 Jun 2020 · J. Kenneth Tay, Nima Aghaeepour, Trevor Hastie, Robert Tibshirani

In some supervised learning settings, the practitioner might have additional information on the features used for prediction.

feature selection · Multi-Task Learning

Reluctant generalized additive modeling

no code implementations · 4 Dec 2019 · J. Kenneth Tay, Robert Tibshirani

Sparse generalized additive models (GAMs) are an extension of sparse generalized linear models which allow a model's prediction to vary non-linearly with an input variable.

Additive models

LassoNet: A Neural Network with Feature Sparsity

2 code implementations · 29 Jul 2019 · Ismael Lemhadri, Feng Ruan, Louis Abraham, Robert Tibshirani

Unlike other approaches to feature selection for neural nets, our method uses a modified objective function with constraints, and so integrates feature selection with the parameter learning directly.

feature selection · regression

Spectral Overlap and a Comparison of Parameter-Free, Dimensionality Reduction Quality Metrics

1 code implementation · 3 Jul 2019 · Jonathan Johannemann, Robert Tibshirani

Nonlinear dimensionality reduction methods are a popular tool for data scientists and researchers to visualize complex, high-dimensional data.

Dimensionality Reduction · Hyperparameter Optimization

Principal component-guided sparse regression

no code implementations · 10 Oct 2018 · J. Kenneth Tay, Jerome Friedman, Robert Tibshirani

We propose a new method for supervised learning, especially suited to wide data where the number of features is much greater than the number of observations.


A comparison of methods for model selection when estimating individual treatment effects

3 code implementations · 14 Apr 2018 · Alejandro Schuler, Michael Baiocchi, Robert Tibshirani, Nigam Shah

Instead of relying on a single method, multiple models fit by a diverse set of algorithms should be evaluated against each other using an objective function learned from the validation set.

Model Selection

A Pliable Lasso

1 code implementation · 1 Dec 2017 · Robert Tibshirani, Jerome Friedman

We propose a generalization of the lasso that allows the model coefficients to vary as a function of a general set of modifying variables.


Synth-Validation: Selecting the Best Causal Inference Method for a Given Dataset

no code implementations · 31 Oct 2017 · Alejandro Schuler, Ken Jung, Robert Tibshirani, Trevor Hastie, Nigam Shah

Using simulations, we show that using synth-validation to select a causal inference method for each study lowers the expected estimation error relative to consistently using any single method.

Causal Inference

Extended Comparisons of Best Subset Selection, Forward Stepwise Selection, and the Lasso

1 code implementation · 27 Jul 2017 · Trevor Hastie, Robert Tibshirani, Ryan J. Tibshirani

In exciting new work, Bertsimas et al. (2016) showed that the classical best subset selection problem in regression modeling can be formulated as a mixed integer optimization (MIO) problem.

Methodology · Computation
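
For reference, the lasso in comparisons like this one is typically fit by cyclic coordinate descent with soft-thresholding, the approach used in glmnet. A minimal illustrative sketch of that algorithm (not the glmnet implementation; names are our own):

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso for (1/(2n))||y - Xb||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(0)
    r = y - X @ beta                      # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]        # add back coordinate j's contribution
            beta[j] = soft_threshold(X[:, j] @ r, n * lam) / col_sq[j]
            r -= X[:, j] * beta[j]        # remove the updated contribution
    return beta

# toy example: two true signals among ten features
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[5] = 3.0, -2.0
y = X @ beta_true + rng.normal(size=100)
beta_hat = lasso_cd(X, y, lam=0.1)
```

Maintaining the residual incrementally makes each coordinate update O(n), which is what makes the lasso cheap relative to mixed-integer best subset.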

Some methods for heterogeneous treatment effect estimation in high-dimensions

1 code implementation · 1 Jul 2017 · Scott Powers, Junyang Qian, Kenneth Jung, Alejandro Schuler, Nigam H. Shah, Trevor Hastie, Robert Tibshirani

When devising a course of treatment for a patient, doctors often have little quantitative evidence on which to base their decisions, beyond their medical education and published clinical trials.


Nuclear penalized multinomial regression with an application to predicting at bat outcomes in baseball

1 code implementation · 30 Jun 2017 · Scott Powers, Trevor Hastie, Robert Tibshirani

We propose the nuclear norm penalty as an alternative to the ridge penalty for regularized multinomial regression.
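
The nuclear norm of a coefficient matrix is the sum of its singular values, and its proximal operator, the building block of proximal-gradient fitting for such penalized models, soft-thresholds those singular values. A small illustrative sketch of both quantities (not the authors' code):

```python
import numpy as np

def nuclear_norm(B):
    """Nuclear norm: sum of the singular values of B."""
    return np.linalg.svd(B, compute_uv=False).sum()

def prox_nuclear(B, lam):
    """Proximal operator of lam * nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
```

Unlike the ridge penalty, which shrinks entries independently, singular-value thresholding shrinks the matrix toward low rank, encouraging shared structure across the multinomial classes.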


Sparse canonical correlation analysis

no code implementations · 30 May 2017 · Xiaotong Suo, Victor Minden, Bradley Nelson, Robert Tibshirani, Michael Saunders

Canonical correlation analysis was proposed by Hotelling [6]; it measures the linear relationship between two multidimensional variables.
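
Classical CCA can be computed stably from QR factorizations of the centered data matrices followed by an SVD. A minimal sketch of the leading canonical correlation, the unpenalized quantity that the sparse variant regularizes (illustrative only, not the authors' code):

```python
import numpy as np

def cca_first_pair(X, Y):
    """Leading canonical correlation between column-centered X and Y."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    qx, _ = np.linalg.qr(X)               # orthonormal basis for col(X)
    qy, _ = np.linalg.qr(Y)               # orthonormal basis for col(Y)
    # singular values of qx^T qy are the canonical correlations
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return min(s[0], 1.0)                 # guard against roundoff above 1

# if Y lies in the column space of X, the leading correlation is 1
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
Y = X @ rng.normal(size=(3, 2))
r = cca_first_pair(X, Y)
```

The QR step is what breaks down when variables outnumber observations, which is one motivation for the sparse formulation.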

High-dimensional regression adjustments in randomized experiments

no code implementations · 22 Jul 2016 · Stefan Wager, Wenfei Du, Jonathan Taylor, Robert Tibshirani

We study the problem of treatment effect estimation in randomized experiments with high-dimensional covariate information, and show that essentially any risk-consistent regression adjustment can be used to obtain efficient estimates of the average treatment effect.

regression

Selective Sequential Model Selection

no code implementations · 8 Dec 2015 · William Fithian, Jonathan Taylor, Robert Tibshirani, Ryan Tibshirani

Extending the selected-model tests of Fithian et al. (2014), we construct p-values for each step in the path which account for the adaptive selection of the model path using the data.

Model Selection · regression

An Ordered Lasso and Sparse Time-Lagged Regression

no code implementations · 26 May 2014 · Xiaotong Suo, Robert Tibshirani

We consider regression scenarios where it is natural to impose an order constraint on the coefficients.

regression · Time Series

Collaborative Regression

no code implementations · 22 Jan 2014 · Samuel M. Gross, Robert Tibshirani

We propose a method for performing sparse supervised canonical correlation analysis (sparse sCCA), a specific case of sparse mCCA when one of the datasets is a vector.


Exact Post-Selection Inference for Sequential Regression Procedures

1 code implementation · 16 Jan 2014 · Ryan J. Tibshirani, Jonathan Taylor, Richard Lockhart, Robert Tibshirani

We propose new inference tools for forward stepwise regression, least angle regression, and the lasso.

Methodology · 62F03, 62G15

A Component Lasso

no code implementations · 18 Nov 2013 · Nadine Hussami, Robert Tibshirani

We propose a new sparse regression method called the component lasso, based on a simple idea.


A significance test for the lasso

no code implementations · 30 Jan 2013 · Richard Lockhart, Jonathan Taylor, Ryan J. Tibshirani, Robert Tibshirani

We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an $\operatorname {Exp}(1)$ asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model).

Statistics Theory · Methodology

A lasso for hierarchical interactions

no code implementations · 22 May 2012 · Jacob Bien, Jonathan Taylor, Robert Tibshirani

We add a set of convex constraints to the lasso to produce sparse interaction models that honor the hierarchy restriction that an interaction only be included in a model if one or both variables are marginally important.
