1 code implementation • 15 Sep 2024 • Parth T. Nobel, Daniel LeJeune, Emmanuel J. Candès
Estimating out-of-sample risk for models trained on large high-dimensional datasets is an expensive but essential part of the machine learning process, enabling practitioners to optimally tune hyperparameters.
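The snippet above motivates the problem rather than the method, but a classical building block in this area is the leave-one-out (LOO) shortcut for ridge regression, which recovers all n held-out residuals from a single fit via the diagonal of the smoother matrix. A minimal numpy sketch (the data, penalty, and ridge setup are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 50, 1.0

X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

# Ridge fit: beta = (X^T X + lam I)^{-1} X^T y
G = X.T @ X + lam * np.eye(p)
beta = np.linalg.solve(G, X.T @ y)

# Diagonal of the smoother matrix: h_ii = x_i^T G^{-1} x_i
H_diag = np.einsum("ij,ji->i", X, np.linalg.solve(G, X.T))

# All n leave-one-out residuals from this single fit
resid = y - X @ beta
loo_resid = resid / (1.0 - H_diag)
print("LOO risk estimate:", np.mean(loo_resid**2))
```

Approximate leave-one-out methods in this family scale the same idea to models where forming the exact smoother diagonal is too expensive.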
no code implementations • 9 May 2024 • Michał Dereziński, Daniel LeJeune, Deanna Needell, Elizaveta Rebrova
While effective in practice, iterative methods for solving large systems of linear equations can be significantly affected by problem-dependent condition number quantities.
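To see that sensitivity concretely (a toy illustration, not drawn from the paper), a hand-rolled conjugate gradient solver needs far more iterations on an ill-conditioned symmetric positive definite system than on a well-conditioned one of the same size:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=10_000):
    """Plain CG for a symmetric positive definite A; returns (x, iterations)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(0)
n = 500
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
b = rng.standard_normal(n)

for kappa in (10, 1e6):  # target condition numbers
    eigs = np.geomspace(1.0, kappa, n)
    A = (Q * eigs) @ Q.T  # SPD matrix with spectrum `eigs`
    _, iters = conjugate_gradient(A, b)
    print(f"cond ~ {kappa:.0e}: {iters} CG iterations")
```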
1 code implementation • 6 Oct 2023 • Pratik Patil, Daniel LeJeune
We also propose an "ensemble trick" whereby the risk for unsketched ridge regression can be efficiently estimated via GCV using small sketched ridge ensembles.
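For context, generalized cross-validation (GCV) estimates ridge risk from a single fit using the trace of the smoother matrix. The sketch below computes plain, unsketched GCV in numpy; the paper's ensemble trick, which instead averages over small sketched ridge fits, is not reproduced here.

```python
import numpy as np

def gcv_ridge(X, y, lam):
    """GCV estimate of prediction risk for ridge regression with penalty lam."""
    n, p = X.shape
    S_core = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # (p, n)
    y_hat = X @ (S_core @ y)
    df = np.trace(X @ S_core)  # effective degrees of freedom tr(S)
    return np.mean((y - y_hat) ** 2) / (1 - df / n) ** 2

rng = np.random.default_rng(1)
n, p = 300, 100
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) * 0.2 + rng.standard_normal(n)

for lam in (0.1, 1.0, 10.0):
    print(f"lambda={lam:5.1f}  GCV={gcv_ridge(X, y, lam):.4f}")
```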
1 code implementation • 29 Aug 2023 • Daniel LeJeune, Sina Alemohammad
To better understand feature learning in neural networks, we propose a framework for analyzing linear models in tangent feature space, where the features are allowed to be transformed during training.
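Concretely, the tangent features of a network f(x; theta) at parameters theta0 are the gradients phi(x) = grad_theta f(x; theta0), and the associated linear model is f(x; theta0) + phi(x)^T (theta - theta0). A toy sketch computing these features by finite differences (the two-layer architecture and step size are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 16  # input dimension, hidden width

# Toy two-layer network f(x; theta), with theta = flat [W (m*d), v (m)]
def f(x, theta):
    W = theta[: m * d].reshape(m, d)
    v = theta[m * d :]
    return v @ np.tanh(W @ x)

def tangent_features(x, theta, eps=1e-6):
    """phi(x) = grad_theta f(x; theta), via central finite differences."""
    phi = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = eps
        phi[j] = (f(x, theta + e) - f(x, theta - e)) / (2 * eps)
    return phi

theta0 = rng.standard_normal(m * d + m) / np.sqrt(m)
x = rng.standard_normal(d)
phi = tangent_features(x, theta0)

# Linearized (tangent-space) prediction at nearby parameters theta
theta = theta0 + 1e-3 * rng.standard_normal(theta0.size)
lin_pred = f(x, theta0) + phi @ (theta - theta0)
print("network:", f(x, theta), " linearized:", lin_pred)
```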
no code implementations • 4 Jul 2023 • Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk
Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models.
2 code implementations • CVPR 2023 • Vishwanath Saragadam, Daniel LeJeune, Jasper Tan, Guha Balakrishnan, Ashok Veeraraghavan, Richard G. Baraniuk
Implicit neural representations (INRs) have recently advanced numerous vision-related areas.
1 code implementation • 1 Nov 2022 • Lorenzo Luzi, Daniel LeJeune, Ali Siahkoohi, Sina Alemohammad, Vishwanath Saragadam, Hossein Babaei, Naiming Liu, Zichao Wang, Richard G. Baraniuk
We study the interpolation capabilities of implicit neural representations (INRs) of images.
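For readers unfamiliar with the term: an INR of an image is a small network mapping pixel coordinates to intensities, trained to reproduce a single image, which can then be queried at arbitrary continuous coordinates. A deliberately simplified stand-in (random Fourier features plus a linear readout instead of a full MLP; all sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 32x32 grayscale "image" to represent implicitly
H = W = 32
yy, xx = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
image = np.sin(3 * xx) * np.cos(2 * yy)

# Random Fourier features of (y, x) coordinates, then a linear readout.
B = rng.normal(scale=3.0, size=(2, 128))

def features(coords):  # coords: (N, 2) in [-1, 1]^2
    proj = coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

coords = np.stack([yy.ravel(), xx.ravel()], axis=1)
Phi = features(coords)
w, *_ = np.linalg.lstsq(Phi, image.ravel(), rcond=None)  # fit the INR

# Query the INR off the pixel grid: continuous coordinates in, values out
query = np.array([[0.05, -0.3], [0.517, 0.221]])
print("INR values at off-grid points:", features(query) @ w)
```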
1 code implementation • 20 Oct 2022 • Daniel LeJeune, Jiayu Liu, Reinhard Heckel
Machine learning systems are often applied to data that is drawn from a different distribution than the training distribution.
no code implementations • 27 May 2022 • Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk
Is overparameterization a privacy liability?
1 code implementation • NeurIPS 2021 • Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk
Among the most successful methods for sparsifying deep (neural) networks are those that adaptively mask the network weights throughout training.
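As a generic illustration of that idea (not the particular adaptive scheme analyzed in the paper), the sketch below trains a linear model with SGD while periodically recomputing a top-k magnitude mask and zeroing the remaining weights; between refreshes, training is dense so that pruned weights can regrow:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 100, 10  # samples, weights, weights kept by the mask

X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:k] = rng.standard_normal(k) + 2.0
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(p)
lr = 0.01
for step in range(5000):
    i = rng.integers(n)
    w -= lr * (X[i] @ w - y[i]) * X[i]  # SGD step on the squared loss
    if step % 100 == 99:
        # Adaptive masking: keep only the k largest-magnitude weights
        keep = np.argsort(np.abs(w))[-k:]
        mask = np.zeros(p)
        mask[keep] = 1.0
        w *= mask

print("support of the masked model:", np.sort(np.flatnonzero(w)))
print("true support:               ", np.sort(np.flatnonzero(w_true)))
```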
1 code implementation • 15 Mar 2021 • Pavan K. Kota, Daniel LeJeune, Rebekah A. Drezek, Richard G. Baraniuk
Here, we present the first exploration of the multiple measurement vector (MMV) problem in which signals are independently drawn from a sparse, multivariate Poisson distribution.
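In this model, every column of the signal matrix shares one sparse support, and the entries on that support are Poisson-distributed. A hedged sketch of generating such data and its linear measurements (all dimensions and rates are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, T, M = 100, 5, 20, 30  # signal dim, sparsity, # vectors, # measurements

# Shared sparse support with per-coordinate Poisson rates
support = rng.choice(N, size=k, replace=False)
rates = np.zeros(N)
rates[support] = rng.uniform(1.0, 5.0, size=k)

# X: T signal vectors drawn independently, Poisson on the shared support
X = rng.poisson(np.tile(rates[:, None], (1, T)))

# MMV observation model: Y = A X with a common sensing matrix A
A = rng.standard_normal((M, N)) / np.sqrt(M)
Y = A @ X

print("true support:", np.sort(support))
print("measurement matrix Y shape:", Y.shape)
```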
no code implementations • 9 Mar 2021 • Yehuda Dar, Daniel LeJeune, Richard G. Baraniuk
We formulate transfer learning for the target task as linear regression with a regularization on the distance between the to-be-learned target parameters and the already-learned source parameters.
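Written out, the objective is min_beta ||y - X beta||^2 + lam ||beta - beta_src||^2, with closed form beta_hat = (X^T X + lam I)^{-1} (X^T y + lam beta_src); setting beta_src = 0 recovers ordinary ridge regression. A quick numpy check (problem sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 80, 40, 5.0

beta_src = rng.standard_normal(p)                    # already-learned source params
beta_tgt = beta_src + 0.3 * rng.standard_normal(p)   # nearby target params

X = rng.standard_normal((n, p))
y = X @ beta_tgt + 0.1 * rng.standard_normal(n)

# Closed form for min_b ||y - Xb||^2 + lam * ||b - beta_src||^2
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * beta_src)
ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)  # beta_src = 0

print("error, source-regularized fit:", np.linalg.norm(beta_hat - beta_tgt))
print("error, plain ridge:           ", np.linalg.norm(ridge - beta_tgt))
```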
1 code implementation • 27 Oct 2020 • Sina Alemohammad, Hossein Babaei, Randall Balestriero, Matt Y. Cheung, Ahmed Imtiaz Humayun, Daniel LeJeune, Naiming Liu, Lorenzo Luzi, Jasper Tan, Zichao Wang, Richard G. Baraniuk
High dimensionality poses many challenges to the use of data, from visualization and interpretation, to prediction and storage for historical preservation.
1 code implementation • 10 Oct 2019 • Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk
Ensemble methods that average over a collection of independent predictors, each limited to a subsample of both the examples and the features of the training data, command a significant presence in machine learning, with the ever-popular random forest a prime example; yet the nature of the subsampling effect, particularly of the features, is not well understood.
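A minimal version of such an ensemble, using least-squares base predictors in place of trees purely for illustration: each member fits on a random subset of rows and columns, and predictions are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 60
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)
X_test = rng.standard_normal((100, p))

def subsample_ensemble_predict(X, y, X_test, n_members=50,
                               row_frac=0.6, col_frac=0.5):
    n, p = X.shape
    preds = np.zeros(X_test.shape[0])
    for _ in range(n_members):
        rows = rng.choice(n, size=int(row_frac * n), replace=False)
        cols = rng.choice(p, size=int(col_frac * p), replace=False)
        # Least-squares fit on the (rows, cols) submatrix only
        b, *_ = np.linalg.lstsq(X[np.ix_(rows, cols)], y[rows], rcond=None)
        preds += X_test[:, cols] @ b
    return preds / n_members  # average over ensemble members

y_pred = subsample_ensemble_predict(X, y, X_test)
print("test MSE:", np.mean((y_pred - X_test @ beta) ** 2))
```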
no code implementations • 28 May 2019 • Daniel LeJeune, Randall Balestriero, Hamid Javadi, Richard G. Baraniuk
Deep (neural) networks have been applied productively in a wide range of supervised and unsupervised learning tasks.
1 code implementation • 22 May 2019 • Daniel LeJeune, Gautam Dasarathy, Richard G. Baraniuk
The main goal is to efficiently identify, in a multi-armed bandit problem, the subset of arms whose means are above a specified threshold.
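One simple anytime strategy for this problem, in the spirit of the APT algorithm of Locatelli et al. (2016) rather than necessarily the allocation rule proposed in this paper, is to repeatedly pull the arm whose empirical mean is least confidently on either side of the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.1, 0.35, 0.45, 0.55, 0.9])  # unknown arm means
tau, budget = 0.5, 5000                          # threshold, total pulls
K = means.size

counts = np.ones(K)              # initialize: pull each arm once
sums = rng.normal(means, 0.5)
for _ in range(budget - K):
    mu_hat = sums / counts
    # APT-style index: small when an arm is close to tau and under-sampled
    index = np.sqrt(counts) * np.abs(mu_hat - tau)
    i = np.argmin(index)         # pull the most ambiguous arm
    counts[i] += 1
    sums[i] += rng.normal(means[i], 0.5)

above = np.flatnonzero(sums / counts > tau)
print("arms declared above the threshold:", above)  # means above tau: arms 3, 4
```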
1 code implementation • 25 Feb 2019 • Daniel LeJeune, Richard G. Baraniuk, Reinhard Heckel
Algorithms often carry out equally many computations for "easy" and "hard" problem instances.
1 code implementation • ICML 2018 • Amirali Aghazadeh, Ryan Spring, Daniel LeJeune, Gautam Dasarathy, Anshumali Shrivastava, Richard G. Baraniuk
We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.
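As I understand MISSION, the central idea is to run SGD while storing the enormous weight vector in a Count-Sketch and reading off heavy hitters as the selected features; the toy below sketches that mechanism at small scale (hash sizes, learning rate, and data are all invented, and real implementations are considerably more careful):

```python
import numpy as np

rng = np.random.default_rng(0)
D, k, R, W = 10_000, 5, 3, 256  # feature dim, sparsity, sketch rows, sketch width

# Planted sparse ground truth
true_feats = rng.choice(D, size=k, replace=False)
w_true = rng.uniform(1.0, 2.0, size=k)
noise_pool = np.setdiff1d(np.arange(D), true_feats)

# Count-Sketch storage for the D-dimensional weight vector
h = rng.integers(0, W, size=(R, D))       # bucket hash per row
s = rng.choice([-1.0, 1.0], size=(R, D))  # sign hash per row
sketch = np.zeros((R, W))

def cs_read(j):
    """Median-of-rows estimate of weight j from the Count-Sketch."""
    return np.median(sketch[np.arange(R), h[:, j]] * s[:, j])

lr = 0.05
for _ in range(3000):
    # Each sample activates the signal features plus 15 random noise features
    idx = np.concatenate([true_feats,
                          rng.choice(noise_pool, size=15, replace=False)])
    x = rng.standard_normal(idx.size)
    y = x[:k] @ w_true + 0.01 * rng.standard_normal()
    y_hat = sum(cs_read(j) * xj for j, xj in zip(idx, x))
    g = y_hat - y                          # squared-loss residual
    for j, xj in zip(idx, x):              # SGD update written into the sketch
        sketch[np.arange(R), h[:, j]] -= lr * g * xj * s[:, j]

# "Feature selection": the k heaviest coordinates read back from the sketch
est = np.array([cs_read(j) for j in range(D)])
selected = np.sort(np.argsort(np.abs(est))[-k:])
print("selected:", selected)
print("true:    ", np.sort(true_feats))
```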