1 code implementation • 22 Feb 2023 • Junwen Yao, N. Benjamin Erichson, Miles E. Lopes
Three key advantages of this approach are: (1) The error estimates are specific to the problem at hand, avoiding the pessimism of worst-case bounds.
no code implementations • 19 Apr 2021 • Aydin Buluc, Tamara G. Kolda, Stefan M. Wild, Mihai Anitescu, Anthony DeGennaro, John Jakeman, Chandrika Kamath, Ramakrishnan Kannan, Miles E. Lopes, Per-Gunnar Martinsson, Kary Myers, Jelani Nelson, Juan M. Restrepo, C. Seshadhri, Draguna Vrabie, Brendt Wohlberg, Stephen J. Wright, Chao Yang, Peter Zwart
Randomized algorithms have propelled advances in artificial intelligence and represent a foundational research area in advancing AI for Science.
no code implementations • 10 Mar 2020 • Miles E. Lopes, N. Benjamin Erichson, Michael W. Mahoney
To compute fast approximations to the singular value decomposition (SVD) of very large matrices, randomized sketching algorithms have become a leading approach.
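As a rough illustration of the kind of sketching algorithm referred to here, the following is a minimal numpy sketch of a standard randomized SVD (Gaussian test matrix plus QR, in the Halko–Martinsson–Tropp style); it is not necessarily the exact variant studied in the paper, and all names are illustrative.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD of A via a Gaussian sketch (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sketch the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for range(A @ Omega)
    # Project A onto the small basis and take an exact SVD of the small matrix.
    B = Q.T @ A                               # (k + oversample) x n
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Uhat[:, :k], s[:k], Vt[:k, :]

# Usage: approximate the top-5 singular values of a 500 x 300 low-rank matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
U, s, Vt = randomized_svd(A, k=5)
```

The cost is dominated by the two passes over A (the sketch and the projection), which is why this approach scales to matrices where a full SVD is infeasible.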
no code implementations • 4 Aug 2019 • Miles E. Lopes, Suofei Wu, Thomas C. M. Lee
When randomized ensemble methods such as bagging and random forests are implemented, a basic question arises: Is the ensemble large enough?
no code implementations • 20 Jul 2019 • Miles E. Lopes
Because bagging and random forests are randomized algorithms, the choice of ensemble size is closely related to the notion of "algorithmic variance" (i.e., the variance of prediction error due only to the training algorithm).
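The notion of algorithmic variance can be made concrete with a toy experiment: hold the training data fixed, rerun a bagged predictor many times with fresh bootstrap randomness, and measure the variance of its output across runs. The sketch below (assumed names, a deliberately simple bagged-mean predictor rather than a real random forest) shows this variance shrinking as the ensemble grows.

```python
import numpy as np

def bagged_mean_prediction(y, B, rng):
    """A toy bagged predictor: average of B bootstrap-sample means of y."""
    n = len(y)
    return np.mean([y[rng.integers(0, n, n)].mean() for _ in range(B)])

def algorithmic_variance(y, B, n_runs=200, seed=0):
    """Variance of the ensemble output across independent runs on the SAME data,
    i.e. variance due only to the randomized training algorithm."""
    rng = np.random.default_rng(seed)
    return np.var([bagged_mean_prediction(y, B, rng) for _ in range(n_runs)])

rng = np.random.default_rng(2)
y = rng.standard_normal(100)          # one fixed training sample
v_small = algorithmic_variance(y, B=5)
v_large = algorithmic_variance(y, B=50)
# Algorithmic variance decays roughly like 1/B as the ensemble size B grows,
# which is what makes "is the ensemble large enough?" a quantitative question.
```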
no code implementations • ICML 2018 • Miles E. Lopes, Shusen Wang, Michael W. Mahoney
As a more practical alternative, we propose a bootstrap method to compute a posteriori error estimates for randomized LS algorithms.
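In the spirit of that idea, here is a minimal numpy sketch of a posteriori bootstrap error estimation for a sketched least-squares solve: solve LS on a row sample, then resample the *sketched* rows to estimate the scale of the error fluctuation. This is an illustrative simplification, not the paper's exact procedure, and all function names are assumptions.

```python
import numpy as np

def bootstrap_error_estimate(As, bs, x_tilde, n_boot=100, q=0.95, seed=0):
    """Estimate the q-quantile of the sketched-LS error by resampling the
    sketched rows (a posteriori: uses only the sketch, not the full data)."""
    rng = np.random.default_rng(seed)
    t = As.shape[0]
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, t, t)                                # resample sketched rows
        x_boot = np.linalg.lstsq(As[idx], bs[idx], rcond=None)[0]  # re-solve small LS
        errs.append(np.linalg.norm(x_boot - x_tilde))
    return np.quantile(errs, q)

# Usage: sketch a 5000 x 10 LS problem down to 500 rows, solve, estimate error.
rng = np.random.default_rng(3)
A = rng.standard_normal((5000, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.1 * rng.standard_normal(5000)
idx = rng.integers(0, 5000, 500)
As, bs = A[idx], b[idx]                                    # the sketch
x_tilde = np.linalg.lstsq(As, bs, rcond=None)[0]           # sketched solution
err_bound = bootstrap_error_estimate(As, bs, x_tilde)
```

The key practical point is that the bootstrap loop works entirely on the small sketched system, so the error estimate costs only a modest multiple of the sketched solve itself.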
no code implementations • 6 Aug 2017 • Miles E. Lopes, Shusen Wang, Michael W. Mahoney
In recent years, randomized methods for numerical linear algebra have received growing interest as a general approach to large-scale problems.
no code implementations • NeurIPS 2014 • Miles E. Lopes
We study the residual bootstrap (RB) method in the context of high-dimensional linear regression.
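For readers unfamiliar with the residual bootstrap, here is a minimal sketch of the classical version in the low-dimensional OLS setting (the paper studies its behavior in the harder high-dimensional regime); names and the simple OLS setup are illustrative assumptions.

```python
import numpy as np

def residual_bootstrap_se(X, y, n_boot=500, seed=0):
    """Residual bootstrap standard errors for OLS coefficients:
    resample centered residuals, rebuild y, and refit."""
    rng = np.random.default_rng(seed)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    fitted = X @ beta_hat
    resid = y - fitted
    resid = resid - resid.mean()          # center residuals before resampling
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        y_star = fitted + rng.choice(resid, size=len(y), replace=True)
        boots[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]
    return beta_hat, boots.std(axis=0)

# Usage: on a well-posed problem the bootstrap SEs track the classical OLS SEs.
rng = np.random.default_rng(5)
X = rng.standard_normal((200, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.5 * rng.standard_normal(200)
beta_hat, se = residual_bootstrap_se(X, y)
```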
no code implementations • 25 Jul 2015 • Miles E. Lopes
This family interpolates between $\|x\|_0=s_0(x)$ and $\|x\|_1^2/\|x\|_2^2=s_2(x)$ as $q$ ranges over $[0, 2]$.
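The $q=2$ endpoint of this family, $s_2(x)=\|x\|_1^2/\|x\|_2^2$, is easy to compute and illustrates why it acts as a soft sparsity measure: it equals $\|x\|_0$ whenever the nonzero entries of $x$ share one magnitude, and by Cauchy–Schwarz it never exceeds $\|x\|_0$. A short check (function name is illustrative):

```python
import numpy as np

def s2(x):
    """Numerical sparsity s_2(x) = ||x||_1^2 / ||x||_2^2 (the q=2 endpoint)."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x)) ** 2 / np.sum(x ** 2)

# Flat 3-sparse vector: s_2 = 9/3 = 3 = number of nonzeros.
print(s2([1.0, -1.0, 1.0, 0.0, 0.0]))
# Fully dense constant vector in R^10: s_2 = 100/10 = 10.
print(s2(np.ones(10)))
```

Unlike $\|x\|_0$, the value of $s_2$ changes continuously as small entries shrink toward zero, which is what makes it usable as a stable surrogate for sparsity.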
no code implementations • 4 Mar 2013 • Miles E. Lopes
In the standard case when classifiers are aggregated by majority vote, the present work offers a way to quantify this convergence in terms of "algorithmic variance," i. e. the variance of prediction error due only to the randomized training algorithm.
no code implementations • NeurIPS 2011 • Miles E. Lopes, Laurent J. Jacob, Martin J. Wainwright
We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Specifically, we propose a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T^2 statistic.
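The construction can be sketched in a few lines: project both samples onto k random directions with k below the sample size, so the pooled covariance in the projected space is invertible even when p > n, then compute the usual Hotelling T^2 statistic there. This is an illustrative numpy sketch under assumed names, not the paper's exact test or its calibration.

```python
import numpy as np

def projected_hotelling_T2(X, Y, k, seed=0):
    """Hotelling T^2 computed after projecting both samples to k random
    directions; k < n keeps the pooled covariance invertible when p > n."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    P = rng.standard_normal((p, k)) / np.sqrt(k)     # random projection matrix
    Xp, Yp = X @ P, Y @ P
    n1, n2 = len(Xp), len(Yp)
    diff = Xp.mean(axis=0) - Yp.mean(axis=0)
    # Pooled sample covariance of the projected data (k x k, full rank).
    S = (np.cov(Xp.T, ddof=1) * (n1 - 1) + np.cov(Yp.T, ddof=1) * (n2 - 1)) / (n1 + n2 - 2)
    return (n1 * n2 / (n1 + n2)) * diff @ np.linalg.solve(S, diff)

# Usage: p = 200 exceeds n1 = n2 = 50, where classical T^2 is undefined.
rng = np.random.default_rng(4)
X = rng.standard_normal((50, 200))
Y = rng.standard_normal((50, 200)) + 0.5        # shifted mean in every coordinate
t_alt = projected_hotelling_T2(X, Y, k=5)       # shift present: large statistic
t_null = projected_hotelling_T2(X, rng.standard_normal((50, 200)), k=5)
```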