no code implementations • 26 Sep 2013 • Ravi Ganti, Alexander G. Gray
The design of this sampling distribution is also inspired by the analogy between active learning and multi-armed bandits.
no code implementations • NeurIPS 2012 • Nishant Mehta, Dongryeol Lee, Alexander G. Gray
We show theoretically that minimax MTL tends to avoid worst case outcomes on newly drawn test tasks in the learning to learn (LTL) test setting.
no code implementations • NeurIPS 2009 • Arkadas Ozakin, Alexander G. Gray
Kernel density estimation is the most widely-used practical method for accurate nonparametric density estimation.
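This is not the paper's estimator, just a minimal 1-D Gaussian kernel density estimate to illustrate the primitive the abstract refers to (the bandwidth value here is an arbitrary choice for illustration):

```python
import math

def gaussian_kde(samples, x, bandwidth):
    """Evaluate a 1-D Gaussian kernel density estimate at point x."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
    )

samples = [0.0, 0.5, 1.0, 1.5]
density = gaussian_kde(samples, 0.75, bandwidth=0.5)
```

The estimate is an average of Gaussian bumps centered on the samples, so it integrates to one over the real line.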
no code implementations • NeurIPS 2009 • Parikshit Ram, Dongryeol Lee, William March, Alexander G. Gray
Several key computational bottlenecks in machine learning involve pairwise distance computations, including all-nearest-neighbors (finding the nearest neighbor(s) for each point, e.g. in manifold learning) and kernel summations (e.g. in kernel density estimation or kernel machines).
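Not the paper's method, which accelerates these primitives with tree-based algorithms; this is just the naive O(n²) all-nearest-neighbors computation that such methods speed up:

```python
def all_nearest_neighbors(points):
    """Brute-force all-NN: for each point, the index of its nearest
    other point, via O(n^2) pairwise squared-distance comparisons."""
    nn = []
    for i, p in enumerate(points):
        best, best_d = None, float("inf")
        for j, q in enumerate(points):
            if i == j:
                continue
            d = sum((a - b) ** 2 for a, b in zip(p, q))
            if d < best_d:
                best, best_d = j, d
        nn.append(best)
    return nn
```

Tree-based (e.g. dual-tree) algorithms prune most of these pairwise comparisons using bounds between groups of points.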
no code implementations • NeurIPS 2009 • Parikshit Ram, Dongryeol Lee, Hua Ouyang, Alexander G. Gray
The long-standing problem of efficient nearest-neighbor (NN) search has ubiquitous applications ranging from astrophysics to MP3 fingerprinting to bioinformatics to movie recommendations.
no code implementations • NeurIPS 2008 • Michael P. Holmes, Charles Lee Isbell Jr., Alexander G. Gray
The Singular Value Decomposition is a key operation in many machine learning methods.
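As a generic illustration (not the paper's fast approximation algorithm), a truncated SVD gives the best low-rank approximation of a matrix in spectral norm:

```python
import numpy as np

# Rank-1 approximation of a small matrix via truncated SVD.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [2.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 1
# Keep only the top-k singular triplets.
A_k = U[:, :k] * s[:k] @ Vt[:k, :]
```

By the Eckart–Young theorem, the spectral-norm error of the rank-k truncation equals the (k+1)-th singular value.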
no code implementations • NeurIPS 2008 • Dongryeol Lee, Alexander G. Gray
We propose a new fast Gaussian summation algorithm with high accuracy for high-dimensional datasets.
no code implementations • 17 Jun 2020 • Parikshit Ram, Sijia Liu, Deepak Vijaykeerthi, Dakuo Wang, Djallel Bouneffouf, Greg Bramble, Horst Samulowitz, Alexander G. Gray
The CASH problem has been widely studied in the context of automated configuration of machine learning (ML) pipelines, and various solvers and toolkits are available.
no code implementations • ICML Workshop AutoML 2021 • Parikshit Ram, Alexander G. Gray, Horst Samulowitz
The tradeoffs in the excess risk incurred from data-driven learning of a single model have been studied by decomposing the excess risk into approximation, estimation and optimization errors.
no code implementations • 12 Jan 2023 • Parikshit Ram, Alexander G. Gray, Horst C. Samulowitz, Gregory Bramble
We show, to our knowledge, the first theoretical treatments of two common questions in cross-validation based hyperparameter selection: (1) After selecting the best hyperparameter using a held-out set, we train the final model using all of the training data; since this may or may not improve future generalization error, should one do this?
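A minimal sketch of the practice in question, not taken from the paper: select a ridge-regression penalty on a held-out set, then retrain the final model on all of the data (the dataset, split sizes, and penalty grid below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Split into training and held-out validation sets.
X_tr, y_tr = X[:80], y[:80]
X_val, y_val = X[80:], y[80:]

# Select the penalty that minimizes held-out error...
lams = [0.01, 0.1, 1.0, 10.0]
val_err = {lam: np.mean((X_val @ ridge_fit(X_tr, y_tr, lam) - y_val) ** 2)
           for lam in lams}
best_lam = min(val_err, key=val_err.get)

# ...then retrain the final model on all of the training data,
# which is the common practice whose effect the paper analyzes.
w_final = ridge_fit(X, y, best_lam)
```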
no code implementations • 2 May 2024 • Parikshit Ram, Tim Klinger, Alexander G. Gray
We then show how various existing general and special purpose sequence processing models (such as recurrent, convolutional and attention-based ones) fit this definition and use it to analyze their compositional complexity.
1 code implementation • 28 Feb 2012 • Parikshit Ram, Alexander G. Gray
Finally we present a new data structure for increasing the efficiency of the dual-tree algorithm.
1 code implementation • 20 Jun 2012 • Michael P. Holmes, Alexander G. Gray, Charles Lee Isbell
Conditional density estimation generalizes regression by modeling a full density f(y|x) rather than only the expected value E(y|x).
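Not the paper's estimator or its fast algorithm, just a simple kernel-based conditional density estimate of the kind being generalized, where f(y|x) is a kernel density in y weighted by kernel similarity in x (the bandwidths here are arbitrary):

```python
import math

def gauss(u, h):
    """1-D Gaussian kernel with bandwidth h."""
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))

def cond_density(data, x, y, hx, hy):
    """Kernel conditional density estimate:
    f(y|x) = sum_i K(x - x_i) K(y - y_i) / sum_i K(x - x_i)."""
    num = sum(gauss(x - xi, hx) * gauss(y - yi, hy) for xi, yi in data)
    den = sum(gauss(x - xi, hx) for xi, _ in data)
    return num / den

data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
fy = cond_density(data, 1.0, 1.0, hx=0.5, hy=0.5)
```

For any fixed x, the estimate integrates to one over y, so it is a proper conditional density.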