no code implementations • 3 Mar 2022 • Alex Gu, Songtao Lu, Parikshit Ram, Lily Weng
This paper is the first to propose a generic min-max bilevel multi-objective optimization framework, highlighting applications in representation learning and hyperparameter optimization.
no code implementations • 16 Feb 2022 • Yi Zhou, Parikshit Ram, Theodoros Salonidis, Nathalie Baracaldo, Horst Samulowitz, Heiko Ludwig
We address the relatively unexplored problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO).
no code implementations • 1 Feb 2022 • Michael Feffer, Martin Hirzel, Samuel C. Hoffman, Kiran Kate, Parikshit Ram, Avraham Shinnar
A popular approach to train more stable models is ensemble learning.
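The entry above names ensemble learning as a route to more stable models; a minimal bagging sketch (stdlib only, all names illustrative, each "model" is just a 1-nearest-neighbor predictor fit on a bootstrap resample) might look like:

```python
import random
import statistics

def bagged_predict(train, query, n_models=25, seed=0):
    # Bagging sketch: fit many weak predictors on bootstrap resamples of the
    # training data and average their outputs, which stabilizes the prediction.
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]          # bootstrap resample
        _, y = min(sample, key=lambda xy: abs(xy[0] - query))  # 1-NN "model"
        preds.append(y)
    return statistics.mean(preds)

train = [(0.0, 0.0), (1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 4.0)]
prediction = bagged_predict(train, 2.5)
```

With a fixed seed the ensemble is deterministic; increasing `n_models` reduces the variance of the averaged prediction.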
no code implementations • 15 Dec 2021 • Yi Zhou, Parikshit Ram, Theodoros Salonidis, Nathalie Baracaldo, Horst Samulowitz, Heiko Ludwig
We address the relatively unexplored problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO).
1 code implementation • 14 Dec 2021 • Parikshit Ram, Kaushik Sinha
The mathematical formalization of a neurological mechanism in the olfactory circuit of the fruit fly as a locality-sensitive hash (Flyhash) and Bloom filter (FBF) has recently been proposed and "reprogrammed" for various machine learning tasks such as similarity search, outlier detection, and text embeddings.
no code implementations • NeurIPS 2021 • Guillaume Baudart, Martin Hirzel, Kiran Kate, Parikshit Ram, Avraham Shinnar, Jason Tsay
Automated machine learning (AutoML) can make data scientists more productive.
1 code implementation • 15 Sep 2021 • Chen Fan, Parikshit Ram, Sijia Liu
The key enabling technique is to interpret MAML as a bilevel optimization (BLO) problem and leverage sign-based SGD (signSGD) as the lower-level optimizer of the BLO.
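As a sketch of that lower-level update: signSGD steps each parameter a fixed amount in the direction opposite the sign of its gradient, ignoring the gradient's magnitude (illustrative only, not the paper's full bilevel loop):

```python
def sign(g):
    # Returns -1, 0, or 1.
    return (g > 0) - (g < 0)

def signsgd_step(params, grads, lr=0.1):
    # sign-based SGD: the step size is lr for every coordinate; only the
    # sign of each gradient entry matters.
    return [p - lr * sign(g) for p, g in zip(params, grads)]

# Minimize f(w) = w0^2 + w1^2, whose gradient is (2*w0, 2*w1).
w = [1.0, -0.5]
for _ in range(20):
    w = signsgd_step(w, [2 * w[0], 2 * w[1]], lr=0.05)
```

Because every coordinate moves by exactly `lr`, the iterates approach the minimum and then oscillate within one step of it, which is the usual signSGD behavior without a decaying step size.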
1 code implementation • 2 Jul 2021 • Paulito P. Palmes, Akihiro Kishimoto, Radu Marinescu, Parikshit Ram, Elizabeth Daly
The pipeline optimization problem in machine learning requires simultaneous optimization of pipeline structures and parameter adaptation of their elements.
no code implementations • ICML Workshop AutoML 2021 • Parikshit Ram, Alexander G. Gray, Horst Samulowitz
The tradeoffs in the excess risk incurred from data-driven learning of a single model have been studied by decomposing the excess risk into approximation, estimation, and optimization errors.
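The decomposition referenced above, in its standard single-model form (following Bottou and Bousquet): with Bayes-optimal predictor $f^*$, best-in-class $f^*_{\mathcal{H}}$ over hypothesis class $\mathcal{H}$, empirical risk minimizer $f_n$, and the model $\tilde f_n$ actually returned by the optimizer, the excess risk telescopes as

```latex
\underbrace{R(\tilde f_n) - R(f^*)}_{\text{excess risk}}
  = \underbrace{R(f^*_{\mathcal{H}}) - R(f^*)}_{\text{approximation}}
  + \underbrace{R(f_n) - R(f^*_{\mathcal{H}})}_{\text{estimation}}
  + \underbrace{R(\tilde f_n) - R(f_n)}_{\text{optimization}}
```

This is a sketch of the classical decomposition; the paper's setting extends such tradeoffs beyond a single model.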
no code implementations • ICML Workshop AutoML 2021 • Akihiro Kishimoto, Djallel Bouneffouf, Radu Marinescu, Parikshit Ram, Ambrish Rawat, Martin Wistuba, Paulito Pedregosa Palmes, Adi Botea
Optimizing a machine learning (ML) pipeline has been an important topic of AI and ML.
1 code implementation • 25 Nov 2020 • Yunfei Teng, Anna Choromanska, Murray Campbell, Songtao Lu, Parikshit Ram, Lior Horesh
We study the principal directions of the trajectory of the optimizer after convergence and show that traveling along a few top principal directions can quickly bring the parameters outside the cone, but this is not the case for the remaining directions.
no code implementations • 29 Sep 2020 • Pu Zhao, Sijia Liu, Parikshit Ram, Songtao Lu, Yuguang Yao, Djallel Bouneffouf, Xue Lin
As novel contributions, we show that the use of LFT within MAML (i) offers the capability to tackle few-shot learning tasks by meta-learning across incongruous yet related problems and (ii) can efficiently work with first-order and derivative-free few-shot learning problems.
no code implementations • 19 Aug 2020 • Kaushik Sinha, Parikshit Ram
Inspired by the fruit-fly olfactory circuit, the Fly Bloom Filter [Dasgupta et al., 2018] is able to efficiently summarize the data with a single pass and has been used for novelty detection.
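A single-pass summary in the spirit of the Fly Bloom Filter can be sketched with stdlib Python. This is a simplification with illustrative names: the "filter" just records which hash bits ever fired on training data, and novelty is the fraction of a query's active bits never seen before:

```python
def summarize(codes, width):
    # One pass over the binary hash codes of the training data:
    # record every bit position that ever fired.
    seen = [0] * width
    for code in codes:
        for j, bit in enumerate(code):
            if bit:
                seen[j] = 1
    return seen

def novelty_score(code, seen):
    # Fraction of the query's active bits never observed during training;
    # 0.0 = fully familiar, 1.0 = entirely novel.
    active = [j for j, bit in enumerate(code) if bit]
    if not active:
        return 0.0
    return sum(1 - seen[j] for j in active) / len(active)

train_codes = [[1, 1, 0, 0, 0, 0], [0, 1, 1, 0, 0, 0]]
seen = summarize(train_codes, width=6)
```

A familiar code (all active bits previously seen) scores 0.0, while a code whose active bits are disjoint from the training data scores 1.0.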
1 code implementation • 4 Jul 2020 • Guillaume Baudart, Martin Hirzel, Kiran Kate, Parikshit Ram, Avraham Shinnar
Automated machine learning makes it easier for data scientists to develop pipelines by searching over possible choices for hyperparameters, algorithms, and even pipeline topologies.
no code implementations • 17 Jun 2020 • Parikshit Ram, Sijia Liu, Deepak Vijaykeerthi, Dakuo Wang, Djallel Bouneffouf, Greg Bramble, Horst Samulowitz, Alexander G. Gray
The CASH problem has been widely studied in the context of automated configurations of machine learning (ML) pipelines and various solvers and toolkits are available.
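CASH (Combined Algorithm Selection and Hyper-parameter optimization) jointly searches over which algorithm to use and how to configure it. A minimal random-search sketch (stdlib only; the search space and toy evaluator are illustrative stand-ins, not any particular toolkit's API):

```python
import random

def cash_random_search(search_space, evaluate, budget=50, seed=0):
    # Jointly sample an algorithm and one configuration from its space,
    # keeping the best (score, algorithm, config) triple seen in the budget.
    rng = random.Random(seed)
    best = None
    for _ in range(budget):
        algo = rng.choice(list(search_space))
        config = {name: rng.choice(vals)
                  for name, vals in search_space[algo].items()}
        score = evaluate(algo, config)
        if best is None or score > best[0]:
            best = (score, algo, config)
    return best

# Toy stand-in for cross-validated accuracy.
def evaluate(algo, config):
    base = {"knn": 0.80, "tree": 0.75}[algo]
    return base + 0.01 * config.get("k", 0) - 0.005 * config.get("depth", 0)

space = {"knn": {"k": [1, 3, 5]}, "tree": {"depth": [2, 4, 8]}}
score, algo, config = cash_random_search(space, evaluate, budget=50)
```

Real CASH solvers replace the uniform sampling with smarter search (Bayesian optimization, bandits), but the joint (algorithm, configuration) space is the same.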
no code implementations • 22 Oct 2019 • Charu Aggarwal, Djallel Bouneffouf, Horst Samulowitz, Beat Buesser, Thanh Hoang, Udayan Khurana, Sijia Liu, Tejaswini Pedapati, Parikshit Ram, Ambrish Rawat, Martin Wistuba, Alexander Gray
Data science is labor-intensive and human experts are scarce but heavily involved in every aspect of it.
no code implementations • 5 Sep 2019 • Dakuo Wang, Justin D. Weisz, Michael Muller, Parikshit Ram, Werner Geyer, Casey Dugan, Yla Tausczik, Horst Samulowitz, Alexander Gray
The rapid advancement of artificial intelligence (AI) is changing our lives in many ways.
2 code implementations • 24 May 2019 • Martin Hirzel, Kiran Kate, Avraham Shinnar, Subhrajit Roy, Parikshit Ram
Machine-learning automation tools, ranging from humble grid-search to hyperopt, auto-sklearn, and TPOT, help explore large search spaces of possible pipelines.
no code implementations • 1 May 2019 • Sijia Liu, Parikshit Ram, Deepak Vijaykeerthy, Djallel Bouneffouf, Gregory Bramble, Horst Samulowitz, Dakuo Wang, Andrew Conn, Alexander Gray
We study the AutoML problem of automatically configuring machine learning pipelines by jointly selecting algorithms and their appropriate hyper-parameters for all steps in supervised learning pipelines.
no code implementations • 21 Jan 2015 • Ryan R. Curtin, Dongryeol Lee, William B. March, Parikshit Ram
In this paper, we present a problem-independent runtime guarantee for any dual-tree algorithm using the cover tree, separating out the problem-dependent and the problem-independent elements.
no code implementations • NeurIPS 2013 • Parikshit Ram, Alexander Gray
We consider the task of nearest-neighbor search with the class of binary-space-partitioning trees, which includes kd-trees, principal-axis trees, and random projection trees, and try to rigorously answer the question: which tree to use for nearest-neighbor search?
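As a concrete instance of the tree family discussed: a minimal kd-tree with the standard branch-and-bound nearest-neighbor query (stdlib only; an illustrative sketch, not the paper's analysis):

```python
import math

def build_kdtree(points, depth=0):
    # Binary space partitioning: split on one coordinate axis per level,
    # cycling through the axes, with the median point stored at the node.
    if not points:
        return None
    axis = depth % len(points[0])
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"point": pts[mid], "axis": axis,
            "left": build_kdtree(pts[:mid], depth + 1),
            "right": build_kdtree(pts[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    if node is None:
        return best
    if best is None or math.dist(query, node["point"]) < math.dist(query, best):
        best = node["point"]
    axis = node["axis"]
    diff = query[axis] - node["point"][axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    # Prune: visit the far child only if the splitting plane is closer
    # than the best distance found so far.
    if abs(diff) < math.dist(query, best):
        best = nearest(far, query, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
```

The pruning test is what makes the search sub-linear in favorable cases; in adversarial cases it degrades toward a full scan, which is exactly the tradeoff the paper studies.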
1 code implementation • 28 Feb 2012 • Parikshit Ram, Alexander G. Gray
Finally we present a new data structure for increasing the efficiency of the dual-tree algorithm.
no code implementations • NeurIPS 2009 • Parikshit Ram, Dongryeol Lee, Hua Ouyang, Alexander G. Gray
The long-standing problem of efficient nearest-neighbor (NN) search has ubiquitous applications ranging from astrophysics to MP3 fingerprinting to bioinformatics to movie recommendations.
no code implementations • NeurIPS 2009 • Parikshit Ram, Dongryeol Lee, William March, Alexander G. Gray
Several key computational bottlenecks in machine learning involve pairwise distance computations, including all-nearest-neighbors (finding the nearest neighbor(s) for each point, e.g. in manifold learning) and kernel summations (e.g. in kernel density estimation or kernel machines).
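Both bottleneck computations named above can be written down naively in a few lines (quadratic-cost baselines, stdlib only; dual-tree methods accelerate exactly these loops):

```python
import math

def all_nearest_neighbors(points):
    # For each point, the index of its nearest *other* point (naive O(n^2)).
    return [min((j for j in range(len(points)) if j != i),
                key=lambda j: math.dist(points[i], points[j]))
            for i in range(len(points))]

def kernel_summation(queries, references, bandwidth=1.0):
    # Gaussian kernel density estimate at each query point (naive O(nm)).
    def k(u):
        return math.exp(-u * u / (2 * bandwidth * bandwidth))
    return [sum(k(math.dist(q, r)) for r in references) / len(references)
            for q in queries]

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
nn = all_nearest_neighbors(pts)  # → [1, 0, 1]
```

The dual-tree idea is to traverse space-partitioning trees over both point sets simultaneously, pruning whole pairs of subtrees instead of evaluating these double loops point by point.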