1 code implementation • 4 Sep 2024 • Sepanta Zeighami, Zac Wellmer, Aditya Parameswaran
Existing approaches either fine-tune the pre-trained model itself or, more efficiently but at the cost of accuracy, train adaptor models that transform the output of the pre-trained model.
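A minimal sketch of the adaptor approach mentioned above, assuming a frozen embedding encoder and a small learned linear map (all names and the training setup here are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class LinearAdaptor(nn.Module):
    """Learns a linear transform of frozen pre-trained embeddings.

    The pre-trained encoder stays fixed; only this small map is trained,
    which is cheap but can lose accuracy relative to full fine-tuning.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, frozen_embeddings: torch.Tensor) -> torch.Tensor:
        return self.proj(frozen_embeddings)

# Illustrative training step: pull query embeddings toward relevant
# document embeddings with a contrastive-style loss (hypothetical data).
adaptor = LinearAdaptor(dim=768)
optimizer = torch.optim.Adam(adaptor.parameters(), lr=1e-3)

queries = torch.randn(32, 768)    # stand-ins for frozen encoder outputs
positives = torch.randn(32, 768)  # stand-ins for relevant documents

optimizer.zero_grad()
adapted = adaptor(queries)
loss = -nn.functional.cosine_similarity(adapted, positives).mean()
loss.backward()
optimizer.step()
```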
no code implementations • 30 Mar 2021 • Doris Xin, Hui Miao, Aditya Parameswaran, Neoklis Polyzotis
Machine learning (ML) is now commonplace, powering data-driven applications in various organizations.
no code implementations • 13 Jan 2021 • Doris Xin, Eva Yiwei Wu, Doris Jung-Lin Lee, Niloufar Salehi, Aditya Parameswaran
Efforts to make machine learning more widely accessible have led to a rapid increase in AutoML tools that aim to automate the process of training and deploying machine learning models.
no code implementations • 4 May 2020 • Angela Lee, Doris Xin, Doris Lee, Aditya Parameswaran
It is well known that the process of developing machine learning (ML) workflows is a dark art; even experts struggle to find an optimal workflow that leads to a high-accuracy model.
no code implementations • 14 Dec 2018 • Doris Xin, Stephen Macke, Litian Ma, Jialin Liu, Shuchen Song, Aditya Parameswaran
Machine learning workflow development is a process of trial-and-error: developers iterate on workflows by testing out small modifications until the desired accuracy is achieved.
no code implementations • 3 Aug 2018 • Doris Xin, Litian Ma, Jialin Liu, Stephen Macke, Shuchen Song, Aditya Parameswaran
Data application developers and data scientists spend an inordinate amount of time iterating on machine learning (ML) workflows -- by modifying the data pre-processing, model training, and post-processing steps -- via trial-and-error to achieve the desired model performance.
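One common way to cut this trial-and-error cost, sketched below under assumed names, is to memoize intermediate pipeline results so that unchanged steps are reused across iterations (this illustrates the general idea of intermediate-result reuse, not the system's actual implementation):

```python
import hashlib
import pickle

_cache: dict[str, object] = {}

def cached_step(name: str, fn, *inputs):
    """Run a pipeline step, reusing its result if the step's code and
    inputs are unchanged since the previous iteration."""
    key = hashlib.sha256(
        pickle.dumps((name, fn.__code__.co_code, inputs))
    ).hexdigest()
    if key not in _cache:
        _cache[key] = fn(*inputs)  # only modified steps are recomputed
    return _cache[key]

# Iteration 1: everything runs. Iteration 2: if only post-processing
# changed, pre-processing and training results come from the cache.
def preprocess(data):
    return tuple(x * 2 for x in data)

def train(features):
    return sum(features) / len(features)  # toy "model"

raw = (1, 2, 3)
features = cached_step("preprocess", preprocess, raw)
model = cached_step("train", train, features)
```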
no code implementations • 27 Mar 2018 • Doris Xin, Litian Ma, Shuchen Song, Aditya Parameswaran
A quantitative characterization of iteration can serve as a benchmark for machine learning workflow development in practice, and can aid the development of human-in-the-loop machine learning systems.
no code implementations • ICLR 2019 • Yihan Gao, Chao Zhang, Jian Peng, Aditya Parameswaran
We provide both theoretical and empirical evidence to support this argument: (a) we prove that the generalization error of these methods can be bounded by limiting the norm of the embedding vectors, regardless of the embedding dimension; (b) we show that the generalization performance of linear graph embedding methods is correlated with the norm of the embedding vectors, which remains small due to the early stopping of SGD and vanishing gradients.
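The norm-based bound in (a) is in the spirit of standard Rademacher-complexity results; a textbook version (my reconstruction under generic assumptions, not the paper's exact statement) reads:

```latex
% For linear predictors f_w(x) = <w, x> with ||w|| <= B, inputs ||x|| <= R,
% and a 1-Lipschitz loss, the gap between population risk L(w) and
% empirical risk \hat{L}_n(w) over n samples is bounded independently
% of the embedding dimension d:
\[
  \mathbb{E}\Big[\sup_{\|w\| \le B} \; L(w) - \hat{L}_n(w)\Big]
  \;\le\; \frac{2BR}{\sqrt{n}}
\]
```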
no code implementations • 9 Jun 2015 • Yihan Gao, Aditya Parameswaran, Jian Peng
We study the interpretability of conditional probability estimates for binary classification in the agnostic setting.
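Interpretability of probability estimates is closely tied to calibration; a quick empirical calibration check by binning (a generic diagnostic, not the paper's method) looks like:

```python
import numpy as np

def calibration_by_binning(probs, labels, n_bins=10):
    """For each probability bin, compare the mean predicted probability
    to the empirical fraction of positives: a well-calibrated estimator
    has these two quantities close in every bin."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            rows.append((probs[mask].mean(), labels[mask].mean()))
    return rows  # list of (mean predicted prob, empirical frequency)

# Toy usage with noisy but calibrated predictions.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = (rng.uniform(size=1000) < p).astype(int)
print(calibration_by_binning(p, y))
```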
no code implementations • 15 Aug 2014 • Leilani Battle, Edward Benson, Aditya Parameswaran, Eugene Wu
We develop algorithms and indexes to support cost-sensitive prediction, i.e., making predictions with machine learning models while taking feature evaluation costs into account.
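A minimal sketch of the cost-sensitive idea under assumed names: greedily acquire the features with the highest estimated accuracy gain per unit cost, stopping when the budget is exhausted (an illustrative baseline, not the paper's algorithms or indexes):

```python
def select_features(gains, costs, budget):
    """Greedy cost-sensitive feature selection.

    gains[i]  -- estimated accuracy improvement from evaluating feature i
    costs[i]  -- cost of evaluating feature i (e.g., latency or dollars)
    budget    -- total evaluation cost we are willing to pay
    """
    # Rank features by benefit per unit cost.
    order = sorted(range(len(gains)),
                   key=lambda i: gains[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen

# Example: feature 2 is informative but too expensive for the budget,
# so the cheap features 0 and 1 are selected instead.
print(select_features(gains=[0.05, 0.02, 0.30],
                      costs=[1.0, 0.5, 20.0],
                      budget=5.0))
```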
no code implementations • NeurIPS 2012 • Nilesh Dalvi, Aditya Parameswaran, Vibhor Rastogi
In this paper, we consider the problem of asking the optimal set of queries to minimize the resulting output uncertainty.
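A common baseline for this kind of problem is greedy uncertainty reduction: at each step, ask the query with the largest expected drop in posterior entropy. The sketch below is a generic illustration (names and setup assumed), not the paper's optimal query-selection method:

```python
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

def expected_entropy_after(prior, answer_likelihoods):
    """Expected posterior entropy after asking one query.

    prior               -- P(world state), a list summing to 1
    answer_likelihoods  -- answer_likelihoods[a][s] = P(answer a | state s)
    """
    total = 0.0
    for likes in answer_likelihoods:
        p_answer = sum(l * p for l, p in zip(likes, prior))
        if p_answer == 0:
            continue
        posterior = [l * p / p_answer for l, p in zip(likes, prior)]
        total += p_answer * entropy(posterior)
    return total

# Greedily pick the query that leaves the least expected uncertainty.
prior = [0.5, 0.3, 0.2]
queries = {
    "q1": [[1, 0, 0], [0, 1, 1]],  # splits state 0 from states {1, 2}
    "q2": [[1, 1, 0], [0, 0, 1]],  # splits states {0, 1} from state 2
}
best = min(queries, key=lambda q: expected_entropy_after(prior, queries[q]))
print(best, expected_entropy_after(prior, queries[best]))
```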