Search Results for author: Rishit Sheth

Found 8 papers, 2 papers with code

Bayesian Optimization Over Iterative Learners with Structured Responses: A Budget-aware Planning Approach

no code implementations • 25 Jun 2022 • Syrine Belakaria, Janardhan Rao Doppa, Nicolo Fusi, Rishit Sheth

The growing size of deep neural networks (DNNs) and datasets motivates the need for efficient solutions for simultaneous model selection and training.

Bayesian Optimization • Hyperparameter Optimization • +1

Direct loss minimization algorithms for sparse Gaussian processes

1 code implementation • 7 Apr 2020 • Yadi Wei, Rishit Sheth, Roni Khardon

The application of DLM in non-conjugate cases is more complex because the logarithm of expectation in the log-loss DLM objective is often intractable and simple sampling leads to biased estimates of gradients.

Computational Efficiency • Gaussian Processes • +3
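The bias mentioned in the abstract can be seen in a toy computation (an illustrative sketch, not the paper's DLM estimator): the log-loss DLM objective contains a log of an expectation, and plugging a Monte Carlo average into the log gives E[log(sample mean)] ≤ log E[f] by Jensen's inequality, i.e. a downward-biased estimate.

```python
import numpy as np

# Toy illustration of the bias: choose f(x) = exp(x) with x ~ N(0, 1),
# so log E[f(x)] = 1/2 exactly (lognormal mean is exp(1/2)).
rng = np.random.default_rng(0)
TRUE_LOG_EXPECTATION = 0.5

def plugin_estimate(n_samples: int, reps: int = 2000) -> float:
    """Average, over many replications, of log(sample mean of f)."""
    x = rng.standard_normal((reps, n_samples))
    return float(np.log(np.exp(x).mean(axis=1)).mean())

few = plugin_estimate(5)      # few samples per estimate: strong downward bias
many = plugin_estimate(5000)  # many samples: the bias shrinks
```

With 5 samples the plug-in estimate sits well below 0.5; with 5000 it is close, showing the bias vanishes only as the inner sample size grows.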

Weighted Meta-Learning

no code implementations • 20 Mar 2020 • Diana Cai, Rishit Sheth, Lester Mackey, Nicolo Fusi

Meta-learning leverages related source tasks to learn an initialization that can be quickly fine-tuned to a target task with limited labeled examples.

Meta-Learning

Feature Gradients: Scalable Feature Selection via Discrete Relaxation

no code implementations • 27 Aug 2019 • Rishit Sheth, Nicolo Fusi

In this paper we introduce Feature Gradients, a gradient-based search algorithm for feature selection.

feature selection
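The abstract gives no details, but the general idea named in the title — relaxing a discrete feature mask so it can be searched by gradients — can be sketched as follows. This is an illustrative toy, not the paper's Feature Gradients algorithm; the sigmoid relaxation, L1-style mask penalty, and ridge term are all assumptions.

```python
import numpy as np

# Toy regression problem: 3 informative features out of 10.
rng = np.random.default_rng(0)
d, n = 10, 500
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Relax the discrete mask m in {0,1}^d to m = sigmoid(logits) in (0,1)^d and
# minimize ||X(m*w) - y||^2/(2n) + lam*sum(m) + mu*||w||^2/2 by gradient descent.
logits = np.zeros(d)      # mask parameters; every feature starts at m = 0.5
w = np.zeros(d)           # linear-model weights, learned jointly with the mask
lr, lam, mu = 0.05, 0.05, 0.01
for _ in range(3000):
    m = 1.0 / (1.0 + np.exp(-logits))
    grad = X.T @ (X @ (m * w) - y) / n              # d(fit loss)/d(effective coef)
    grad_w = grad * m + mu * w
    grad_logits = (grad * w + lam) * m * (1.0 - m)  # chain rule through the sigmoid
    w -= lr * grad_w
    logits -= lr * grad_logits

# Informative features keep large mask values; noise features are pushed down.
mask = 1.0 / (1.0 + np.exp(-logits))
```

The mask penalty pushes every feature toward zero, but only the uninformative ones lack a fit-error gradient opposing it, so thresholding the relaxed mask recovers the informative set.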

Excess Risk Bounds for the Bayes Risk using Variational Inference in Latent Gaussian Models

no code implementations • NeurIPS 2017 • Rishit Sheth, Roni Khardon

The paper furthers such analysis by providing bounds on the excess risk of variational inference algorithms and related regularized loss minimization algorithms for a large class of latent variable models with Gaussian latent variables.

Gaussian Processes • Topic Models • +1

Probabilistic Matrix Factorization for Automated Machine Learning

1 code implementation • NeurIPS 2018 • Nicolo Fusi, Rishit Sheth, Huseyn Melih Elibol

Automating the selection and tuning of machine learning pipelines, consisting of data pre-processing methods and machine learning models, has long been one of the goals of the machine learning community.

Bayesian Optimization • BIG-bench Machine Learning • +3

Monte Carlo Structured SVI for Two-Level Non-Conjugate Models

no code implementations • 12 Dec 2016 • Rishit Sheth, Roni Khardon

The stochastic variational inference (SVI) paradigm, which combines variational inference, natural gradients, and stochastic updates, was recently proposed for large-scale data analysis in conjugate Bayesian models and demonstrated to be effective in several problems.

Gaussian Processes • Topic Models • +2
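The SVI recipe the abstract refers to (Hoffman et al.'s combination of variational inference, natural gradients, and stochastic updates, which this paper extends to two-level non-conjugate models) can be sketched on a toy conjugate model. For conjugate models, the natural-gradient step on the global variational parameters is just a convex combination of the current natural parameters and an estimate computed from one minibatch, rescaled as if it were the whole dataset.

```python
import numpy as np

# Toy conjugate model: x_i ~ N(mu, 1) with prior mu ~ N(0, 1).
# Natural parameters of a Gaussian q(mu) = N(m, v): eta = (m/v, -1/(2v)).
rng = np.random.default_rng(0)
N = 10_000
data = 3.0 + rng.standard_normal(N)

eta1, eta2 = 0.0, -0.5                 # initialize q at the prior N(0, 1)
batch_size = 100
for t in range(200):
    rho = (t + 1) ** -0.7              # Robbins-Monro step-size schedule
    batch = rng.choice(data, batch_size)
    # Natural parameters the exact posterior would have if the minibatch,
    # rescaled by N / batch_size, were the full dataset:
    eta1_hat = 0.0 + (N / batch_size) * batch.sum()
    eta2_hat = -0.5 - N / 2.0
    # Natural-gradient step = convex combination in natural-parameter space:
    eta1 = (1.0 - rho) * eta1 + rho * eta1_hat
    eta2 = (1.0 - rho) * eta2 + rho * eta2_hat

posterior_mean = eta1 / (-2.0 * eta2)  # recover m = eta1 * v
posterior_var = -1.0 / (2.0 * eta2)    # recover v
```

Because each step only needs one minibatch, the cost per update is independent of N, which is what makes the paradigm attractive for large-scale data; the non-conjugate, two-level case treated in the paper is harder because these closed-form updates are no longer available.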
