no code implementations • 3 Nov 2023 • Sanjeeb Dash, Soumyadip Ghosh, Joao Goncalves, Mark S. Squillante
Model explainability is crucial for human users to interpret how a proposed classifier assigns labels to data based on its feature values.
no code implementations • 20 Oct 2022 • Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki, Edith Zhang
We present a framework to analyze MFVI algorithms, which is inspired by a similar development for general variational Bayesian formulations.
no code implementations • 23 Feb 2022 • Xuhui Zhang, Jose Blanchet, Soumyadip Ghosh, Mark S. Squillante
In contrast, our study first illustrates the benefits of incorporating a natural geometric structure within a linear regression model, which corresponds to the generalized eigenvalue problem formed by the Gram matrices of both domains.
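As a rough illustration of the geometric structure mentioned here, the sketch below sets up a generalized eigenvalue problem formed by two Gram matrices; the data, the ridge term, and all variable names are illustrative assumptions, not the paper's construction.

```python
# Hypothetical sketch: generalized eigenvalue problem Gs v = lambda Gt v
# formed by the Gram matrices of a source and a target domain.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 5))        # source-domain features (made up)
Xt = rng.normal(size=(80, 5))         # target-domain features (made up)

Gs = Xs.T @ Xs / Xs.shape[0]          # source Gram matrix
Gt = Xt.T @ Xt / Xt.shape[0]          # target Gram matrix

# A small ridge keeps Gt positive definite for the generalized solver.
evals, evecs = eigh(Gs, Gt + 1e-8 * np.eye(Gt.shape[0]))
print(evals)                          # spectrum relating the two geometries
```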
no code implementations • 4 Feb 2022 • Soumyadip Ghosh, Yingdong Lu, Tomasz J. Nowicki
We study the convergence of a random iterative sequence of a family of operators on infinite-dimensional Hilbert spaces, inspired by the Stochastic Gradient Descent (SGD) algorithm in the case of noiseless regression, as studied in [1].
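For concreteness, here is a minimal finite-dimensional sketch of the iteration this sentence alludes to: each SGD step in noiseless linear regression applies a random affine operator $w \mapsto (I - \eta x x^\top) w + \eta y x$. The dimensions, step size, and sampling scheme are illustrative.

```python
# Sketch: SGD for noiseless linear regression. Each update applies one
# randomly drawn affine operator; the paper studies such random operator
# sequences on infinite-dimensional Hilbert spaces.
import numpy as np

rng = np.random.default_rng(1)
d = 10
w_star = rng.normal(size=d)          # ground-truth regressor (made up)
w = np.zeros(d)
eta = 0.01

for _ in range(5000):
    x = rng.normal(size=d)
    y = x @ w_star                   # noiseless label
    w -= eta * (x @ w - y) * x       # w <- (I - eta*x*x^T) w + eta*y*x

print(np.linalg.norm(w - w_star))    # error shrinks toward zero
```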
no code implementations • NeurIPS 2021 • Soumyadip Ghosh, Mark Squillante, Ebisa Wollega
Distributionally robust learning (DRL) is increasingly seen as a viable method to train machine learning models for improved model generalization.
no code implementations • 21 Oct 2021 • Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki
Existing rigorous convergence guarantees for the Hamiltonian Monte Carlo (HMC) algorithm use Gaussian auxiliary momentum variables, whose symmetric distribution is crucial to those guarantees.
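For reference, the sketch below implements one standard HMC transition with Gaussian momentum, i.e., the symmetric setting these guarantees cover; the target density, step size, and path length are illustrative choices, not the paper's.

```python
# Sketch: one HMC step with Gaussian auxiliary momentum (leapfrog +
# Metropolis correction). Target is an illustrative standard Gaussian.
import numpy as np

rng = np.random.default_rng(2)
U = lambda q: 0.5 * q @ q            # potential -log pi(q) for pi = N(0, I)
grad_U = lambda q: q

def hmc_step(q, eps=0.1, L=20):
    p = rng.normal(size=q.shape)              # symmetric Gaussian momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)        # leapfrog: half momentum step
    for i in range(L):
        q_new += eps * p_new                  # full position step
        if i < L - 1:
            p_new -= eps * grad_U(q_new)      # full momentum step
    p_new -= 0.5 * eps * grad_U(q_new)        # final half momentum step
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q  # accept/reject

q = np.zeros(2)
for _ in range(1000):
    q = hmc_step(q)
```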
2 code implementations • 12 Mar 2021 • Soumyadip Ghosh, Bernardo Aquino, Vijay Gupta
To relieve some of this overhead, in this paper we present EventGraD, an algorithm with event-triggered communication for stochastic gradient descent in parallel machine learning.
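A minimal serial mock-up of the event-triggered idea, assuming a send() stub and a norm-based threshold (the actual algorithm runs across parallel learners, e.g., over MPI, and its trigger rule may differ):

```python
# Sketch of event-triggered communication: a learner sends parameters to
# its neighbors only when they have drifted beyond a threshold since the
# last send. send() and the threshold are illustrative placeholders.
import numpy as np

def maybe_send(params, last_sent, send, threshold=1e-2):
    """Trigger communication only on a sufficiently large parameter change."""
    if np.linalg.norm(params - last_sent) > threshold:
        send(params)               # e.g., a nonblocking send to neighbors
        return params.copy()       # new reference point for the trigger
    return last_sent               # event not triggered; skip communication
```

Between events, neighbors simply reuse the last value they received, which is where the communication savings come from.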
no code implementations • 4 Feb 2021 • Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki
The main purpose of this paper is to facilitate communication between the analytic, probabilistic, and algorithmic communities.
no code implementations • 21 Jan 2021 • Soumyadip Ghosh, Yingdong Lu, Tomasz Nowicki
We establish $L_q$ convergence for Hamiltonian Monte Carlo algorithms.
no code implementations • 22 Dec 2020 • Soumyadip Ghosh, Mark Squillante
Seeking to improve model generalization, we consider a new approach based on distributionally robust learning (DRL) that applies stochastic gradient descent to the outer minimization problem.
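As a hedged illustration of "SGD on the outer minimization," the sketch below re-weights each minibatch adversarially (an exponential tilt, the closed-form inner maximizer for a KL ambiguity set) and then takes an ordinary gradient step on the model; the logistic model, temperature, and sizes are illustrative, not the paper's algorithm.

```python
# Sketch: distributionally robust learning via SGD on the outer problem.
# The inner max over distributions is approximated by exponentially
# tilting minibatch losses (KL-ball closed form); the outer min is SGD.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)   # synthetic labels
w, eta, lam = np.zeros(5), 0.1, 1.0              # lam: tilt temperature

for _ in range(200):
    idx = rng.choice(len(X), size=32, replace=False)
    xb, yb = X[idx], y[idx]
    p = 1.0 / (1.0 + np.exp(-xb @ w))            # logistic predictions
    losses = -(yb * np.log(p + 1e-12) + (1 - yb) * np.log(1 - p + 1e-12))
    wts = np.exp(losses / lam)
    wts /= wts.sum()                             # adversarial weights
    w -= eta * (xb.T @ (wts * (p - yb)))         # outer SGD step
```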
no code implementations • NeurIPS 2020 • Nian Si, Jose Blanchet, Soumyadip Ghosh, Mark Squillante
We consider the problem of estimating the Wasserstein distance between the empirical measure and a set of probability measures whose expectations over a class of functions (hypothesis class) are constrained.
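One informal way to write the quantity in question (the notation below is illustrative, not the paper's):

$$\min_{\mu} \; W(\hat{\mu}_n, \mu) \quad \text{subject to} \quad \mathbb{E}_{\mu}[f] \le c_f \;\; \text{for all } f \in \mathcal{F},$$

where $\hat{\mu}_n$ is the empirical measure, $W$ a Wasserstein distance, and $\mathcal{F}$ the hypothesis class whose expectations are constrained.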
no code implementations • 22 May 2018 • Soumyadip Ghosh, Mark Squillante, Ebisa Wollega
Distributionally robust optimization (DRO) is increasingly seen as a viable method to train machine learning models for improved model generalization.
no code implementations • 3 Mar 2018 • Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, Priya Nagpurkar
Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers).
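One straggler-mitigation variant analyzed in this line of work waits for only the fastest K of P learners per round; the sketch below simulates that aggregation rule, with a made-up timing model and illustrative names.

```python
# Sketch: K-sync aggregation. The server averages the first K gradients
# to arrive instead of waiting for all P learners. Exponential delays
# and all names are illustrative assumptions.
import heapq
import numpy as np

rng = np.random.default_rng(4)

def ksync_round(gradients, delays, K):
    """Average the K gradients whose learners (by simulated delay) finish first."""
    fastest = heapq.nsmallest(K, range(len(delays)), key=lambda i: delays[i])
    return np.mean([gradients[i] for i in fastest], axis=0)

P, K, d = 8, 5, 3
grads = [rng.normal(size=d) for _ in range(P)]
delays = rng.exponential(size=P)      # per-learner compute times
print(ksync_round(grads, delays, K))
```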
no code implementations • 5 Jul 2016 • Kalyani Nagaraj, Jie Xu, Raghu Pasupathy, Soumyadip Ghosh
The first of our proposed estimators is a "full-information" estimator that actively exploits such local structure to achieve bounded relative error in Gaussian settings.