no code implementations • 12 Apr 2024 • Omar Hagrass, Bharath Sriperumbudur, Krishnakumar Balasubramanian

We explore the minimax optimality of goodness-of-fit tests on general domains using the kernelized Stein discrepancy (KSD).
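For intuition, the KSD measures goodness-of-fit using only the score function of the target. A minimal one-dimensional sketch with a Gaussian kernel (the target, kernel, and bandwidth below are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def ksd_vstat(x, score, h=1.0):
    """V-statistic estimate of the squared kernelized Stein discrepancy
    in 1-D with a Gaussian kernel; `score` is the target's score function."""
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    s = score(x)
    dkdx = -d / h**2 * k                    # d/dx k(x, y)
    dkdy = d / h**2 * k                     # d/dy k(x, y)
    dk2 = (1.0 / h**2 - d**2 / h**4) * k    # d^2/(dx dy) k(x, y)
    u = s[:, None] * s[None, :] * k + s[:, None] * dkdy \
        + s[None, :] * dkdx + dk2           # Stein kernel u_p(x, y)
    return u.mean()

rng = np.random.default_rng(0)
score_std_normal = lambda x: -x             # score of the N(0, 1) target
x_good = rng.standard_normal(500)           # samples from the target
x_bad = rng.standard_normal(500) + 3.0      # shifted samples
ksd_null = ksd_vstat(x_good, score_std_normal)   # near zero
ksd_alt = ksd_vstat(x_bad, score_std_normal)     # clearly positive
```

The V-statistic is always nonnegative because the Stein kernel is positive semi-definite; a shifted sample drives it well away from zero.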

1 code implementation • 27 Mar 2024 • Yanhao Jin, Krishnakumar Balasubramanian, Debashis Paul

Finally, we propose and analyze an estimator of the inverse covariance matrix of random regression coefficients based on data from the training tasks.

no code implementations • 15 Mar 2024 • Zhaoyang Shi, Chinmoy Bhattacharjee, Krishnakumar Balasubramanian, Wolfgang Polonik

We derive Gaussian approximation bounds for random forest predictions based on a set of training points given by a Poisson process, under fairly mild regularity assumptions on the data generating process.

no code implementations • 22 Feb 2024 • Zhaoyang Shi, Krishnakumar Balasubramanian, Wolfgang Polonik

More specifically, our approach uses the fractional Laplacian and is designed to handle the case where the true regression function lies in an $L_2$-fractional Sobolev space of order $s\in (0, 1)$.

no code implementations • 31 Oct 2023 • Zhaoyang Shi, Krishnakumar Balasubramanian, Wolfgang Polonik

We show both adaptive and non-adaptive minimax rates of convergence for a family of weighted Laplacian-Eigenmap based nonparametric regression methods, when the true regression function belongs to a Sobolev space and the sampling density is bounded from above and below.

no code implementations • 2 Oct 2023 • Xuxing Chen, Krishnakumar Balasubramanian, Promit Ghosal, Bhavya Agrawalla

We conduct a comprehensive investigation into the dynamics of gradient descent using large-order constant step-sizes in the context of quadratic regression models.
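As a toy illustration of this regime, plain gradient descent on a quadratic least-squares loss converges for any constant step-size below $2/L$, where $L$ is the largest Hessian eigenvalue, even near that edge (the data and step-sizes below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d)
b = A @ theta_star                    # noiseless quadratic regression
H = A.T @ A / n                       # Hessian of the loss
L = np.linalg.eigvalsh(H).max()       # smoothness constant

def gd(step, iters=500):
    """Constant step-size gradient descent on ||A theta - b||^2 / (2n)."""
    theta = np.zeros(d)
    for _ in range(iters):
        theta -= step * (H @ theta - A.T @ b / n)
    return np.linalg.norm(theta - theta_star)

err_small = gd(0.5 / L)   # conservative step
err_large = gd(1.9 / L)   # large-order constant step, still below 2 / L
```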

no code implementations • 25 Sep 2023 • Jiaxiang Li, Krishnakumar Balasubramanian, Shiqian Ma

We present Zeroth-order Riemannian Averaging Stochastic Approximation (Zo-RASA) algorithms for stochastic optimization on Riemannian manifolds.

no code implementations • 3 Aug 2023 • Abhishek Roy, Krishnakumar Balasubramanian

We investigate the online overlapping batch-means covariance estimator for Stochastic Gradient Descent (SGD) under Markovian sampling.
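In its simplest scalar form, the overlapping batch-means idea estimates the long-run variance of a correlated sequence from the spread of its overlapping batch averages. A sketch on an AR(1) chain standing in for correlated iterates (the chain, batch length, and estimator form are illustrative assumptions):

```python
import numpy as np

def overlapping_batch_means(x, b):
    """Overlapping batch-means estimate of the long-run variance
    sigma^2 = lim n * Var(mean(x)) of a stationary sequence."""
    n = len(x)
    xbar = x.mean()
    csum = np.concatenate(([0.0], np.cumsum(x)))
    bmeans = (csum[b:] - csum[:-b]) / b   # all n - b + 1 overlapping batch means
    return n * b / ((n - b) * (n - b + 1)) * np.sum((bmeans - xbar) ** 2)

rng = np.random.default_rng(2)
rho, n = 0.5, 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):                 # AR(1): long-run variance 1/(1-rho)^2 = 4
    x[t] = rho * x[t - 1] + eps[t]
est = overlapping_batch_means(x, b=500)
```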

no code implementations • 11 Jul 2023 • Xuxing Chen, Krishnakumar Balasubramanian, Saeed Ghadimi

We develop and analyze stochastic approximation algorithms for solving nested compositional bi-level optimization problems.

no code implementations • 28 Jun 2023 • Krishnakumar Balasubramanian, Larry Goldstein, Nathan Ross, Adil Salim

Specializing our general result, we obtain the first bounds on the Gaussian random field approximation of wide random neural networks of any depth and Lipschitz activation functions at the random field level.

no code implementations • 21 Jun 2023 • Xuxing Chen, Tesi Xiao, Krishnakumar Balasubramanian

In this paper, we introduce a novel fully single-loop and Hessian-inversion-free algorithmic framework for stochastic bilevel optimization and present a tighter analysis under standard smoothness assumptions (first-order Lipschitzness of the UL function and second-order Lipschitzness of the LL function).

no code implementations • 10 Apr 2023 • Michael Diao, Krishnakumar Balasubramanian, Sinho Chewi, Adil Salim

Of key interest in statistics and machine learning is Gaussian VI, which approximates $\pi$ by minimizing the Kullback-Leibler (KL) divergence to $\pi$ over the space of Gaussians.

no code implementations • 3 Apr 2023 • Krishnakumar Balasubramanian, Promit Ghosal, Ye He

We derive high-dimensional scaling limits and fluctuations for the online least-squares Stochastic Gradient Descent (SGD) algorithm by taking the properties of the data generating model explicitly into consideration.

no code implementations • 7 Mar 2023 • Alireza Mousavi-Hosseini, Tyler Farghly, Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu

We do so by establishing upper and lower bounds for Langevin diffusions and LMC under weak Poincaré inequalities that are satisfied by a large class of densities, including polynomially-decaying heavy-tailed densities (i.e., Cauchy-type).

no code implementations • 1 Mar 2023 • Ye He, Tyler Farghly, Krishnakumar Balasubramanian, Murat A. Erdogdu

We analyze the complexity of sampling from a class of heavy-tailed distributions by discretizing a natural class of Itô diffusions associated with weighted Poincaré inequalities.

no code implementations • 20 Feb 2023 • Bhavya Agrawalla, Krishnakumar Balasubramanian, Promit Ghosal

To use this result in practice, we further develop an online approach for estimating the expectation and variance terms appearing in the CLT, and establish high-probability bounds for the proposed online estimator.

1 code implementation • 20 Feb 2023 • Tesi Xiao, Xuxing Chen, Krishnakumar Balasubramanian, Saeed Ghadimi

We focus on decentralized stochastic non-convex optimization, where $n$ agents work together to optimize a composite objective function which is a sum of a smooth term and a non-smooth convex term.

no code implementations • 16 Feb 2023 • Matthew Zhang, Sinho Chewi, Mufan Bill Li, Krishnakumar Balasubramanian, Murat A. Erdogdu

As a byproduct, we also obtain the first KL divergence guarantees for ULMC without Hessian smoothness under strong log-concavity, which is based on a new result on the log-Sobolev constant along the underdamped Langevin diffusion.

no code implementations • 15 Nov 2022 • Ye He, Krishnakumar Balasubramanian, Bharath K. Sriperumbudur, Jianfeng Lu

However, a mean-field analysis reveals that the gradient flow corresponding to the SVGD algorithm (i.e., the Stein Variational Gradient Flow) only provides a constant-order approximation to the Wasserstein Gradient Flow corresponding to the KL-divergence minimization.

no code implementations • 23 Oct 2022 • Xuxing Chen, Minhui Huang, Shiqian Ma, Krishnakumar Balasubramanian

Bilevel optimization has recently received tremendous attention due to its great success in solving important machine learning problems like meta-learning, reinforcement learning, and hyperparameter optimization.

no code implementations • 19 Oct 2022 • Zhaoyang Shi, Krishnakumar Balasubramanian, Wolfgang Polonik

We derive normal approximation results for a class of stabilizing functionals of binomial or Poisson point processes that are not necessarily expressible as sums of certain score functions.

no code implementations • 22 Jun 2022 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi

We study stochastic optimization algorithms for constrained nonconvex stochastic optimization problems with Markovian data.

no code implementations • 23 Feb 2022 • Nuri Mert Vural, Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu

We study stochastic convex optimization under infinite noise variance.

no code implementations • 10 Feb 2022 • Krishnakumar Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Matthew Zhang

For the task of sampling from a density $\pi \propto \exp(-V)$ on $\mathbb{R}^d$, where $V$ is possibly non-convex but $L$-gradient Lipschitz, we prove that averaged Langevin Monte Carlo outputs a sample with $\varepsilon$-relative Fisher information after $O( L^2 d^2/\varepsilon^2)$ iterations.
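A minimal (unaveraged) Langevin Monte Carlo sketch for intuition, targeting a standard Gaussian so that $V(x) = x^2/2$ (the target, step-size, and horizon are illustrative assumptions; the paper's guarantee concerns the Fisher information of the averaged law, not this toy check):

```python
import numpy as np

rng = np.random.default_rng(3)
grad_V = lambda x: x                   # V(x) = x^2 / 2, so pi = N(0, 1)
eta, n_iter, n_chains = 0.01, 2000, 2000

# Langevin Monte Carlo: x_{k+1} = x_k - eta * grad V(x_k) + sqrt(2 eta) * noise
x = np.zeros(n_chains)
for _ in range(n_iter):
    x = x - eta * grad_V(x) + np.sqrt(2 * eta) * rng.standard_normal(n_chains)
# samples are approximately N(0, 1); discretization inflates the variance slightly
```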

no code implementations • 9 Feb 2022 • Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi

We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set.
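The defining feature of conditional gradient (Frank-Wolfe) methods is replacing projection with a linear minimization oracle. A single-level sketch over the $\ell_1$ ball (the objective, constraint set, and step-size schedule are illustrative assumptions, far simpler than the multi-level composition treated here):

```python
import numpy as np

def frank_wolfe_l1(grad, d, radius=1.0, iters=200):
    """Projection-free conditional gradient over the l1 ball: the linear
    minimization oracle just returns a signed coordinate vertex."""
    x = np.zeros(d)
    for k in range(iters):
        g = grad(x)
        i = np.argmax(np.abs(g))
        v = np.zeros(d)
        v[i] = -radius * np.sign(g[i])   # vertex minimizing <g, v> over the ball
        gamma = 2.0 / (k + 2)            # standard step-size schedule
        x = (1 - gamma) * x + gamma * v  # convex combination stays feasible
    return x

# minimize ||x - c||^2 / 2 over the unit l1 ball, with c outside the ball
c = np.array([0.9, 0.6, 0.0, 0.0])
x = frank_wolfe_l1(lambda x: x - c, d=4)
```

Every iterate is a convex combination of vertices, so feasibility holds at no projection cost.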

no code implementations • 20 Jan 2022 • Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu

We analyze the oracle complexity of sampling from polynomially decaying heavy-tailed target densities based on running the Unadjusted Langevin Algorithm on certain transformed versions of the target density.

no code implementations • 26 Oct 2021 • Olympio Hacquard, Krishnakumar Balasubramanian, Gilles Blanchard, Clément Levrard, Wolfgang Polonik

We study a regression problem on a compact manifold M. To take advantage of the underlying geometry and topology of the data, the regression task is performed using the first several eigenfunctions of the Laplace-Beltrami operator of the manifold, regularized with topological penalties.

no code implementations • NeurIPS 2021 • Abhishek Roy, Krishnakumar Balasubramanian, Murat A. Erdogdu

In this work, we establish risk bounds for the Empirical Risk Minimization (ERM) with both dependent and heavy-tailed data-generating processes.

no code implementations • 18 May 2021 • Krishnakumar Balasubramanian

We study statistical and algorithmic aspects of using hypergraphons, that are limits of large hypergraphs, for modeling higher-order interactions.

no code implementations • 10 Feb 2021 • Yanhao Jin, Tesi Xiao, Krishnakumar Balasubramanian

Statistical machine learning models trained with stochastic gradient algorithms are increasingly being deployed in critical scientific applications.

no code implementations • NeurIPS 2020 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer is $O(1/\epsilon^{2.5})$.

no code implementations • NeurIPS 2020 • Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu

The randomized midpoint method, proposed by [SL19], has emerged as an optimal discretization procedure for simulating the continuous time Langevin diffusions.

no code implementations • 28 Sep 2020 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer is $\tilde{\mathcal{O}}(1/\epsilon^{2.5})$.

no code implementations • 24 Aug 2020 • Krishnakumar Balasubramanian, Saeed Ghadimi, Anthony Nguyen

We show that the first algorithm, which is a generalization of [GhaRuswan20] to the $T$ level case, can achieve a sample complexity of $\mathcal{O}(1/\epsilon^6)$ by using mini-batches of samples in each iteration.

no code implementations • ICML 2020 • Subhroshekhar Ghosh, Krishnakumar Balasubramanian, Xiaochuan Yang

We propose a novel stochastic network model, called Fractal Gaussian Network (FGN), that embodies well-defined and analytically tractable fractal structures.

no code implementations • 15 Jun 2020 • Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi

We analyze stochastic conditional gradient methods for constrained optimization problems arising in over-parametrized machine learning.

no code implementations • NeurIPS 2021 • Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu

Structured non-convex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning.

no code implementations • 25 Mar 2020 • Jiaxiang Li, Krishnakumar Balasubramanian, Shiqian Ma

We consider stochastic zeroth-order optimization over Riemannian submanifolds embedded in Euclidean space, where the task is to solve a Riemannian optimization problem with only noisy objective function evaluations.

no code implementations • 22 Jan 2020 • Zhongruo Wang, Krishnakumar Balasubramanian, Shiqian Ma, Meisam Razaviyayn

We establish that under the SGC assumption, the complexities of the stochastic algorithms match that of deterministic algorithms.

no code implementations • 3 Dec 2019 • Abhishek Roy, Yifang Chen, Krishnakumar Balasubramanian, Prasant Mohapatra

We establish sub-linear regret bounds on the proposed notions of regret in both the online and bandit setting.

no code implementations • 31 Jul 2019 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

In this paper, motivated by online reinforcement learning problems, we propose and analyze bandit algorithms for both general and structured nonconvex problems with nonstationary (or dynamic) regret as the performance measure, in both stochastic and non-stochastic settings.

no code implementations • 3 Apr 2019 • Andreas Anastasiou, Krishnakumar Balasubramanian, Murat A. Erdogdu

A crucial intermediate step is proving a non-asymptotic martingale central limit theorem (CLT), i.e., establishing the rates of convergence of a multivariate martingale difference sequence to a normal random vector, which might be of independent interest.

no code implementations • 4 Feb 2019 • Abhishek Roy, Lingqing Shen, Krishnakumar Balasubramanian, Saeed Ghadimi

Our theoretical contributions extend the practical applicability of sampling algorithms to the noisy black-box and high-dimensional settings.

no code implementations • NeurIPS 2018 • Krishnakumar Balasubramanian, Saeed Ghadimi

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization.

no code implementations • NeurIPS 2018 • Krishnakumar Balasubramanian, Saeed Ghadimi

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on addressing constrained optimization, high-dimensional setting and saddle-point avoiding.
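The core primitive in such zeroth-order schemes is a gradient estimate built from function values alone. A two-point Gaussian-smoothing sketch (the smoothing radius, objective, and step-size below are illustrative assumptions):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point Gaussian-smoothing gradient estimate from function
    evaluations only: (f(x + mu*u) - f(x)) / mu * u with u ~ N(0, I)."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

rng = np.random.default_rng(4)
f = lambda x: 0.5 * np.sum(x ** 2)          # toy smooth objective
x = np.full(10, 5.0)
for _ in range(3000):
    x -= 0.05 * zo_gradient(f, x, rng=rng)  # zeroth-order gradient descent
```

In expectation the estimate matches the gradient of a Gaussian-smoothed surrogate of $f$, which is why the plain descent loop above still converges on this quadratic.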

no code implementations • 17 Jul 2018 • Krishnakumar Balasubramanian, Jianqing Fan, Zhuoran Yang

Motivated by the sampling problems and heterogeneity issues common in high-dimensional big datasets, we consider a class of discordant additive index models.

no code implementations • NeurIPS 2017 • Zhuoran Yang, Krishnakumar Balasubramanian, Zhaoran Wang, Han Liu

We consider estimating the parametric components of semiparametric multi-index models in high dimensions.

no code implementations • 26 Sep 2017 • Zhuoran Yang, Krishnakumar Balasubramanian, Han Liu

We consider estimating the parametric components of semi-parametric multiple index models in a high-dimensional and non-Gaussian setting.

no code implementations • 24 Sep 2017 • Krishnakumar Balasubramanian, Tong Li, Ming Yuan

The reproducing kernel Hilbert space (RKHS) embedding of distributions offers a general and flexible framework for testing problems in arbitrary domains and has attracted considerable amount of attention in recent years.
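A common instance of this framework is the maximum mean discrepancy (MMD) two-sample test. A minimal biased-estimator sketch with a Gaussian kernel (the kernel, bandwidth, and data below are illustrative assumptions, not the paper's setting):

```python
import numpy as np

def mmd2_biased(x, y, h=1.0):
    """Biased (V-statistic) estimate of the squared MMD with a Gaussian
    kernel: the RKHS distance between the two sample embeddings."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(5)
x = rng.standard_normal(400)
y_same = rng.standard_normal(400)           # same distribution as x
y_diff = rng.standard_normal(400) + 1.5     # shifted distribution
mmd_null = mmd2_biased(x, y_same)           # near zero
mmd_alt = mmd2_biased(x, y_diff)            # clearly positive
```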

no code implementations • ICML 2017 • Zhuoran Yang, Krishnakumar Balasubramanian, Han Liu

We consider estimating the parametric component of single index models in high dimensions.

no code implementations • 26 Sep 2013 • Krishnakumar Balasubramanian, Kai Yu, Tong Zhang

The traditional convex formulation employs the group Lasso relaxation to achieve joint sparsity across tasks.
