Search Results for author: Krishnakumar Balasubramanian

Found 30 papers, 0 papers with code

Projection-free Constrained Stochastic Nonconvex Optimization with State-dependent Markov Data

no code implementations22 Jun 2022 Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi

We study a projection-free conditional gradient-type algorithm for constrained nonconvex stochastic optimization problems with Markovian data.
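
The projection-free template underlying this line of work is the conditional gradient (Frank-Wolfe) method: each step queries a linear minimization oracle over the constraint set instead of computing a projection. Below is a minimal deterministic sketch with an illustrative objective and a probability-simplex oracle; it is not the paper's Markov-data setting, where `grad_f` would be replaced by correlated stochastic gradient estimates.

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, n_iters):
    """Projection-free conditional gradient (Frank-Wolfe) loop.

    Each step calls a linear minimization oracle (LMO) over the
    constraint set and moves toward its output, so iterates stay
    feasible without any projection.
    """
    x = np.asarray(x0, dtype=float)
    for t in range(n_iters):
        g = grad_f(x)
        s = lmo(g)                   # argmin_{s in C} <g, s>
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative example: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.2, 0.7, 0.1])
grad_f = lambda x: 2.0 * (x - b)

def lmo(g):
    """LMO for the simplex: the minimizing vertex e_i with i = argmin g_i."""
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x = frank_wolfe(grad_f, lmo, np.ones(3) / 3, n_iters=500)
```

Because every iterate is a convex combination of simplex vertices and the feasible starting point, feasibility is maintained for free; this is what "projection-free" buys in the constrained setting.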

reinforcement-learning Stochastic Optimization

Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo

no code implementations10 Feb 2022 Krishnakumar Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Matthew Zhang

For the task of sampling from a density $\pi \propto \exp(-V)$ on $\mathbb{R}^d$, where $V$ is possibly non-convex but $L$-gradient Lipschitz, we prove that averaged Langevin Monte Carlo outputs a sample with $\varepsilon$-relative Fisher information after $O( L^2 d^2/\varepsilon^2)$ iterations.
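
The Langevin Monte Carlo iteration itself is a noisy gradient descent on $V$. A minimal sketch follows, with an illustrative Gaussian target and step size; the paper's "averaged" guarantee concerns an iterate drawn uniformly from the trajectory rather than the final iterate.

```python
import numpy as np

def lmc_sample(grad_V, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Monte Carlo for pi ∝ exp(-V):
    x_{k+1} = x_k - h * grad V(x_k) + sqrt(2h) * N(0, I).

    Returns all iterates; the 'averaged' variant reports a uniformly
    random iterate (equivalently, guarantees hold on average over k).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_V(x) + np.sqrt(2.0 * step) * noise
        iterates.append(x.copy())
    return iterates

# Illustrative target: V(x) = ||x||^2 / 2, so pi is the standard Gaussian.
grad_V = lambda x: x
chain = lmc_sample(grad_V, np.zeros(2), step=0.1, n_iters=1000,
                   rng=np.random.default_rng(0))
```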

A Projection-free Algorithm for Constrained Stochastic Multi-level Composition Optimization

no code implementations9 Feb 2022 Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi

We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set.

Heavy-tailed Sampling via Transformed Unadjusted Langevin Algorithm

no code implementations20 Jan 2022 Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu

We analyze the oracle complexity of sampling from polynomially decaying heavy-tailed target densities based on running the Unadjusted Langevin Algorithm on certain transformed versions of the target density.

Topologically penalized regression on manifolds

no code implementations26 Oct 2021 Olympio Hacquard, Krishnakumar Balasubramanian, Gilles Blanchard, Clément Levrard, Wolfgang Polonik

We study a regression problem on a compact manifold M. In order to take advantage of the underlying geometry and topology of the data, the regression task is performed on the basis of the first several eigenfunctions of the Laplace-Beltrami operator of the manifold, which are regularized with topological penalties.
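
In practice the Laplace-Beltrami eigenfunctions are approximated from data via a graph Laplacian. The sketch below shows the unpenalized version of this pipeline on an illustrative circle dataset; the topological penalties of the paper are omitted, and the kernel bandwidth and number of eigenfunctions are hypothetical choices.

```python
import numpy as np

def laplacian_eigen_regression(X, y, n_eigen, bandwidth):
    """Least-squares regression on the first eigenvectors of a graph
    Laplacian, a discrete proxy for Laplace-Beltrami eigenfunctions."""
    # Gaussian-kernel adjacency between sample points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * bandwidth ** 2))
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    Phi = vecs[:, :n_eigen]                 # low-frequency eigenvectors
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ coef                       # fitted values

# Illustrative data: points on the unit circle, a smooth signal on them.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])
y = np.cos(theta)
fit = laplacian_eigen_regression(X, y, n_eigen=5, bandwidth=0.5)
```

On the circle the low-frequency Laplacian eigenvectors approximate Fourier modes, so a smooth signal like cos(theta) is captured by the first few of them.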

On Empirical Risk Minimization with Dependent and Heavy-Tailed Data

no code implementations NeurIPS 2021 Abhishek Roy, Krishnakumar Balasubramanian, Murat A. Erdogdu

In this work, we establish risk bounds for the Empirical Risk Minimization (ERM) with both dependent and heavy-tailed data-generating processes.

Learning Theory

Nonparametric Modeling of Higher-Order Interactions via Hypergraphons

no code implementations18 May 2021 Krishnakumar Balasubramanian

We study statistical and algorithmic aspects of using hypergraphons, that are limits of large hypergraphs, for modeling higher-order interactions.

Statistical Inference for Polyak-Ruppert Averaged Zeroth-order Stochastic Gradient Algorithm

no code implementations10 Feb 2021 Yanhao Jin, Tesi Xiao, Krishnakumar Balasubramanian

Statistical machine learning models trained with stochastic gradient algorithms are increasingly being deployed in critical scientific applications.
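
Polyak-Ruppert averaging simply reports the running mean of the SGD iterates, which is asymptotically normal around the optimum and hence usable for statistical inference. A minimal first-order sketch with an illustrative quadratic objective follows; the paper studies the zeroth-order variant, where gradients are estimated from function evaluations only.

```python
import numpy as np

def averaged_sgd(grad_sample, x0, step, n_iters, rng=None):
    """SGD with Polyak-Ruppert averaging: run SGD with decaying steps
    and report the running mean of the iterates."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, n_iters + 1):
        x = x - step(t) * grad_sample(x, rng)
        avg += (x - avg) / t            # running average of iterates
    return avg

# Illustrative problem: noisy gradients of f(x) = ||x - b||^2 / 2.
b = np.array([1.0, -2.0])
grad_sample = lambda x, rng: (x - b) + rng.standard_normal(x.shape)
est = averaged_sgd(grad_sample, np.zeros(2),
                   step=lambda t: 0.5 / t ** 0.6, n_iters=20000)
```

The averaged estimate concentrates around the minimizer b even though individual iterates keep fluctuating at the scale of the step size.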

BIG-bench Machine Learning

Escaping Saddle-Point Faster under Interpolation-like Conditions

no code implementations NeurIPS 2020 Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer under interpolation-like conditions is $O(1/\epsilon^{2.5})$.

Stochastic Optimization

On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method

no code implementations NeurIPS 2020 Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu

The randomized midpoint method, proposed by [SL19], has emerged as an optimal discretization procedure for simulating continuous-time Langevin diffusions.
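
For the overdamped Langevin diffusion dX = -grad V(X) dt + sqrt(2) dB, one step of the method evaluates the drift at a uniformly random intermediate time, using Brownian increments from the same path. The sketch below is a simplified illustration for the overdamped case; the underdamped variant analyzed in [SL19] has a more involved correlated-noise structure.

```python
import numpy as np

def randomized_midpoint_step(grad_V, x, h, rng):
    """One randomized midpoint step for the overdamped Langevin
    diffusion dX = -grad V(X) dt + sqrt(2) dB over a step of size h."""
    alpha = rng.uniform()
    # Brownian increments at times alpha*h and h from the SAME path:
    # B(h) = B(alpha*h) + independent N(0, (1 - alpha) * h) increment.
    b_mid = np.sqrt(alpha * h) * rng.standard_normal(x.shape)
    b_end = b_mid + np.sqrt((1.0 - alpha) * h) * rng.standard_normal(x.shape)
    x_mid = x - alpha * h * grad_V(x) + np.sqrt(2.0) * b_mid
    return x - h * grad_V(x_mid) + np.sqrt(2.0) * b_end

# Illustrative run on a standard Gaussian target (grad V(x) = x).
rng = np.random.default_rng(0)
x = np.zeros(1)
for _ in range(200):
    x = randomized_midpoint_step(lambda z: z, x, h=0.1, rng=rng)
```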

Numerical Integration

Escaping Saddle-Points Faster under Interpolation-like Conditions

no code implementations28 Sep 2020 Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer under interpolation-like conditions is $\tilde{\mathcal{O}}(1/\epsilon^{2.5})$.

Stochastic Optimization

Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates

no code implementations24 Aug 2020 Krishnakumar Balasubramanian, Saeed Ghadimi, Anthony Nguyen

We show that the first algorithm, which is a generalization of \cite{GhaRuswan20} to the $T$ level case, can achieve a sample complexity of $\mathcal{O}(1/\epsilon^6)$ by using mini-batches of samples in each iteration.

Fractal Gaussian Networks: A sparse random graph model based on Gaussian Multiplicative Chaos

no code implementations ICML 2020 Subhroshekhar Ghosh, Krishnakumar Balasubramanian, Xiaochuan Yang

We propose a novel stochastic network model, called Fractal Gaussian Network (FGN), that embodies well-defined and analytically tractable fractal structures.

Stochastic Block Model

Improved Complexities for Stochastic Conditional Gradient Methods under Interpolation-like Conditions

no code implementations15 Jun 2020 Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi

We analyze stochastic conditional gradient methods for constrained optimization problems arising in over-parametrized machine learning.

BIG-bench Machine Learning

An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias

no code implementations NeurIPS 2021 Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu

Structured non-convex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning.

Stochastic Zeroth-order Riemannian Derivative Estimation and Optimization

no code implementations25 Mar 2020 Jiaxiang Li, Krishnakumar Balasubramanian, Shiqian Ma

We consider stochastic zeroth-order optimization over Riemannian submanifolds embedded in Euclidean space, where the task is to solve a Riemannian optimization problem with only noisy objective function evaluations.

Riemannian optimization

Multi-Point Bandit Algorithms for Nonstationary Online Nonconvex Optimization

no code implementations31 Jul 2019 Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra

In this paper, motivated by online reinforcement learning problems, we propose and analyze bandit algorithms for both general and structured nonconvex problems with nonstationary (or dynamic) regret as the performance measure, in both stochastic and non-stochastic settings.

Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT

no code implementations3 Apr 2019 Andreas Anastasiou, Krishnakumar Balasubramanian, Murat A. Erdogdu

A crucial intermediate step is proving a non-asymptotic martingale central limit theorem (CLT), i.e., establishing the rates of convergence of a multivariate martingale difference sequence to a normal random vector, which might be of independent interest.

Stochastic Zeroth-order Discretizations of Langevin Diffusions for Bayesian Inference

no code implementations4 Feb 2019 Abhishek Roy, Lingqing Shen, Krishnakumar Balasubramanian, Saeed Ghadimi

Our theoretical contributions extend the practical applicability of sampling algorithms to the noisy black-box and high-dimensional settings.

Bayesian Inference Stochastic Optimization +1

Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality and Saddle-Points

no code implementations NeurIPS 2018 Krishnakumar Balasubramanian, Saeed Ghadimi

In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on addressing constrained optimization, the high-dimensional setting, and saddle-point avoidance.
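
The basic building block of zeroth-order stochastic approximation is a gradient estimate built from function evaluations via Gaussian smoothing. A minimal sketch with an illustrative quadratic objective (the smoothing parameter and sample count below are hypothetical choices):

```python
import numpy as np

def zo_gradient(f, x, mu, n_samples, rng):
    """Gaussian-smoothing zeroth-order gradient estimator:
    averages (f(x + mu*u) - f(x)) / mu * u over u ~ N(0, I).
    Uses only function evaluations, no gradient access."""
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_samples

# Illustrative check: f(x) = ||x||^2, whose true gradient at x is 2x.
rng = np.random.default_rng(0)
x = np.array([1.0, -1.0])
g = zo_gradient(lambda z: float(z @ z), x, mu=1e-4, n_samples=5000, rng=rng)
```

This estimator targets the gradient of the Gaussian-smoothed surrogate of f; for small mu it approximates the true gradient, at the cost of variance that grows with dimension.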

Stochastic Optimization

Tensor Methods for Additive Index Models under Discordance and Heterogeneity

no code implementations17 Jul 2018 Krishnakumar Balasubramanian, Jianqing Fan, Zhuoran Yang

Motivated by the sampling problems and heterogeneity issues common in high-dimensional big datasets, we consider a class of discordant additive index models.

On Stein's Identity and Near-Optimal Estimation in High-dimensional Index Models

no code implementations26 Sep 2017 Zhuoran Yang, Krishnakumar Balasubramanian, Han Liu

We consider estimating the parametric components of semi-parametric multiple index models in a high-dimensional and non-Gaussian setting.
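
The first-order Stein identity underlying this approach says that for a single-index model y = f(<beta, x>) + noise with Gaussian design x ~ N(0, I), the moment E[y * x] is proportional to beta, so a simple empirical average recovers the index direction. A sketch with an illustrative link function (the paper's setting is high-dimensional and non-Gaussian, which requires more care):

```python
import numpy as np

# Single-index model: y = f(<beta, x>) + noise, x ~ N(0, I).
# By Stein's identity, E[y * x] = E[f'(<beta, x>)] * beta, so the
# empirical average of y * x points in the direction of beta.
rng = np.random.default_rng(0)
n, d = 5000, 10
beta = np.zeros(d)
beta[0] = 1.0                                  # true index direction
X = rng.standard_normal((n, d))
y = np.tanh(X @ beta) + 0.1 * rng.standard_normal(n)

beta_hat = (y[:, None] * X).mean(axis=0)       # Stein-based estimate
direction = beta_hat / np.linalg.norm(beta_hat)
```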

On the Optimality of Kernel-Embedding Based Goodness-of-Fit Tests

no code implementations24 Sep 2017 Krishnakumar Balasubramanian, Tong Li, Ming Yuan

The reproducing kernel Hilbert space (RKHS) embedding of distributions offers a general and flexible framework for testing problems in arbitrary domains and has attracted considerable attention in recent years.
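
A standard test statistic in this framework is the squared maximum mean discrepancy (MMD), the RKHS distance between the embeddings of two distributions. A minimal sketch of its unbiased two-sample estimate with a Gaussian kernel (the bandwidth and sample sizes below are illustrative, and the paper's optimality analysis goes well beyond this plain statistic):

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth):
    """Unbiased estimate of squared MMD between samples X and Y under a
    Gaussian kernel; values near zero support H0: the distributions agree."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)    # drop diagonal terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1))
            - 2.0 * Kxy.mean())

# Illustrative samples: Y matches X in distribution, Z is shifted.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1))
Y = rng.standard_normal((200, 1))
Z = rng.standard_normal((200, 1)) + 3.0
```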

High-dimensional Joint Sparsity Random Effects Model for Multi-task Learning

no code implementations26 Sep 2013 Krishnakumar Balasubramanian, Kai Yu, Tong Zhang

The traditional convex formulation employs the group Lasso relaxation to achieve joint sparsity across tasks.
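
The group Lasso achieves joint sparsity by penalizing the Euclidean norm of each feature's coefficient vector across tasks, so a feature is either kept or dropped for all tasks at once. Its proximal operator, the workhorse of the convex formulation, has a simple closed form; the matrix and penalty level below are illustrative.

```python
import numpy as np

def group_lasso_prox(W, lam):
    """Proximal operator of the group-lasso penalty lam * sum_j ||W[j]||_2,
    where row j of W collects feature j's coefficients across tasks.
    Each row is shrunk toward zero jointly; rows with norm below lam
    are zeroed out entirely, giving joint sparsity across tasks."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[3.0, 4.0],    # row norm 5.0  -> shrunk but kept
              [0.3, 0.4]])   # row norm 0.5 < lam -> zeroed out
W_new = group_lasso_prox(W, lam=1.0)
```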

Multi-Task Learning
