Search Results for author: Arun Jambulapati

Found 12 papers, 1 paper with code

Black-Box $k$-to-$1$-PCA Reductions: Theory and Applications

no code implementations6 Mar 2024 Arun Jambulapati, Syamantak Kumar, Jerry Li, Shourya Pandey, Ankit Pensia, Kevin Tian

The $k$-principal component analysis ($k$-PCA) problem is a fundamental algorithmic primitive that is widely used in data analysis and dimensionality reduction applications.

Dimensionality Reduction
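
As background for the $k$-PCA primitive the abstract refers to, here is a minimal numpy sketch of exact $k$-PCA via an eigendecomposition of the empirical covariance. This is illustrative only (the function name and setup are mine): the paper's contribution is black-box reductions that assemble $k$-PCA from repeated calls to an approximate $1$-PCA solver, which this sketch does not implement.

    import numpy as np

    def k_pca(X, k):
        """Top-k principal components of the rows of X, via a full
        eigendecomposition of the empirical covariance (the exact
        baseline the black-box reductions approximate)."""
        Xc = X - X.mean(axis=0)                  # center the data
        cov = Xc.T @ Xc / len(Xc)                # empirical covariance
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        return eigvecs[:, ::-1][:, :k]           # top-k eigenvectors

    # usage: project 1000 samples in R^50 onto their top 3 components
    X = np.random.randn(1000, 50)
    V = k_pca(X, 3)
    X_reduced = (X - X.mean(axis=0)) @ V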

Testing Causality for High Dimensional Data

no code implementations14 Mar 2023 Arun Jambulapati, Hilaf Hasson, Youngsuk Park, Yuyang Wang

Determining causal relationships between high-dimensional observations is among the most important tasks in scientific discovery.


ReSQueing Parallel and Private Stochastic Convex Optimization

no code implementations1 Jan 2023 Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian

We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator.
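
To make the access model behind these bounds concrete, here is a hedged sketch: the algorithm touches the objective only through a bounded-variance stochastic gradient estimator, and "depth" counts sequential rounds of (possibly batched) parallel queries. The toy minimizer below is plain mini-batch SGD on a quadratic, not the paper's algorithm; names and parameters are mine.

    import numpy as np

    def noisy_grad(x, rng, sigma=1.0):
        # bounded-variance stochastic gradient oracle for f(x) = ||x||^2 / 2
        return x + sigma * rng.standard_normal(x.shape)

    def batched_sgd(x0, depth, batch, lr=0.1, seed=0):
        """Each round issues `batch` oracle queries in parallel, so the
        gradient query depth is `depth` and total queries are depth * batch."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(depth):
            g = np.mean([noisy_grad(x, rng) for _ in range(batch)], axis=0)
            x = x - lr * g
        return x

    x_hat = batched_sgd(np.full(10, 5.0), depth=200, batch=32)
    print(np.linalg.norm(x_hat))   # near 0, the minimizer of ||x||^2 / 2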

RECAPP: Crafting a More Efficient Catalyst for Convex Optimization

1 code implementation17 Jun 2022 Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford

The accelerated proximal point algorithm (APPA), also known as "Catalyst", is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization).
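
To make the reduction concrete, here is a minimal sketch of an (unaccelerated) proximal point outer loop: each iteration approximately solves the regularized subproblem $\min_y f(y) + \frac{\lambda}{2}\|y - x_t\|^2$. The inner-solver choice, subproblem accuracy schedule, and acceleration, which are the actual content of Catalyst and RECAPP, are all elided; the off-the-shelf inner solver here is just a stand-in.

    import numpy as np
    from scipy.optimize import minimize

    def proximal_point(f, grad_f, x0, lam=1.0, iters=20):
        """Unaccelerated proximal point loop: reduces minimizing f to a
        sequence of regularized (hence strongly convex) subproblems."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            def prox_obj(y, xc=x):
                return f(y) + 0.5 * lam * np.sum((y - xc) ** 2)
            def prox_grad(y, xc=x):
                return grad_f(y) + lam * (y - xc)
            # approximate proximal step via an off-the-shelf inner solver
            x = minimize(prox_obj, x, jac=prox_grad, method="L-BFGS-B").x
        return x

    f = lambda x: np.log(np.sum(np.exp(x))) + 0.01 * np.sum(x ** 2)
    grad = lambda x: np.exp(x) / np.sum(np.exp(x)) + 0.02 * x
    x_star = proximal_point(f, grad, np.ones(5))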

Robust Regression Revisited: Acceleration and Improved Estimation Rates

no code implementations NeurIPS 2021 Arun Jambulapati, Jerry Li, Tselil Schramm, Kevin Tian

For the general case of smooth GLMs (e.g., logistic regression), we show that the robust gradient descent framework of Prasad et al.

Regression
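
For orientation, here is a toy version of robust gradient descent in the spirit of the framework the excerpt cites: aggregate per-sample gradients with a robust estimator so that a small corrupted fraction cannot drag the step arbitrarily. The coordinate-wise median below is a crude stand-in for the more careful aggregators used in the literature; the setup and names are mine, not the paper's.

    import numpy as np

    def robust_gd_linear_regression(X, y, steps=100, lr=0.1):
        """Gradient descent on squared loss, aggregating per-sample
        gradients with a coordinate-wise median instead of a mean,
        which limits the influence of a small corrupted fraction."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            residuals = X @ w - y                      # shape (n,)
            per_sample_grads = residuals[:, None] * X  # shape (n, d)
            w -= lr * np.median(per_sample_grads, axis=0)
        return w

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 5))
    y = X @ np.arange(1.0, 6.0) + 0.1 * rng.standard_normal(500)
    y[:25] += 50.0                                     # corrupt 5% of labels
    w_hat = robust_gd_linear_regression(X, y)          # close to (1, ..., 5)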

Stochastic Bias-Reduced Gradient Methods

no code implementations NeurIPS 2021 Hilal Asi, Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford

We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function.

Stochastic Optimization
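
The "low-bias, low-cost estimator of the minimizer" can be illustrated with a randomized-truncation (multilevel Monte Carlo) trick in the spirit of Blanchet and Glynn, which this line of work builds on; the paper's actual estimator and analysis are more careful, and everything below (objective, step size, names) is an illustrative assumption. Run SGD for a geometrically distributed number of steps and reweight the telescoping difference of coupled iterates, so the expectation matches a very long run at only logarithmic expected cost.

    import numpy as np

    def sgd_pair(x0, n_steps, rng):
        """SGD on f(x) = ||x - 1||^2 / 2 with noisy gradients and a 1/t
        step size; returns the coupled iterates (same noise path) after
        n_steps // 2 and n_steps steps, as the telescoping levels need."""
        x = np.asarray(x0, dtype=float).copy()
        mid = x.copy()
        for t in range(n_steps):
            g = (x - 1.0) + rng.standard_normal(x.shape)
            x = x - g / (t + 2)
            if t + 1 == n_steps // 2:
                mid = x.copy()
        return mid, x

    def low_bias_argmin(x0, rng, j_max=10):
        """MLMC-style randomized truncation: in expectation this matches
        an SGD run of ~2**(j_max + 1) steps at small expected cost."""
        j = min(rng.geometric(0.5) - 1, j_max)  # P(J = j) = 2**-(j + 1)
        p_j = 0.5 ** (j + 1)                    # (capping at j_max adds tiny bias)
        x_base, _ = sgd_pair(x0, 2, rng)        # base level: iterate after 1 step
        half, full = sgd_pair(x0, 2 ** (j + 1), rng)
        return x_base + (full - half) / p_j

    rng = np.random.default_rng(0)
    est = np.mean([low_bias_argmin(np.zeros(3), rng) for _ in range(300)], axis=0)
    print(est)   # averages to approximately the minimizer (1, 1, 1)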

Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss

no code implementations4 May 2021 Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford

We characterize the complexity of minimizing $\max_{i\in[N]} f_i(x)$ for convex, Lipschitz functions $f_1,\ldots, f_N$.
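
One classical route to this problem (not the paper's ball-oracle method, which the title alludes to) is softmax smoothing: replace $\max_{i} f_i(x)$ with the log-sum-exp surrogate, which is smooth and overestimates the max by at most $\mathrm{temp} \cdot \log N$, then run gradient descent. A minimal sketch under those assumptions:

    import numpy as np

    def softmax_max_min(fs, grads, x0, temp=0.01, steps=500, lr=0.05):
        """Minimize the smooth surrogate temp * log(sum_i exp(f_i(x) / temp))
        of max_i f_i(x) by gradient descent on the softmax-weighted gradients."""
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(steps):
            vals = np.array([f(x) for f in fs])
            w = np.exp((vals - vals.max()) / temp)   # numerically stable weights
            w /= w.sum()
            x -= lr * sum(wi * g(x) for wi, g in zip(w, grads))
        return x

    # three Lipschitz convex pieces; the min of the max is at x = 0.5
    fs = [lambda x: abs(x[0] - 1.0), lambda x: abs(x[0]), lambda x: 0.2 * x[0] ** 2]
    grads = [lambda x: np.sign(x - 1.0), lambda x: np.sign(x), lambda x: 0.4 * x]
    x_hat = softmax_max_min(fs, grads, np.array([3.0]))   # close to 0.5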

Fast and Near-Optimal Diagonal Preconditioning

no code implementations4 Aug 2020 Arun Jambulapati, Jerry Li, Christopher Musco, Aaron Sidford, Kevin Tian

In this paper, we revisit the decades-old problem of how to best improve $\mathbf{A}$'s condition number by left or right diagonal rescaling.
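
The problem statement in a few lines of numpy: choose a diagonal rescaling that shrinks $\kappa(\mathbf{A})$. The symmetric Jacobi heuristic below (scaling by the diagonal) is a common baseline, not the paper's method; the paper gives rescalings with near-optimality guarantees that this sketch does not attempt.

    import numpy as np

    def jacobi_rescale(A):
        """Symmetric diagonal rescaling D^{-1/2} A D^{-1/2} with
        D = diag(A): the classical Jacobi preconditioning baseline."""
        d = 1.0 / np.sqrt(np.diag(A))
        return A * np.outer(d, d)

    rng = np.random.default_rng(0)
    B = rng.standard_normal((50, 50))
    scales = 10.0 ** rng.uniform(-3, 3, size=50)
    A = (B @ B.T + 50 * np.eye(50)) * np.outer(scales, scales)  # badly scaled SPD

    print(np.linalg.cond(A))                  # huge condition number
    print(np.linalg.cond(jacobi_rescale(A)))  # dramatically smaller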

Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing

no code implementations NeurIPS 2020 Arun Jambulapati, Jerry Li, Kevin Tian

We develop two methods for the following fundamental statistical task: given an $\epsilon$-corrupted set of $n$ samples from a $d$-dimensional sub-Gaussian distribution, return an approximate top eigenvector of the covariance matrix.
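
For scale, the non-robust baseline for this task is just power iteration on the empirical second-moment matrix; the point of the paper is that an $\epsilon$-fraction of adversarial samples can ruin this baseline, which the toy corruption below demonstrates. The robust estimators themselves are not sketched here, and the specific numbers are illustrative.

    import numpy as np

    def top_eigvec(samples, iters=100, seed=0):
        """Power iteration on the empirical second-moment matrix:
        the standard (non-robust) estimator of the top eigenvector."""
        rng = np.random.default_rng(seed)
        cov = samples.T @ samples / len(samples)
        v = rng.standard_normal(cov.shape[0])
        for _ in range(iters):
            v = cov @ v
            v /= np.linalg.norm(v)
        return v

    rng = np.random.default_rng(1)
    d, n, eps = 20, 2000, 0.05
    spike = np.eye(d)[0]                      # true top direction e_1
    X = rng.standard_normal((n, d)) + 2.0 * rng.standard_normal((n, 1)) * spike
    X[: int(eps * n)] = 50.0 * np.eye(d)[1]   # eps-corruption along e_2
    v = top_eigvec(X)
    print(abs(v @ spike))                     # near 0: the baseline is fooled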

A Direct $\tilde{O}(1/\epsilon)$ Iteration Parallel Algorithm for Optimal Transport

no code implementations NeurIPS 2019 Arun Jambulapati, Aaron Sidford, Kevin Tian

Optimal transportation, or computing the Wasserstein or "earth mover's" distance between two $n$-dimensional distributions, is a fundamental primitive which arises in many learning and statistical settings.

A Direct $\tilde{O}(1/ε)$ Iteration Parallel Algorithm for Optimal Transport

no code implementations3 Jun 2019 Arun Jambulapati, Aaron Sidford, Kevin Tian

Optimal transportation, or computing the Wasserstein or "earth mover's" distance between two distributions, is a fundamental primitive which arises in many learning and statistical settings.
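
For context on the problem the two listings above attack: the textbook approximate solver is Sinkhorn iteration on the entropy-regularized objective, sketched below on a 1-D example of my construction. The paper's contribution is a parallel algorithm whose iteration count scales as $\tilde{O}(1/\epsilon)$ rather than the $1/\epsilon^2$-type behavior of Sinkhorn-style baselines, which this sketch does not reproduce.

    import numpy as np

    def sinkhorn(mu, nu, C, reg=0.05, iters=500):
        """Entropy-regularized optimal transport: alternately rescale the
        rows and columns of K = exp(-C / reg) to match marginals mu, nu."""
        K = np.exp(-C / reg)
        u = np.ones_like(mu)
        for _ in range(iters):
            v = nu / (K.T @ u)
            u = mu / (K @ v)
        P = u[:, None] * K * v[None, :]      # approximate transport plan
        return np.sum(P * C)                 # approximate OT cost

    n = 100
    xs = np.linspace(0, 1, n)
    C = np.abs(xs[:, None] - xs[None, :])    # 1-D ground cost |x - y|
    mu = np.exp(-((xs - 0.3) ** 2) / 0.01); mu /= mu.sum()
    nu = np.exp(-((xs - 0.7) ** 2) / 0.01); nu /= nu.sum()
    print(sinkhorn(mu, nu, C))               # close to the true cost of 0.4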
