no code implementations • 6 Mar 2024 • Arun Jambulapati, Syamantak Kumar, Jerry Li, Shourya Pandey, Ankit Pensia, Kevin Tian
The $k$-principal component analysis ($k$-PCA) problem is a fundamental algorithmic primitive that is widely used in data analysis and dimensionality reduction applications.
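As an illustrative sketch of the $k$-PCA primitive described above (not the paper's algorithm, which targets more refined approximation guarantees), the top-$k$ principal subspace of a centered data matrix can be read off from its SVD:

```python
import numpy as np

def k_pca(X, k):
    # Center the data, then take the top-k right singular vectors;
    # their span is the top-k principal subspace.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
V = k_pca(X, 2)  # 2 orthonormal rows of length 5
```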
no code implementations • 17 Nov 2023 • Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
For $n>d$ and $\epsilon=1/\sqrt{n}$ this improves over all existing first-order methods.
no code implementations • 14 Mar 2023 • Arun Jambulapati, Hilaf Hasson, Youngsuk Park, Yuyang Wang
Determining causal relationships between high-dimensional observations is among the most important tasks in scientific discovery.
no code implementations • 1 Jan 2023 • Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian
We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator.
1 code implementation • 17 Jun 2022 • Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
The accelerated proximal point algorithm (APPA), also known as "Catalyst", is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization).
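A minimal (non-accelerated) proximal point sketch of the reduction described above, assuming each regularized subproblem is solved approximately by inner gradient steps; APPA/Catalyst itself adds acceleration on top of this outer loop:

```python
import numpy as np

def prox_point(grad_f, x0, lam=1.0, outer=50, inner=100, lr=0.1):
    # Each outer step approximately minimizes the regularized objective
    # f(y) + (lam/2)||y - x_t||^2 via a few inner gradient steps.
    x = x0.copy()
    for _ in range(outer):
        y = x.copy()
        for _ in range(inner):
            y -= lr * (grad_f(y) + lam * (y - x))
        x = y
    return x

# Sanity check on a quadratic: f(x) = 0.5 ||x - b||^2, minimizer b.
b = np.array([3.0, -2.0])
x = prox_point(lambda z: z - b, np.zeros(2))
```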
no code implementations • NeurIPS 2021 • Arun Jambulapati, Jerry Li, Tselil Schramm, Kevin Tian
For the general case of smooth GLMs (e.g., logistic regression), we show that the robust gradient descent framework of Prasad et al.
no code implementations • NeurIPS 2021 • Hilal Asi, Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function.
no code implementations • 4 May 2021 • Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
We characterize the complexity of minimizing $\max_{i\in[N]} f_i(x)$ for convex, Lipschitz functions $f_1,\ldots, f_N$.
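A textbook subgradient-method sketch for the min-max objective above (not the paper's optimal method, whose point is a sharper complexity characterization): at each step, follow a subgradient of a currently maximizing $f_i$.

```python
import numpy as np

def minimize_max(fs, grads, x0, steps=4000):
    # Subgradient method on F(x) = max_i f_i(x): step along a
    # subgradient of an active (maximizing) f_i, step size ~ 1/sqrt(t).
    x = x0
    for t in range(steps):
        i = int(np.argmax([f(x) for f in fs]))
        x = x - grads[i](x) / np.sqrt(t + 1)
    return x

# max(|x+1|, |x-1|) is convex, Lipschitz, and minimized at x = 0.
fs = [lambda x: abs(x + 1), lambda x: abs(x - 1)]
grads = [lambda x: np.sign(x + 1), lambda x: np.sign(x - 1)]
x = minimize_max(fs, grads, 2.0)
```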
no code implementations • 4 Aug 2020 • Arun Jambulapati, Jerry Li, Christopher Musco, Aaron Sidford, Kevin Tian
In this paper, we revisit the decades-old problem of how to best improve $\mathbf{A}$'s condition number by left or right diagonal rescaling.
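To make the rescaling problem above concrete, here is the classical symmetric Jacobi heuristic $D^{-1/2} A D^{-1/2}$ with $D = \mathrm{diag}(A)$; this is a simple baseline, not the (near-)optimal rescaling studied in the paper:

```python
import numpy as np

def jacobi_rescale(A):
    # Symmetric diagonal rescaling D^{-1/2} A D^{-1/2}, D = diag(A);
    # puts ones on the diagonal and often shrinks the condition number.
    d = 1.0 / np.sqrt(np.diag(A))
    return A * np.outer(d, d)

A = np.array([[100.0, 1.0],
              [1.0,   1.0]])  # condition number ~ 101
B = jacobi_rescale(A)         # condition number ~ 1.2
```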
no code implementations • NeurIPS 2020 • Arun Jambulapati, Jerry Li, Kevin Tian
We develop two methods for the following fundamental statistical task: given an $\epsilon$-corrupted set of $n$ samples from a $d$-dimensional sub-Gaussian distribution, return an approximate top eigenvector of the covariance matrix.
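For contrast with the robust task above, here is the standard non-robust baseline: power iteration on the empirical covariance. The paper's methods differ precisely in that they must also handle the $\epsilon$-fraction of corrupted samples, which this sketch ignores:

```python
import numpy as np

def top_eigvec(X, iters=200, seed=0):
    # Power iteration on the empirical covariance (non-robust:
    # corrupted samples enter the covariance unfiltered).
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(X)
    v = np.random.default_rng(seed).normal(size=S.shape[0])
    for _ in range(iters):
        v = S @ v
        v /= np.linalg.norm(v)
    return v

# Data with a dominant variance direction along the first coordinate.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4)) * np.array([3.0, 1.0, 1.0, 1.0])
v = top_eigvec(X)
```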
no code implementations • NeurIPS 2019 • Arun Jambulapati, Aaron Sidford, Kevin Tian
Optimal transportation, or computing the Wasserstein or "earth mover's" distance between two $n$-dimensional distributions, is a fundamental primitive which arises in many learning and statistical settings.
no code implementations • 3 Jun 2019 • Arun Jambulapati, Aaron Sidford, Kevin Tian
Optimal transportation, or computing the Wasserstein or "earth mover's" distance between two distributions, is a fundamental primitive which arises in many learning and statistical settings.
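As a concrete illustration of the optimal transport primitive in the entries above, here is the textbook Sinkhorn iteration for entropy-regularized transport; this baseline is not the paper's near-linear-time solver, and the regularization strength and iteration count are illustrative choices:

```python
import numpy as np

def sinkhorn(mu, nu, C, reg=0.5, iters=5000):
    # Alternately rescale rows and columns of the Gibbs kernel
    # K = exp(-C/reg) until the plan's marginals match mu and nu.
    K = np.exp(-C / reg)
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(iters):
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan

# Uniform marginals on 4 points of a line, with |x - y| cost.
x = np.arange(4.0)
C = np.abs(x[:, None] - x[None, :])
mu = np.full(4, 0.25)
P = sinkhorn(mu, mu, C)
```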