Search Results for author: Yujia Jin

Found 14 papers, 1 paper with code

Moments, Random Walks, and Limits for Spectrum Approximation

no code implementations • 2 Jul 2023 • Yujia Jin, Christopher Musco, Aaron Sidford, Apoorv Vikram Singh

We study lower bounds for the problem of approximating a one dimensional distribution given (noisy) measurements of its moments.
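
As background, here is a minimal sketch of the forward problem (recovering a discrete distribution from noisy moments), not the paper's lower-bound construction: solve a nonnegative least-squares system on a fixed grid. The grid size, moment count, and noise level are illustrative.

```python
# Illustrative sketch: approximate a distribution on [-1, 1] from its
# first k (noisy) moments by nonnegative least squares on a grid.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
k, m = 12, 200                      # number of moments, grid size
grid = np.linspace(-1.0, 1.0, m)

# Ground truth: two atoms (think of a two-point eigenvalue spectrum).
true_support = np.array([-0.5, 0.7])
true_weights = np.array([0.4, 0.6])
moments = np.array([(true_weights * true_support**j).sum() for j in range(k)])
moments += 1e-4 * rng.standard_normal(k)    # noisy measurements

# Vandermonde system: V @ p ~ moments with p >= 0.
V = np.vstack([grid**j for j in range(k)])
p, _ = nnls(V, moments)
p /= p.sum()                        # renormalize to a probability vector
print("mass near true atoms:", p[np.abs(grid + 0.5) < 0.05].sum(),
      p[np.abs(grid - 0.7) < 0.05].sum())
```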

ReSQueing Parallel and Private Stochastic Convex Optimization

no code implementations • 1 Jan 2023 • Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian

We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator.
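
To make the accounting concrete, here is a hedged sketch of how "query depth" and "total queries" are typically counted for a batched parallel method; the quadratic objective, batch size, and step size are illustrative and this is not the paper's algorithm.

```python
# Illustrative depth/total-queries accounting: one round of B parallel
# stochastic-gradient queries adds 1 to depth and B to the total count.
import numpy as np

rng = np.random.default_rng(1)
d, sigma = 10, 0.5
x_star = rng.standard_normal(d)

def stochastic_grad(x, batch):
    # Bounded-variance estimator of grad f(x) = x - x_star.
    noise = sigma * rng.standard_normal((batch, d)).mean(axis=0)
    return (x - x_star) + noise

x, eta, rounds, batch = np.zeros(d), 0.2, 200, 16
for _ in range(rounds):             # depth = number of sequential rounds
    x -= eta * stochastic_grad(x, batch)
total_queries = rounds * batch
print("depth:", rounds, "total queries:", total_queries,
      "error:", np.linalg.norm(x - x_star))
```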

VO$Q$L: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation

no code implementations • 12 Dec 2022 • Alekh Agarwal, Yujia Jin, Tong Zhang

We study time-inhomogeneous episodic reinforcement learning (RL) under general function approximation and sparse rewards.

Q-Learning • regression • +1
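
As an illustrative companion (not VO$Q$L itself), here is a minimal least-squares value iteration sketch for time-inhomogeneous episodic RL with linear function approximation and a sparse reward; every size and constant below is made up.

```python
# Illustrative sketch: backward least-squares value iteration on a toy
# episodic MDP, fitting Q_h(s, a) ~ phi(s, a) @ w_h to Bellman targets.
import numpy as np

rng = np.random.default_rng(2)
S, A, H, d = 6, 3, 5, 4
phi = rng.standard_normal((S, A, d))            # state-action features
P = rng.dirichlet(np.ones(S), size=(H, S, A))   # P[h, s, a] -> next state
r = np.zeros((H, S, A)); r[H - 1, 0, 0] = 1.0   # sparse reward

V = np.zeros((H + 1, S))
for h in reversed(range(H)):
    target = r[h] + P[h] @ V[h + 1]             # (S, A) Bellman targets
    X = phi.reshape(S * A, d)
    w, *_ = np.linalg.lstsq(X, target.reshape(-1), rcond=None)
    Q = (X @ w).reshape(S, A)
    V[h] = Q.max(axis=1)
print("estimated value of start state:", V[0, 0])
```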

RECAPP: Crafting a More Efficient Catalyst for Convex Optimization

1 code implementation • 17 Jun 2022 • Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford

The accelerated proximal point algorithm (APPA), also known as "Catalyst", is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization).
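
Here is a minimal sketch of the Catalyst/APPA pattern the abstract describes, assuming a least-squares $f$ and an inner loop of plain gradient steps (RECAPP's refinements are not reproduced): repeatedly minimize $f(x) + \frac{\lambda}{2}\|x - y_k\|^2$ approximately, with Nesterov-style momentum on the prox centers.

```python
# Catalyst-style sketch: accelerate by solving regularized subproblems.
import numpy as np

rng = np.random.default_rng(3)
d, lam = 20, 1.0
A = rng.standard_normal((50, d)); b = rng.standard_normal(50)
grad_f = lambda x: A.T @ (A @ x - b)            # f(x) = 0.5 * ||Ax - b||^2

def approx_prox(y, inner_steps=30):
    # Approximately minimize f(x) + (lam/2)||x - y||^2 by gradient descent.
    eta = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
    x = y.copy()
    for _ in range(inner_steps):
        x -= eta * (grad_f(x) + lam * (x - y))
    return x

x = np.zeros(d); y = x.copy(); t = 1.0
for _ in range(50):
    x_new = approx_prox(y)
    t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))  # Nesterov momentum weights
    y = x_new + ((t - 1) / t_new) * (x_new - x) # extrapolated prox center
    x, t = x_new, t_new
print("gradient residual:", np.linalg.norm(grad_f(x)))
```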

Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods

no code implementations • 9 Feb 2022 • Yujia Jin, Aaron Sidford, Kevin Tian

We generalize our algorithms for minimax and finite sum optimization to solve a natural family of minimax finite sum optimization problems at an accelerated rate, encapsulating both above results up to a logarithmic factor.
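
For context, here is a sketch of the basic full-gradient extragradient step on a bilinear saddle point $\min_x \max_y x^\top A y$; the paper's contribution is sharper randomized primal-dual variants, which are not reproduced here.

```python
# Extragradient sketch: a lookahead half step, then a full step using
# the gradient evaluated at the lookahead point.
import numpy as np

rng = np.random.default_rng(4)
n = 30
A = rng.standard_normal((n, n))
x, y = np.zeros(n), np.zeros(n)
eta = 0.5 / np.linalg.norm(A, 2)

for _ in range(2000):
    x_half, y_half = x - eta * (A @ y), y + eta * (A.T @ x)
    x, y = x - eta * (A @ y_half), y + eta * (A.T @ x_half)
print("stationarity proxy:", np.linalg.norm(A @ y) + np.linalg.norm(A.T @ x))
```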

A Unified Framework for Multi-distribution Density Ratio Estimation

no code implementations • 7 Dec 2021 • Lantao Yu, Yujia Jin, Stefano Ermon

Binary density ratio estimation (DRE), the problem of estimating the ratio $p_1/p_2$ given their empirical samples, provides the foundation for many state-of-the-art machine learning algorithms such as contrastive representation learning and covariate shift adaptation.

Density Ratio Estimation • Representation Learning
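
A minimal sketch of the classic classifier-based route to binary DRE that this line of work builds on (not the paper's multi-distribution framework): fit a logistic classifier to distinguish samples from $p_1$ and $p_2$, then read the ratio off its predicted odds. The 1D Gaussian example is illustrative.

```python
# Classifier-based binary DRE: with balanced samples, the ratio
# p1(x)/p2(x) is approximated by P(y=1|x) / P(y=0|x).
import numpy as np

rng = np.random.default_rng(5)
n = 2000
x1 = rng.normal(0.0, 1.0, n)        # samples from p1 = N(0, 1)
x2 = rng.normal(1.0, 1.0, n)        # samples from p2 = N(1, 1)
X = np.stack([np.concatenate([x1, x2]), np.ones(2 * n)], axis=1)
y = np.concatenate([np.ones(n), np.zeros(n)])

w = np.zeros(2)                     # logistic regression by gradient descent
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def ratio(x):                       # estimated p1(x) / p2(x)
    s = 1.0 / (1.0 + np.exp(-(w[0] * x + w[1])))
    return s / (1.0 - s)

print("ratio at 0 (true ~ 1.65):", ratio(0.0))
```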

Stochastic Bias-Reduced Gradient Methods

no code implementations • NeurIPS 2021 • Hilal Asi, Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford

We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function.

Stochastic Optimization
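
Here is a hedged sketch of multilevel Monte Carlo (MLMC) debiasing, the kind of randomized-truncation primitive such low-bias estimators rest on, assuming plain SGD as the inner solver: run SGD for a random number of steps and reweight the level difference so the truncation bias cancels in expectation. All constants are illustrative.

```python
# MLMC debiasing sketch: E[estimate] equals the SGD output at the
# deepest level, even though most draws only run a few steps.
import numpy as np

rng = np.random.default_rng(6)
d = 5
x_star = rng.standard_normal(d)

def sgd(num_steps, seed):
    r = np.random.default_rng(seed)
    x = np.zeros(d)
    for t in range(num_steps):
        g = (x - x_star) + 0.3 * r.standard_normal(d)  # stochastic gradient
        x -= g / (t + 2)                               # decaying stepsize
    return x

def mlmc_estimate(seed, max_level=10):
    # Draw J with P(J=j) proportional to 2^{-j}; return
    # x_1 + (x_{2^J} - x_{2^{J-1}}) / p_J, which telescopes in expectation.
    p = 0.5 ** np.arange(1, max_level + 1); p /= p.sum()
    j = rng.choice(max_level, p=p) + 1
    coarse = sgd(2 ** (j - 1), seed)    # shared seed couples the levels
    fine = sgd(2 ** j, seed)
    return sgd(1, seed) + (fine - coarse) / p[j - 1]

est = np.mean([mlmc_estimate(seed=s) for s in range(200)], axis=0)
print("distance to x_star:", np.linalg.norm(est - x_star))
```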

Towards Tight Bounds on the Sample Complexity of Average-reward MDPs

no code implementations • 13 Jun 2021 • Yujia Jin, Aaron Sidford

We prove new upper and lower bounds for sample complexity of finding an $\epsilon$-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model.
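
For intuition, here is a sketch of the generative-model "plug-in" approach that such sample complexity bounds are usually stated against: draw $N$ next-state samples per state-action pair, form the empirical MDP, and solve it. For simplicity this solves a discounted proxy rather than the average-reward objective analyzed in the paper.

```python
# Plug-in sketch: estimate transitions from a generative model, then
# run value iteration on the empirical MDP.
import numpy as np

rng = np.random.default_rng(7)
S, A, N, gamma = 5, 3, 500, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))      # true transition kernel
r = rng.random((S, A))

P_hat = np.zeros_like(P)                        # empirical kernel
for s in range(S):
    for a in range(A):
        samples = rng.choice(S, size=N, p=P[s, a])
        P_hat[s, a] = np.bincount(samples, minlength=S) / N

V = np.zeros(S)
for _ in range(500):                            # value iteration
    V = (r + gamma * (P_hat @ V)).max(axis=1)
policy = (r + gamma * (P_hat @ V)).argmax(axis=1)
print("greedy policy from empirical MDP:", policy)
```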

Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss

no code implementations • 4 May 2021 • Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford

We characterize the complexity of minimizing $\max_{i\in[N]} f_i(x)$ for convex, Lipschitz functions $f_1,\ldots, f_N$.
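
A much weaker but instructive baseline (not the paper's ball-oracle method): smooth the max with a log-sum-exp, which overestimates $\max_i f_i$ by at most $\log(N)/t$, and run gradient descent on the surrogate. The linear $f_i$ and all constants are illustrative.

```python
# Softmax-smoothing sketch for min_x max_i f_i(x) with f_i(x) = a_i.x + b_i.
import numpy as np

rng = np.random.default_rng(8)
N, d, t = 100, 10, 50.0
A = rng.standard_normal((N, d)); b = rng.standard_normal(N)

def softmax_grad(x):
    z = t * (A @ x + b)
    z -= z.max()                     # stabilize the exponentials
    w = np.exp(z); w /= w.sum()      # weights concentrate on near-max f_i
    return A.T @ w                   # gradient of (1/t) * logsumexp(t * f)

x = np.zeros(d)
for _ in range(3000):
    x -= 0.01 * softmax_grad(x)
print("max_i f_i(x):", (A @ x + b).max())
```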

Coordinate Methods for Matrix Games

no code implementations • 17 Sep 2020 • Yair Carmon, Yujia Jin, Aaron Sidford, Kevin Tian

For linear regression with an elementwise nonnegative matrix, our guarantees improve on exact gradient methods by a factor of $\sqrt{\mathrm{nnz}(A)/(m+n)}$.

regression
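
In the spirit of coordinate methods (though not the paper's algorithm), here is a sketch where each player runs multiplicative weights from a single sampled row or column of $A$ per iteration, so a step touches $O(m+n)$ entries rather than all of $\mathrm{nnz}(A)$.

```python
# Sampled matrix-game sketch: sample coordinates from the current
# mixed strategies and update multiplicatively from one row/column.
import numpy as np

rng = np.random.default_rng(9)
m, n, T, eta = 40, 60, 20000, 0.02
A = rng.random((m, n))

x = np.ones(n) / n; y = np.ones(m) / m
x_avg = np.zeros(n); y_avg = np.zeros(m)
for _ in range(T):
    i = rng.choice(n, p=x)           # column sampled from the x-player
    j = rng.choice(m, p=y)           # row sampled from the y-player
    y *= np.exp(eta * A[:, i]); y /= y.sum()    # maximizer ascends
    x *= np.exp(-eta * A[j, :]); x /= x.sum()   # minimizer descends
    x_avg += x / T; y_avg += y / T

gap = (A @ x_avg).max() - (y_avg @ A).min()
print("duality gap of averaged iterates:", gap)
```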

Efficiently Solving MDPs with Stochastic Mirror Descent

no code implementations • ICML 2020 • Yujia Jin, Aaron Sidford

We present a unified framework based on primal-dual stochastic mirror descent for approximately solving infinite-horizon Markov decision processes (MDPs) given a generative model.
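
The core primitive named in the abstract is stochastic mirror descent; below is a minimal sketch of its entropic (simplex) instantiation on a toy linear objective, with Gaussian gradient noise standing in for the sampled MDP gradients used in the paper.

```python
# Entropic stochastic mirror descent on the simplex: the mirror step
# with negative-entropy regularizer is a multiplicative-weights update.
import numpy as np

rng = np.random.default_rng(10)
n, T, eta = 50, 5000, 0.05
c = rng.random(n)                    # minimize <c, mu> over the simplex

mu = np.ones(n) / n; mu_avg = np.zeros(n)
for _ in range(T):
    g = c + 0.5 * rng.standard_normal(n)   # unbiased noisy gradient
    mu *= np.exp(-eta * g)                 # entropic mirror step
    mu /= mu.sum()
    mu_avg += mu / T
print("suboptimality:", c @ mu_avg - c.min())
```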

Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG

no code implementations • NeurIPS 2019 • Yujia Jin, Aaron Sidford

Given a data matrix $\mathbf{A} \in \mathbb{R}^{n \times d}$, principal component projection (PCP) and principal component regression (PCR), i.e. projection and regression restricted to the top-eigenspace of $\mathbf{A}$, are fundamental problems in machine learning, optimization, and numerical analysis.

regression
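
As a reference point, here is the exact-SVD baseline that defines PCP and PCR (precisely the cost the paper's nearly linear time method avoids): keep the right singular directions of $\mathbf{A}$ with singular value above a threshold, project onto their span, and regress within it. The threshold and sizes are illustrative.

```python
# PCP/PCR reference definitions via a full SVD.
import numpy as np

rng = np.random.default_rng(11)
n, d, thresh = 100, 20, 10.0
A = rng.standard_normal((n, d)); b = rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
V_top = Vt[s > thresh].T             # top right singular vectors of A

def pcp(v):                          # principal component projection
    return V_top @ (V_top.T @ v)

# PCR: least squares restricted to the top eigenspace of A^T A.
x_pcr = V_top @ np.linalg.lstsq(A @ V_top, b, rcond=None)[0]
print("PCR residual:", np.linalg.norm(A @ x_pcr - b))
```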

Variance Reduction for Matrix Games

no code implementations • NeurIPS 2019 • Yair Carmon, Yujia Jin, Aaron Sidford, Kevin Tian

We present a randomized primal-dual algorithm that solves the problem $\min_{x} \max_{y} y^\top A x$ to additive error $\epsilon$ in time $\mathrm{nnz}(A) + \sqrt{\mathrm{nnz}(A)n}/\epsilon$, for matrix $A$ with larger dimension $n$ and $\mathrm{nnz}(A)$ nonzero entries.
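
Here is a sketch of the variance-reduced (SVRG-style) gradient estimator at the heart of such methods, not the paper's full algorithm: around a snapshot $(x_0, y_0)$ with cached $Ax_0$ and $A^\top y_0$, one sampled row and column of $A$ give unbiased gradient estimates whose variance shrinks near the snapshot.

```python
# Variance-reduced bilinear gradient estimator: cached snapshot
# gradients plus a single-row/column stochastic correction.
import numpy as np

rng = np.random.default_rng(12)
m, n = 40, 60
A = rng.standard_normal((m, n))
x0, y0 = rng.random(n), rng.random(m)
Ax0, ATy0 = A @ x0, A.T @ y0                    # full passes, done rarely

def vr_grads(x, y):
    j = rng.integers(m)                          # uniform row sample
    i = rng.integers(n)                          # uniform column sample
    gx = ATy0 + m * A[j, :] * (y[j] - y0[j])     # unbiased for A^T y
    gy = Ax0 + n * A[:, i] * (x[i] - x0[i])      # unbiased for A x
    return gx, gy

# Unbiasedness check: averaged estimates approach the exact gradient.
x, y = rng.random(n), rng.random(m)
gx = np.mean([vr_grads(x, y)[0] for _ in range(20000)], axis=0)
print("estimator error:", np.linalg.norm(gx - A.T @ y))
```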
