Search Results for author: Ethan X. Fang

Found 11 papers, 0 papers with code

Lagrangian Inference for Ranking Problems

no code implementations • 1 Oct 2021 • Yue Liu, Ethan X. Fang, Junwei Lu

Our proposed method aims to infer general ranking properties of the Bradley-Terry-Luce (BTL) model.
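
For context, a minimal statement of the BTL comparison model (this is the standard textbook form, not this paper's inference procedure):

```latex
% BTL model: item i beats item j with probability driven by latent scores
\[
  \mathbb{P}(i \succ j) \;=\; \frac{e^{\theta_i}}{e^{\theta_i} + e^{\theta_j}},
\]
% so a ranking property such as "item i is ranked above item j"
% corresponds to the hypothesis \theta_i > \theta_j.
```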

Implicit Regularization of Bregman Proximal Point Algorithm and Mirror Descent on Separable Data

no code implementations • 15 Aug 2021 • Yan Li, Caleb Ju, Ethan X. Fang, Tuo Zhao

We show that BPPA attains a non-trivial margin, which depends closely on the condition number of the distance-generating function inducing the Bregman divergence.
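
As a reference point, the generic Bregman proximal point update reads as follows (a standard formulation; the loss $f$, step sizes $\eta_k$, and distance-generating function $h$ are placeholders):

```latex
\[
  x_{k+1} \;=\; \arg\min_{x} \Big\{ f(x) + \tfrac{1}{\eta_k}\, D_h(x, x_k) \Big\},
  \qquad
  D_h(x, y) \;=\; h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle.
\]
```

The condition number referenced in the snippet is that of $h$, the function inducing the Bregman divergence.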

Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds Globally Optimal Policy

no code implementations • 28 Dec 2020 • Han Zhong, Ethan X. Fang, Zhuoran Yang, Zhaoran Wang

In particular, we focus on a variance-constrained policy optimization problem where the goal is to find a policy that maximizes the expected value of the long-run average reward, subject to a constraint that the long-run variance of the average reward is upper bounded by a threshold.
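
In symbols, the problem described above is (notation illustrative; $\lambda$ is the variance threshold):

```latex
% J(pi): expected long-run average reward under policy pi
% Var(pi): long-run variance of the average reward
\[
  \max_{\pi} \; J(\pi)
  \quad \text{subject to} \quad
  \operatorname{Var}(\pi) \le \lambda.
\]
```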

Implicit Bias of Gradient Descent based Adversarial Training on Separable Data

no code implementations • ICLR 2020 • Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao

Specifically, we show that for any fixed iteration $T$, when the adversarial perturbation during training has a properly bounded $\ell_2$-norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum $\ell_2$-norm margin classifier at the rate of $O(1/\sqrt{T})$, significantly faster than the rate $O(1/\log T)$ of training with clean data.

Inductive Bias of Gradient Descent based Adversarial Training on Separable Data

no code implementations • 7 Jun 2019 • Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao

Specifically, we show that when the adversarial perturbation during training has bounded $\ell_2$-norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum $\ell_2$-norm margin classifier at the rate of $\tilde{\mathcal{O}}(1/\sqrt{T})$, significantly faster than the rate $\mathcal{O}(1/\log T)$ of training with clean data.
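
To make the setting of these two papers concrete, here is a minimal sketch of $\ell_2$-bounded adversarial training for a linear classifier on separable data; for a linear model with a margin-decreasing loss, the inner maximization has a closed form (the logistic loss and all hyperparameters below are illustrative choices, not taken from the papers):

```python
# Sketch: l2 adversarial training of a linear classifier. The worst-case
# l2 perturbation of size eps reduces the margin y*<w, x> by eps*||w||_2.
import numpy as np

def adv_train(X, y, eps=0.1, lr=0.1, T=10_000):
    """X: (n, d) features; y: (n,) labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(T):
        margins = y * (X @ w) - eps * np.linalg.norm(w)  # worst-case l2 margins
        p = 1.0 / (1.0 + np.exp(margins))  # = -dloss/dmargin for logistic loss
        w_dir = w / (np.linalg.norm(w) + 1e-12)
        # gradient of the adversarial logistic loss w.r.t. w
        grad = -(X * (p * y)[:, None]).mean(axis=0) + eps * p.mean() * w_dir
        w -= lr * grad
    return w / np.linalg.norm(w)  # direction of the learned classifier
```

The returned direction can be compared against the maximum $\ell_2$-norm margin (hard-margin SVM) direction to observe the convergence these papers quantify.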

Misspecified Nonconvex Statistical Optimization for Phase Retrieval

no code implementations • 18 Dec 2017 • Zhuoran Yang, Lin F. Yang, Ethan X. Fang, Tuo Zhao, Zhaoran Wang, Matey Neykov

Existing nonconvex statistical optimization theory and methods crucially rely on the correct specification of the underlying "true" statistical models.
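
For orientation, the correctly specified version of the problem is usually attacked by nonconvex least squares on intensity measurements $y_i \approx \langle a_i, x^* \rangle^2$; a bare-bones gradient descent sketch (the random initialization and constant step size are simplifications, not the paper's method):

```python
# Sketch of nonconvex least-squares phase retrieval (Wirtinger-flow style),
# the kind of procedure whose behavior under misspecification is analyzed.
import numpy as np

def phase_retrieval_gd(A, y, lr=0.01, T=2000, seed=0):
    """A: (m, d) sensing vectors; y: (m,) intensities y_i ~ <a_i, x>^2."""
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = rng.standard_normal(d) / np.sqrt(d)  # naive random init
    for _ in range(T):
        r = A @ x                                  # <a_i, x>
        grad = (2.0 / m) * A.T @ ((r**2 - y) * r)  # grad of (1/2m)*sum((r^2-y)^2)
        x -= lr * grad
    return x  # recovered up to a global sign
```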

Max-Norm Optimization for Robust Matrix Recovery

no code implementations • 24 Sep 2016 • Ethan X. Fang, Han Liu, Kim-Chuan Toh, Wen-Xin Zhou

This paper studies the matrix completion problem under arbitrary sampling schemes.
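
For reference, a generic max-norm constrained least-squares estimator takes the following form ($\Omega$ is the set of observed entries and $R$ a tuning radius; the exact estimator in the paper may differ):

```latex
\[
  \widehat{M} \;=\; \arg\min_{\|M\|_{\max} \le R}
  \sum_{(i,j) \in \Omega} \big( M_{ij} - Y_{ij} \big)^2,
  \qquad
  \|M\|_{\max} \;=\; \min_{M = U V^\top} \|U\|_{2,\infty}\, \|V\|_{2,\infty}.
\]
```

Unlike the nuclear norm, the max-norm admits guarantees that do not require uniform sampling, which is what makes it attractive under arbitrary sampling schemes.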

Matrix Completion

Accelerating Stochastic Composition Optimization

no code implementations • NeurIPS 2016 • Mengdi Wang, Ji Liu, Ethan X. Fang

ASC-PG (accelerated stochastic compositional proximal gradient) is the first proximal gradient method for the stochastic composition problem that can handle a nonsmooth regularization penalty.
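
The problem class in question can be written as follows ($R$ is the nonsmooth penalty, e.g. an $\ell_1$ regularizer; $f_v$ and $g_w$ are random functions):

```latex
\[
  \min_{x} \;
  \mathbb{E}_v \Big[ f_v \big( \mathbb{E}_w [\, g_w(x) \,] \big) \Big]
  \;+\; R(x).
\]
```

A proximal step with respect to $R$ replaces the plain gradient step in the compositional update sketched after the SCGD entry below.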

Testing and Confidence Intervals for High Dimensional Proportional Hazards Model

no code implementations • 16 Dec 2014 • Ethan X. Fang, Yang Ning, Han Liu

This paper proposes a decorrelation-based approach to test hypotheses and construct confidence intervals for the low dimensional component of high dimensional proportional hazards models.
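
A hedged sketch of the decorrelation idea, following the general decorrelated-score construction (notation is illustrative, with $\theta$ the low-dimensional parameter of interest and $\gamma$ the high-dimensional nuisance):

```latex
\[
  \widehat{S}(\theta) \;=\;
  \nabla_{\theta}\, \ell(\theta, \widehat{\gamma})
  \;-\; \widehat{w}^{\top} \nabla_{\gamma}\, \ell(\theta, \widehat{\gamma}),
\]
```

where $\ell$ is the log partial likelihood and $\widehat{w}$ is estimated so that $\widehat{S}$ is asymptotically uncorrelated with the nuisance score, giving the test statistic a tractable limiting distribution despite the high-dimensional $\gamma$.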

Model Selection

Stochastic Compositional Gradient Descent: Algorithms for Minimizing Compositions of Expected-Value Functions

no code implementations • 14 Nov 2014 • Mengdi Wang, Ethan X. Fang, Han Liu

For smooth convex problems, SCGD can be accelerated to converge at a rate of $O(k^{-2/7})$ in the general case and $O(k^{-4/5})$ in the strongly convex case.
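
A minimal sketch of the basic two-timescale SCGD update for $\min_x f(\mathbb{E}[g(x)])$; the oracle functions and step-size exponents below are illustrative placeholders, not the tuned choices that achieve the rates quoted above:

```python
# Sketch: two-timescale stochastic compositional gradient descent.
import numpy as np

def scgd(x0, sample_g, sample_g_jac, sample_f_grad, T=10_000):
    """sample_g(x): noisy evaluation of g(x); sample_g_jac(x): noisy
    Jacobian of g at x; sample_f_grad(z): noisy gradient of f at z."""
    x = np.asarray(x0, dtype=float)
    z = sample_g(x)  # running estimate of the inner expectation E[g(x)]
    for k in range(1, T + 1):
        alpha, beta = k ** -0.75, k ** -0.5   # two timescales: x moves slower than z
        z = (1 - beta) * z + beta * sample_g(x)           # track E[g(x)]
        x = x - alpha * sample_g_jac(x).T @ sample_f_grad(z)  # chain-rule step
    return x
```

The auxiliary iterate z is the key device: it lets the method avoid evaluating the inner expectation exactly while still following an (asymptotically unbiased) chain-rule direction.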
