no code implementations • 18 Sep 2023 • Ethan X. Fang, Yajun Mei, Yuyang Shi, Qunzhi Xu, Tuo Zhao

We consider the linear discriminant analysis problem in the high-dimensional setting.
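For context, the classical Fisher rule classifies a point $x$ by the sign of $(x - (\mu_1+\mu_2)/2)^\top \Sigma^{-1}(\mu_1-\mu_2)$; in high dimensions the sample covariance is singular, which is what motivates regularized or sparse variants. The sketch below uses a small ridge term purely as an illustrative stand-in, not the estimator proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two Gaussian classes with shared identity covariance.
p, n = 10, 200
mu1, mu2 = np.full(p, 0.8), np.full(p, -0.8)
X1 = rng.multivariate_normal(mu1, np.eye(p), n)
X2 = rng.multivariate_normal(mu2, np.eye(p), n)

m1, m2 = X1.mean(0), X2.mean(0)
# Pooled within-class covariance estimate.
pooled = np.cov(np.vstack([X1 - m1, X2 - m2]).T)
# Ridge term is a simple stand-in for a high-dimensional regularizer.
beta = np.linalg.solve(pooled + 0.01 * np.eye(p), m1 - m2)

def classify(x):
    # Fisher's rule: +1 -> class 1, -1 -> class 2.
    return np.sign((x - (m1 + m2) / 2) @ beta)

# Accuracy on fresh samples from class 1.
Xtest = rng.multivariate_normal(mu1, np.eye(p), 100)
acc = np.mean([classify(x) == 1 for x in Xtest])
print(acc)
```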

no code implementations • 8 Feb 2023 • Juncheng Dong, Weibin Mo, Zhengling Qi, Cong Shi, Ethan X. Fang, Vahid Tarokh

The objective is to use the offline dataset to find an optimal assortment.

no code implementations • 28 Jan 2023 • Shuting Shen, Xi Chen, Ethan X. Fang, Junwei Lu

Assortment optimization has been actively explored over the past few decades due to its practical importance.

no code implementations • 9 Sep 2022 • Shuoguang Yang, Zhe Zhang, Ethan X. Fang

Stochastic compositional optimization (SCO) has attracted considerable attention because of its broad applicability to important real-world problems.

no code implementations • 1 Oct 2021 • Yue Liu, Ethan X. Fang, Junwei Lu

Our proposed method aims to infer general ranking properties of the BTL model.

no code implementations • 15 Aug 2021 • Yan Li, Caleb Ju, Ethan X. Fang, Tuo Zhao

For any BPPA instantiated with a fixed Bregman divergence, we provide a lower bound of the margin obtained by BPPA with respect to an arbitrarily chosen norm.

no code implementations • 28 Dec 2020 • Han Zhong, Xun Deng, Ethan X. Fang, Zhuoran Yang, Zhaoran Wang, Runze Li

In particular, we focus on a variance-constrained policy optimization problem where the goal is to find a policy that maximizes the expected value of the long-run average reward, subject to a constraint that the long-run variance of the average reward is upper bounded by a threshold.

no code implementations • 4 Sep 2020 • Yining Wang, Yi Chen, Ethan X. Fang, Zhaoran Wang, Runze Li

We consider the stochastic contextual bandit problem under the high dimensional linear model.

no code implementations • ICLR 2020 • Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao

Specifically, we show that for any fixed iteration $T$, when the adversarial perturbation during training has proper bounded L2 norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum L2 norm margin classifier at the rate of $O(1/\sqrt{T})$, significantly faster than the rate $O(1/\log T)$ of training with clean data.

no code implementations • 7 Jun 2019 • Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao

Specifically, we show that when the adversarial perturbation during training has bounded $\ell_2$-norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum $\ell_2$-norm margin classifier at the rate of $\tilde{\mathcal{O}}(1/\sqrt{T})$, significantly faster than the rate $\mathcal{O}(1/\log T)$ of training with clean data.
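For a linear classifier, the adversarial logistic loss with an $\ell_2$-bounded perturbation has a closed form, since the worst-case perturbation reduces each margin $y_i \langle w, x_i \rangle$ by exactly $\varepsilon \|w\|_2$. The toy sketch below (synthetic 2D data; the step size and budget are illustrative choices, not the paper's experiments) trains such a classifier by gradient descent and checks that the learned direction separates the data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearly separable toy data in 2D.
X = np.vstack([rng.normal([2, 2], 0.3, (20, 2)),
               rng.normal([-2, -2], 0.3, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)

eps = 0.5        # l2 budget for the adversary
w = np.zeros(2)

def logistic_loss_grad(margins):
    # d/dm log(1 + exp(-m)) = -1 / (1 + exp(m))
    return -1.0 / (1.0 + np.exp(margins))

for t in range(2000):
    # Worst-case l2 perturbation shrinks each margin by eps * ||w||.
    norm_w = np.linalg.norm(w) + 1e-12
    margins = y * (X @ w) - eps * norm_w
    g = logistic_loss_grad(margins)
    # Gradient of the adversarial loss w.r.t. w.
    grad = (g[:, None] * (y[:, None] * X - eps * w / norm_w)).mean(axis=0)
    w -= 0.1 * grad

# Normalized (clean) margin of the learned direction.
margin = np.min(y * (X @ w)) / np.linalg.norm(w)
print(margin > 0)  # the learned direction separates the data
```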

no code implementations • 18 Dec 2017 • Zhuoran Yang, Lin F. Yang, Ethan X. Fang, Tuo Zhao, Zhaoran Wang, Matey Neykov

Existing nonconvex statistical optimization theory and methods crucially rely on the correct specification of the underlying "true" statistical models.

no code implementations • 24 Sep 2016 • Ethan X. Fang, Han Liu, Kim-Chuan Toh, Wen-Xin Zhou

This paper studies the matrix completion problem under arbitrary sampling schemes.

no code implementations • NeurIPS 2016 • Mengdi Wang, Ji Liu, Ethan X. Fang

The ASC-PG is the first proximal gradient method for the stochastic composition problem that can deal with nonsmooth regularization penalty.

no code implementations • 16 Dec 2014 • Ethan X. Fang, Yang Ning, Han Liu

This paper proposes a decorrelation-based approach to test hypotheses and construct confidence intervals for the low dimensional component of high dimensional proportional hazards models.

no code implementations • 14 Nov 2014 • Mengdi Wang, Ethan X. Fang, Han Liu

For smooth convex problems, the SCGD can be accelerated to converge at a rate of $O(k^{-2/7})$ in the general case and $O(k^{-4/5})$ in the strongly convex case.
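SCGD targets objectives of the form $f(\mathbb{E}[g(x;\xi)])$, where a single sample does not yield an unbiased gradient of the composition; it therefore tracks the inner expectation with an auxiliary variable updated on a faster timescale. A minimal sketch on a scalar toy problem (the problem and step-size schedules are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy composition: minimize f(E[g(x, xi)]) with
#   g(x, xi) = x + xi   (xi ~ N(0,1), so E[g] = x)
#   f(y)     = 0.5 * y**2
# The minimizer is x* = 0. Single samples give no unbiased gradient of
# the composition, so SCGD tracks the inner value with a variable y.

x, y = 5.0, 0.0
for k in range(1, 20001):
    alpha = 1.0 / k**0.75   # outer (slow) step size
    beta = 1.0 / k**0.5     # inner (fast) tracking step size
    xi = rng.standard_normal()
    # Running estimate of the inner expectation E[g(x, .)].
    y = (1 - beta) * y + beta * (x + xi)
    # Gradient step: dg/dx = 1 and f'(y) = y.
    x = x - alpha * 1.0 * y

print(abs(x))  # should be close to 0
```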
