no code implementations • ICML 2020 • Yuanyu Wan, Wei-Wei Tu, Lijun Zhang
To handle complicated constraints with light local computation in distributed online learning, a recent study presented a projection-free algorithm called distributed online conditional gradient (D-OCG), which achieves an $O(T^{3/4})$ regret bound, where $T$ is the number of prediction rounds.
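As a rough illustration of the projection-free idea behind D-OCG (not the distributed algorithm itself), a conditional-gradient update replaces the projection with a single linear optimization step. In the sketch below, `linear_opt_oracle` over an $\ell_2$ ball is a hypothetical stand-in for the LO oracle of the actual constraint set:

```python
import numpy as np

def linear_opt_oracle(grad, radius=1.0):
    """Hypothetical LO oracle over an L2 ball: argmin_{||v|| <= radius} <grad, v>."""
    norm = np.linalg.norm(grad)
    return -radius * grad / norm if norm > 0 else np.zeros_like(grad)

def conditional_gradient_step(x, grad, t):
    """One Frank-Wolfe style update: move toward the LO oracle's answer
    with a classic diminishing step size, no projection required."""
    v = linear_opt_oracle(grad)
    sigma = 2.0 / (t + 2)
    return (1 - sigma) * x + sigma * v
```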
1 code implementation • 8 Oct 2024 • Zi-Hao Zhou, Siyuan Fang, Zi-Jing Zhou, Tong Wei, Yuanyu Wan, Min-Ling Zhang
By progressively estimating the underlying label distribution and optimizing its alignment with model predictions, we tackle the diverse distribution of unlabeled data in real-world scenarios.
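A minimal sketch of one common way to realize this kind of progressive distribution alignment, assuming softmax outputs and an exponential-moving-average prior estimate (the paper's actual estimator may differ):

```python
import numpy as np

def update_prior(prior_est, probs, momentum=0.9):
    """EMA estimate of the class prior from a batch of softmax outputs (N, C)."""
    batch_prior = probs.mean(axis=0)
    return momentum * prior_est + (1 - momentum) * batch_prior

def align_predictions(probs, prior_est, eps=1e-8):
    """Reweight predictions toward the estimated prior and renormalize rows."""
    aligned = probs * (prior_est + eps) / (probs.mean(axis=0) + eps)
    return aligned / aligned.sum(axis=1, keepdims=True)
```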
no code implementations • 6 Jun 2024 • Wei Jiang, Sifan Yang, Wenhao Yang, Yibo Wang, Yuanyu Wan, Lijun Zhang
Existing projection-free algorithms for solving this problem suffer from two limitations: 1) they solely focus on the gradient mapping criterion and fail to match the optimal sample complexities in unconstrained settings; 2) their analysis is exclusively applicable to non-convex functions, without considering convex and strongly convex objectives.
no code implementations • 14 Feb 2024 • Yuanyu Wan, Tong Wei, Bo Xue, Mingli Song, Lijun Zhang
Our analysis reveals that the projection-free variant can achieve $O(nT^{3/4})$ and $O(nT^{2/3}(\log T)^{1/3})$ regret bounds for convex and strongly convex functions with nearly optimal $\tilde{O}(\rho^{-1/2}\sqrt{T})$ and $\tilde{O}(\rho^{-1/2}T^{1/3}(\log T)^{2/3})$ communication rounds, respectively.
no code implementations • 14 Feb 2024 • Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang
We investigate bandit convex optimization (BCO) with delayed feedback, where only the loss value of the action is revealed under an arbitrary delay.
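For intuition, BCO methods typically construct gradient estimates from the single revealed loss value. The classic one-point estimator below is a sketch of that standard technique, not necessarily the paper's construction; under delays, the perturbation is recorded at play time and the estimate is formed once the feedback arrives:

```python
import numpy as np

def sample_unit_sphere(dim, rng):
    """Uniform random direction on the unit sphere."""
    u = rng.standard_normal(dim)
    return u / np.linalg.norm(u)

def one_point_gradient(loss_value, u, delta, dim):
    """Classic one-point estimator: (dim/delta) * f(x + delta*u) * u is an
    unbiased gradient estimate of the delta-smoothed loss. With delayed
    feedback, (u, delta) is stored when the action is played and the
    estimate is computed only when loss_value is finally revealed."""
    return (dim / delta) * loss_value * u
```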
1 code implementation • 5 Aug 2023 • Yuwen Wang, Shunyu Liu, KaiXuan Chen, Tongtian Zhu, Ji Qiao, Mengjie Shi, Yuanyu Wan, Mingli Song
Graph Lottery Ticket (GLT), a combination of core subgraph and sparse subnetwork, has been proposed to mitigate the computational cost of deep Graph Neural Networks (GNNs) on large input graphs while preserving original performance.
no code implementations • 29 May 2023 • Yucheng Liao, Yuanyu Wan, Chang Yao, Mingli Song
We investigate the problem of online learning with monotone and continuous DR-submodular reward functions, which has received considerable attention recently.
no code implementations • 20 May 2023 • Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang
Despite its simplicity, our novel analysis shows that the dynamic regret of DOGD can be automatically bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ under mild assumptions, and $O(\sqrt{dT}(P_T+1))$ in the worst case, where $\bar{d}$ and $d$ denote the average and maximum delay respectively, $T$ is the time horizon, and $P_T$ is the path-length of comparators.
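A minimal sketch of delayed OGD under these assumptions; the `feedback_stream` callback, which delivers whichever gradients happen to arrive at round $t$, is hypothetical:

```python
import numpy as np

def dogd(project, x0, eta, feedback_stream, T):
    """Delayed online gradient descent: at each round, sum all gradients
    that have just arrived (possibly from much earlier rounds) and take a
    single projected descent step."""
    x = x0.copy()
    for t in range(T):
        grads = feedback_stream(t)  # list of delayed gradients received now
        if grads:
            x = project(x - eta * np.sum(grads, axis=0))
    return x
```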
no code implementations • 19 May 2023 • Yibo Wang, Wenhao Yang, Wei Jiang, Shiyin Lu, Bing Wang, Haihong Tang, Yuanyu Wan, Lijun Zhang
Specifically, we first provide a novel dynamic regret analysis for an existing projection-free method named $\text{BOGD}_\text{IP}$, and establish an $\mathcal{O}(T^{3/4}(1+P_T))$ dynamic regret bound, where $P_T$ denotes the path-length of the comparator sequence.
no code implementations • 11 Feb 2023 • Yuanyu Wan, Lijun Zhang, Mingli Song
In this way, we first show that the dynamic regret bound of OFW can be improved to $O(\sqrt{T(V_T+1)})$ for smooth functions.
no code implementations • 11 Apr 2022 • Yuanyu Wan, Yibo Wang, Chang Yao, Wei-Wei Tu, Lijun Zhang
Projection-free online learning, which eschews the projection operation via less expensive computations such as linear optimization (LO), has received much interest recently due to its efficiency in handling high-dimensional problems with complex constraints.
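A standard example of why LO can be much cheaper than projection is the trace-norm ball, where the LO oracle needs only the leading singular pair while a Euclidean projection requires a full SVD. A sketch, assuming a dense gradient matrix:

```python
import numpy as np
from scipy.sparse.linalg import svds

def lo_oracle_trace_ball(grad, tau=1.0):
    """LO over a trace-norm ball: argmin_{||V||_tr <= tau} <grad, V> is
    tau * u1 v1^T for the top singular pair of -grad, which svds computes
    without the full SVD a projection would need."""
    u, s, vt = svds(-grad, k=1)
    return tau * np.outer(u[:, 0], vt[0])
```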
1 code implementation • 12 Feb 2022 • Zhilin Zhao, Longbing Cao, Yuanyu Wan
MOOE learns static offline experts from offline intervals and maintains a dynamic online expert for the current online interval.
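A hedged sketch of how such a pool of experts might be combined, using a classic multiplicative-weights rule rather than MOOE's actual aggregation:

```python
import numpy as np

def aggregate(expert_preds, weights):
    """Combine expert predictions of shape (K, d) into one action."""
    return weights @ expert_preds

def update_weights(weights, expert_losses, eta=0.5):
    """Multiplicative-weights update: experts with smaller loss gain mass."""
    w = weights * np.exp(-eta * expert_losses)
    return w / w.sum()
```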
no code implementations • NeurIPS 2021 • Guanghui Wang, Yuanyu Wan, Tianbao Yang, Lijun Zhang
To control the switching cost, we introduce the problem of online convex optimization with continuous switching constraint, where the goal is to achieve a small regret given a budget on the \emph{overall} switching cost.
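Concretely, the overall switching cost of an action sequence is the summed movement between consecutive rounds, which the budget constrains:

```python
import numpy as np

def switching_cost(actions):
    """Overall switching cost: sum_t ||x_{t+1} - x_t|| for actions of
    shape (T, d)."""
    return np.linalg.norm(np.diff(actions, axis=0), axis=1).sum()

def within_budget(actions, budget):
    return switching_cost(actions) <= budget
```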
no code implementations • 21 Mar 2021 • Yuanyu Wan, Wei-Wei Tu, Lijun Zhang
Specifically, we first extend the delayed variant of OGD for strongly convex functions, and establish a better regret bound of $O(d\log T)$, where $d$ is the maximum delay.
no code implementations • 20 Mar 2021 • Yuanyu Wan, Guanghui Wang, Wei-Wei Tu, Lijun Zhang
In this paper, we propose an improved variant of D-OCG, namely D-BOCG, which can attain the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with fewer $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses.
no code implementations • 16 Oct 2020 • Yuanyu Wan, Lijun Zhang
In this paper, we study the special case of online learning over strongly convex sets, for which we first prove that OFW can enjoy a better regret bound of $O(T^{2/3})$ for general convex losses.
no code implementations • 8 Sep 2020 • Yuanyu Wan, Lijun Zhang
In this paper, we propose to reduce the time complexity by exploiting the sparsity of the input matrices.
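For instance, with SciPy's sparse formats a matrix product touches only nonzero entries, so its cost scales with the number of nonzeros rather than with the cubic dense cost (illustrative sizes and densities below):

```python
from scipy.sparse import random as sparse_random

# A dense n x n product costs O(n^3); with nnz nonzeros per factor the
# sparse product costs roughly O(nnz * n) instead.
n = 2000
A = sparse_random(n, n, density=0.01, format="csr", random_state=0)
B = sparse_random(n, n, density=0.01, format="csr", random_state=1)
C = A @ B  # sparse-aware multiplication skips the zero entries
```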
no code implementations • 27 Jun 2018 • Yuanyu Wan, Jin-Feng Yi, Lijun Zhang
Then, for each partially observed column, we recover it by finding a vector that lies in the recovered column space and agrees with the observed entries.
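This step amounts to a least-squares fit on the observed rows. A minimal sketch, assuming `basis` spans the recovered column space:

```python
import numpy as np

def recover_column(basis, observed_vals, observed_rows):
    """Fit coefficients so the column-space basis matches the column on its
    observed rows, then fill in the missing entries from the full basis."""
    coef, *_ = np.linalg.lstsq(basis[observed_rows], observed_vals, rcond=None)
    return basis @ coef
```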