Search Results for author: Yuanyu Wan

Found 16 papers, 1 paper with code

Projection-free Distributed Online Convex Optimization with $O(\sqrt{T})$ Communication Complexity

no code implementations ICML 2020 Yuanyu Wan, Wei-Wei Tu, Lijun Zhang

To deal with complicated constraints via locally light computation in distributed online learning, a recent study has presented a projection-free algorithm called distributed online conditional gradient (D-OCG), and achieved an $O(T^{3/4})$ regret bound, where $T$ is the number of prediction rounds.
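
For readers unfamiliar with the projection-free idea behind D-OCG, here is a minimal sketch of a conditional-gradient (Frank-Wolfe) update, in which a single linear optimization (LO) over the feasible set replaces the projection; the $\ell_1$-ball constraint and step-size schedule are illustrative assumptions, not the paper's exact surrogate update.

```python
import numpy as np

def linear_opt_l1(g, radius=1.0):
    """LO over the l1 ball: argmin_{||v||_1 <= radius} <g, v>.
    A single pass over g -- the 'locally light computation' that
    replaces a projection."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

def conditional_gradient_step(x, grad, t, radius=1.0):
    """One Frank-Wolfe style update: move toward the LO solution
    instead of projecting a gradient step back onto the set."""
    v = linear_opt_l1(grad, radius)
    eta = 2.0 / (t + 2)  # classical Frank-Wolfe schedule (illustrative)
    return (1 - eta) * x + eta * v
```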

Improved Regret for Bandit Convex Optimization with Delayed Feedback

no code implementations 14 Feb 2024 Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang

Previous studies have established a regret bound of $O(T^{3/4}+d^{1/3}T^{2/3})$ for this problem, where $d$ is the maximum delay, by simply feeding delayed loss values to the classical bandit gradient descent (BGD) algorithm.

Blocking
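
A minimal sketch of the baseline this entry improves on, i.e. "feeding delayed loss values to BGD": each queried loss produces a one-point gradient estimate when it finally arrives. The step size, smoothing radius, and delay bookkeeping are illustrative assumptions, and the projection step is omitted.

```python
import numpy as np

def delayed_bgd(loss_oracle, delays, dim, T, eta=0.01, delta=0.1):
    """Bandit gradient descent with delayed one-point feedback (sketch)."""
    rng = np.random.default_rng(0)
    x = np.zeros(dim)
    pending = {}                              # arrival round -> (loss, direction)
    for t in range(T):
        u = rng.standard_normal(dim)
        u /= np.linalg.norm(u)                # uniform direction on the sphere
        loss = loss_oracle(x + delta * u)     # the single query of round t
        pending.setdefault(t + delays[t], []).append((loss, u))
        for l, u_old in pending.pop(t, []):   # feedback arriving this round
            g_hat = (dim / delta) * l * u_old # one-point gradient estimate
            x -= eta * g_hat                  # projection onto the set omitted
    return x
```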

Nearly Optimal Regret for Decentralized Online Convex Optimization

no code implementations 14 Feb 2024 Yuanyu Wan, Tong Wei, Mingli Song, Lijun Zhang

Previous studies have established $O(n^{5/4}\rho^{-1/2}\sqrt{T})$ and $O(n^{3/2}\rho^{-1}\log T)$ regret bounds for convex and strongly convex functions respectively, where $n$ is the number of local learners, $\rho<1$ is the spectral gap of the communication matrix, and $T$ is the time horizon.
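
As context for the quantities in these bounds, here is a hedged sketch of one round of a standard decentralized OGD scheme (not necessarily the algorithm analyzed in the paper): each learner mixes its neighbors' iterates through the communication matrix and then takes a local gradient step, and the spectral gap of that matrix controls how quickly the mixing spreads information across the network.

```python
import numpy as np

def decentralized_ogd_round(X, grads, P, eta):
    """One round for n learners. X and grads are (n, dim) arrays with
    one row per learner; P is the (n, n) doubly stochastic
    communication matrix of the network."""
    X_mixed = P @ X               # gossip/consensus step with the neighbors
    return X_mixed - eta * grads  # local gradient step at each learner
```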

Adversarial Erasing with Pruned Elements: Towards Better Graph Lottery Ticket

1 code implementation 5 Aug 2023 Yuwen Wang, Shunyu Liu, KaiXuan Chen, Tongtian Zhu, Ji Qiao, Mengjie Shi, Yuanyu Wan, Mingli Song

Graph Lottery Ticket (GLT), a combination of a core subgraph and a sparse subnetwork, has been proposed to mitigate the computational cost of deep Graph Neural Networks (GNNs) on large input graphs while preserving the original performance.

Improved Projection-free Online Continuous Submodular Maximization

no code implementations 29 May 2023 Yucheng Liao, Yuanyu Wan, Chang Yao, Mingli Song

We investigate the problem of online learning with monotone and continuous DR-submodular reward functions, which has received great attention recently.

Blocking

Non-stationary Online Convex Optimization with Arbitrary Delays

no code implementations 20 May 2023 Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang

Despite its simplicity, our novel analysis shows that the dynamic regret of DOGD can be automatically bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ under mild assumptions, and $O(\sqrt{dT}(P_T+1))$ in the worst case, where $\bar{d}$ and $d$ denote the average and maximum delay respectively, $T$ is the time horizon, and $P_T$ is the path length of comparators.
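
A minimal sketch of DOGD as described above, assuming the delay of each query is known and omitting projections and the paper's tuned step sizes: the learner simply descends along every gradient in the round it arrives, however the delays reorder the feedback.

```python
import numpy as np

def dogd(grad_oracle, delays, dim, T, eta=0.01):
    """Delayed online gradient descent (sketch)."""
    x = np.zeros(dim)
    inbox = {}                                  # arrival round -> gradients
    for t in range(T):
        g = grad_oracle(x, t)                   # gradient queried at x_t
        inbox.setdefault(t + delays[t], []).append(g)
        for g_late in inbox.pop(t, []):         # feedback arriving this round
            x -= eta * g_late
    return x
```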

Non-stationary Projection-free Online Learning with Dynamic and Adaptive Regret Guarantees

no code implementations 19 May 2023 Yibo Wang, Wenhao Yang, Wei Jiang, Shiyin Lu, Bing Wang, Haihong Tang, Yuanyu Wan, Lijun Zhang

Specifically, we first provide a novel dynamic regret analysis for an existing projection-free method named $\text{BOGD}_\text{IP}$, and establish an $\mathcal{O}(T^{3/4}(1+P_T))$ dynamic regret bound, where $P_T$ denotes the path-length of the comparator sequence.

Improved Dynamic Regret for Online Frank-Wolfe

no code implementations 11 Feb 2023 Yuanyu Wan, Lijun Zhang, Mingli Song

In this way, we first show that the dynamic regret bound of OFW can be improved to $O(\sqrt{T(1+V_T)})$ for smooth functions.

Projection-free Online Learning with Arbitrary Delays

no code implementations 11 Apr 2022 Yuanyu Wan, Yibo Wang, Chang Yao, Wei-Wei Tu, Lijun Zhang

Projection-free online learning, which eschews the projection operation via less expensive computations such as linear optimization (LO), has received much interest recently due to its efficiency in handling high-dimensional problems with complex constraints.
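
The cost gap between LO and projection is easiest to see on the nuclear-norm ball, the canonical high-dimensional example in this line of work (the set choice below is illustrative, not specific to this paper): LO needs only the leading singular pair, while Euclidean projection needs a full SVD plus a projection of the spectrum.

```python
import numpy as np
from scipy.sparse.linalg import svds

def lo_nuclear_ball(G, radius=1.0):
    """argmin_{||V||_* <= radius} <G, V>: only the top singular pair."""
    u, s, vt = svds(G, k=1)
    return -radius * np.outer(u[:, 0], vt[0])

def project_nuclear_ball(X, radius=1.0):
    """Euclidean projection onto the same ball: full SVD, then project
    the singular values onto the l1 ball -- far more expensive."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if s.sum() <= radius:
        return X
    mu = np.sort(s)[::-1]
    cssv = np.cumsum(mu) - radius
    rho = np.nonzero(mu - cssv / (np.arange(len(mu)) + 1) > 0)[0][-1]
    theta = cssv[rho] / (rho + 1)
    return U @ np.diag(np.maximum(s - theta, 0)) @ Vt
```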

Coupling Online-Offline Learning for Multi-distributional Data Streams

no code implementations 12 Feb 2022 Zhilin Zhao, Longbing Cao, Yuanyu Wan

CO$_2$ extracts knowledge by training an offline expert for each offline interval, and updates an online expert with an off-the-shelf online optimization method in each online interval.

Transfer Learning
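
One way to read the coupling step above is as expert aggregation; the sketch below combines the two experts with exponential weights. The combination rule is an illustrative assumption, not CO$_2$'s exact update.

```python
import numpy as np

def combine_experts(online_pred, offline_pred, cum_losses, eta=0.5):
    """Exponential weighting over two experts (sketch). cum_losses is
    a pair [online_loss_so_far, offline_loss_so_far]."""
    w = np.exp(-eta * np.asarray(cum_losses, dtype=float))
    w /= w.sum()
    return w[0] * online_pred + w[1] * offline_pred
```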

Online Strongly Convex Optimization with Unknown Delays

no code implementations 21 Mar 2021 Yuanyu Wan, Wei-Wei Tu, Lijun Zhang

Specifically, we first extend the delayed variant of OGD for strongly convex functions, and establish a better regret bound of $O(d\log T)$, where $d$ is the maximum delay.

Online Convex Optimization with Continuous Switching Constraint

no code implementations NeurIPS 2021 Guanghui Wang, Yuanyu Wan, Tianbao Yang, Lijun Zhang

To control the switching cost, we introduce the problem of online convex optimization with continuous switching constraint, where the goal is to achieve a small regret given a budget on the \emph{overall} switching cost.

Decision Making
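
To make the constraint concrete, the sketch below tracks the cumulative movement $\sum_t \|x_{t+1}-x_t\|_2$ and truncates a proposed step once the budget runs out; this only illustrates the feasibility requirement and is not the algorithm proposed in the paper.

```python
import numpy as np

def budgeted_update(x_old, x_proposed, spent, budget):
    """Apply a proposed update while keeping the overall switching
    cost (sum of Euclidean moves) within the given budget (sketch)."""
    move = np.linalg.norm(x_proposed - x_old)
    remaining = budget - spent
    if move <= remaining:
        return x_proposed, spent + move
    if remaining <= 0:
        return x_old, spent          # budget exhausted: stay put
    # scale the step so it uses exactly the remaining budget
    x_new = x_old + (remaining / move) * (x_proposed - x_old)
    return x_new, budget
```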

Projection-free Distributed Online Learning with Sublinear Communication Complexity

no code implementations 20 Mar 2021 Yuanyu Wan, Guanghui Wang, Wei-Wei Tu, Lijun Zhang

In this paper, we propose an improved variant of D-OCG, namely D-BOCG, which can attain the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with even fewer, $O(T^{1/3}(\log T)^{2/3})$, communication rounds for strongly convex losses.
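
The communication savings come from a blocking scheme: each learner plays one point for a whole block of rounds, accumulates feedback locally, and communicates once per block, giving $T/K$ communication rounds in total. A hedged skeleton, where `communicate` and `local_step` are hypothetical placeholders for the gossip exchange and the conditional-gradient update:

```python
def blocked_updates(grad_oracle, x0, T, K, communicate, local_step):
    """Blocked online updates (sketch): K ~ sqrt(T) gives O(sqrt(T))
    communication rounds while the decision changes only per block."""
    x = x0
    for start in range(0, T, K):
        # play x for the whole block, accumulating local feedback
        g_sum = sum(grad_oracle(x, t) for t in range(start, min(start + K, T)))
        g_mixed = communicate(g_sum)   # the single exchange of this block
        x = local_step(x, g_mixed)     # e.g., conditional-gradient step(s)
    return x
```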

Projection-free Online Learning over Strongly Convex Sets

no code implementations 16 Oct 2020 Yuanyu Wan, Lijun Zhang

In this paper, we study the special case of online learning over strongly convex sets, for which we first prove that OFW enjoys a better regret bound of $O(T^{2/3})$ for general convex losses.

Approximate Multiplication of Sparse Matrices with Limited Space

no code implementations 8 Sep 2020 Yuanyu Wan, Lijun Zhang

In this paper, we propose to reduce the time complexity by exploiting the sparsity of the input matrices.
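
A standard way to exploit sparsity in this setting is norm-weighted column-row sampling; the sketch below shows that classic randomized scheme with scipy.sparse inputs, as an assumption about the general approach rather than the paper's specific algorithm. The sampling probabilities come from column/row norms, which touch only the nonzeros of the inputs.

```python
import numpy as np
from scipy.sparse.linalg import norm as sparse_norm

def approx_sparse_matmul(A, B, k, seed=0):
    """Approximate A @ B by k rescaled outer products, sampled with
    probabilities proportional to ||A_:i|| * ||B_i:|| (sketch).
    A and B are scipy.sparse matrices (e.g., CSC/CSR)."""
    rng = np.random.default_rng(seed)
    p = np.ravel(sparse_norm(A, axis=0)) * np.ravel(sparse_norm(B, axis=1))
    p /= p.sum()
    idx = rng.choice(A.shape[1], size=k, p=p)
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in idx:
        # sparse outer product of one column of A and one row of B
        C += (A[:, i] @ B[i, :]).toarray() / (k * p[i])
    return C
```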

Matrix Completion from Non-Uniformly Sampled Entries

no code implementations 27 Jun 2018 Yuanyu Wan, Jin-Feng Yi, Lijun Zhang

Then, we recover each partially observed column by finding a vector that lies in the recovered column space and is consistent with the observed entries.

Matrix Completion
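
The recovery step described above translates almost directly into code: restrict a basis of the recovered column space to the observed rows, solve for the coefficients that match the observations, and expand back to a full column. The least-squares solve below is a stand-in for the paper's exact solver.

```python
import numpy as np

def recover_column(U, x_obs, obs_rows):
    """U: (m, r) basis of the recovered column space; x_obs: observed
    entries of one column; obs_rows: their row indices (sketch)."""
    c, *_ = np.linalg.lstsq(U[obs_rows], x_obs, rcond=None)
    return U @ c   # the full column implied by the observations
```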
