1 code implementation • 9 Apr 2024 • Yupei Zhang, Li Pan, Qiushi Yang, Tan Li, Zhen Chen
Specifically, to enhance the representation abilities of the vision and language encoders, we propose the Multi-level Reconstruction Pre-training (MR-Pretrain) strategy, which includes feature-level and data-level reconstruction and guides the models to capture semantic information from masked inputs of different modalities.
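As a rough illustration of the data-level reconstruction objective, the sketch below masks a fraction of input tokens, encodes the corrupted sequence, and regresses the masked positions back to the originals. The `encoder`, `decoder`, and `mask_ratio` here are hypothetical placeholders, not the paper's actual modules or settings.

```python
import torch
import torch.nn as nn

def masked_reconstruction_loss(tokens, encoder, decoder, mask_ratio=0.5):
    """Data-level reconstruction sketch: tokens is a (batch, seq_len, dim)
    tensor of input embeddings; encoder/decoder are assumed nn.Modules."""
    mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio  # (batch, seq_len)
    corrupted = tokens.clone()
    corrupted[mask] = 0.0                      # zero out the masked positions
    features = encoder(corrupted)              # contextual features from the corrupted input
    reconstructed = decoder(features)          # predict the original embeddings
    return nn.functional.mse_loss(reconstructed[mask], tokens[mask])
```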
no code implementations • 2 Feb 2024 • Guangfeng Yan, Tan Li, Yuanzhang Xiao, Congduan Li, Linqi Song
To address the communication bottleneck in distributed learning, our work introduces a novel two-stage quantization strategy designed to enhance the communication efficiency of distributed Stochastic Gradient Descent (SGD).
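The two stages are not spelled out in this excerpt; a plausible sketch, assuming the stages are truncation followed by uniform quantization, is shown below. The threshold, level count, and encoding are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def two_stage_quantize(grad, threshold, levels=16):
    """Illustrative two-stage compression of a gradient vector:
    (1) truncate entries to [-threshold, threshold];
    (2) uniformly quantize the truncated values to a few levels."""
    truncated = np.clip(grad, -threshold, threshold)     # stage 1: truncation
    step = 2 * threshold / (levels - 1)                  # quantization step size
    indices = np.round((truncated + threshold) / step)   # stage 2: map to level indices
    return indices.astype(np.uint8), step                # small integers are cheap to send

def dequantize(indices, step, threshold):
    """Recover approximate gradient values from the transmitted indices."""
    return indices.astype(np.float32) * step - threshold
```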
no code implementations • 2 Feb 2024 • Guangfeng Yan, Tan Li, Yuanzhang Xiao, Hanxu Hou, Linqi Song
We consider a general family of heavy-tailed gradients that follow a power-law distribution; we aim to minimize the error resulting from quantization and thereby determine optimal values for two critical parameters: the truncation threshold and the quantization density.
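As a hedged illustration of the trade-off, the toy experiment below draws gradients from an assumed Pareto (power-law) model and empirically sweeps the truncation threshold to minimize quantization error; the paper derives the optimal threshold and quantization density analytically rather than by search.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed toy model: gradient entries with power-law (Pareto) magnitudes and random signs.
grads = rng.pareto(a=2.5, size=100_000) * rng.choice([-1.0, 1.0], size=100_000)

def quantization_mse(grads, threshold, levels=16):
    """Mean squared error of truncate-then-uniformly-quantize at a given threshold."""
    clipped = np.clip(grads, -threshold, threshold)
    step = 2 * threshold / (levels - 1)
    recon = np.round((clipped + threshold) / step) * step - threshold
    return np.mean((grads - recon) ** 2)

# Small thresholds lose the heavy tail; large thresholds waste quantization resolution.
thresholds = np.linspace(0.5, 20.0, 40)
best = min(thresholds, key=lambda t: quantization_mse(grads, t))
print(f"empirically best truncation threshold: {best:.2f}")
```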
no code implementations • 26 Apr 2023 • Guangfeng Yan, Tan Li, Kui Wu, Linqi Song
Communication efficiency and privacy protection are two critical issues in distributed machine learning.
no code implementations • 2 Nov 2021 • Tan Li, Linqi Song
Our bandit learning algorithms are based on epoch-wise elimination of sub-optimal arms at each agent, and the agents exchange learned knowledge with the server or with each other at the end of each epoch.
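A minimal sketch of the epoch-wise elimination step, assuming standard confidence-bound arm elimination; the paper's exact confidence radius and communication protocol may differ.

```python
import numpy as np

def eliminate_suboptimal_arms(means, counts, epoch, delta=0.05):
    """Keep only arms whose upper confidence bound still exceeds the best
    arm's lower confidence bound, given empirical means and pull counts."""
    radius = np.sqrt(np.log(2 * len(means) * (epoch + 1) / delta) / (2 * np.maximum(counts, 1)))
    ucb = means + radius
    lcb = means - radius
    keep = ucb >= lcb.max()          # an arm survives if it could still be optimal
    return np.flatnonzero(keep)      # indices of the arms carried into the next epoch
```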
no code implementations • 3 Dec 2020 • Jing Dong, Tan Li, Shaolei Ren, Linqi Song
To further improve the performance of distributed Thompson Sampling, we propose a distributed elimination-based Thompson Sampling algorithm that allows the agents to learn collaboratively.
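A hedged sketch of the per-round sampling step over the surviving arms, assuming Bernoulli rewards with Beta posteriors; the paper's elimination rule and collaboration mechanism are not reproduced here.

```python
import numpy as np

def thompson_sample_arm(active_arms, successes, failures, rng):
    """Sample a Beta posterior for each surviving arm and pull the argmax.
    successes/failures are per-arm Bernoulli reward counts; at the end of an
    epoch, agents could pool these counts before eliminating arms."""
    samples = rng.beta(successes[active_arms] + 1, failures[active_arms] + 1)
    return active_arms[np.argmax(samples)]
```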
no code implementations • 14 May 2020 • Tan Li, Linqi Song, Christina Fragouli
In this paper, we are interested in what we term the federated private bandits framework, which combines differential privacy with multi-agent bandit learning.
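One common way to combine the two ingredients, shown here purely as an assumed illustration, is to add calibrated Laplace noise to the per-arm statistics an agent shares, so that each shared update is differentially private.

```python
import numpy as np

def privatize_reward_sums(reward_sums, epsilon, sensitivity=1.0, rng=None):
    """Before sharing per-arm reward sums with the server, add Laplace noise
    with scale sensitivity/epsilon (rewards assumed bounded in [0, 1], so a
    single observation changes each sum by at most 1)."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(scale=sensitivity / epsilon, size=reward_sums.shape)
    return reward_sums + noise
```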
no code implementations • 5 Aug 2019 • Jie Lin, Dan-Bo Zhang, Shuo Zhang, Xiang Wang, Tan Li, Wan-su Bao
We also incorporate kernel methods into the above quantum algorithms, which use both the exponentially large Hilbert space of qubits and the infinite dimensionality of continuous variables for quantum feature maps.
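For orientation only, the classical analogue of such a feature map is a kernel Gram matrix; the sketch below computes a standard RBF kernel, whereas the paper replaces this computation with overlaps of quantum states in the qubit or continuous-variable Hilbert space.

```python
import numpy as np

def rbf_kernel_matrix(X, Z, gamma=1.0):
    """Classical RBF kernel Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    for data matrices X (n, d) and Z (m, d)."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)
```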