no code implementations • 17 Oct 2024 • Yue Huang, Zhaoxian Wu, Shiqian Ma, Qing Ling
Stochastic approximation (SA) that involves multiple coupled sequences, known as multiple-sequence SA (MSSA), finds diverse applications in the fields of signal processing and machine learning.
1 code implementation • 11 Jul 2024 • Haoxiang Ye, Qing Ling
Recently, decentralized learning has emerged as a popular peer-to-peer signal and information processing paradigm that enables model training across geographically distributed agents in a scalable manner, without the presence of any central server.
1 code implementation • 21 Apr 2024 • Jie Peng, Weiyu Li, Qing Ling
Robustness to malicious attacks is of paramount importance for distributed learning.
1 code implementation • 28 Aug 2023 • Haoxiang Ye, Heng Zhu, Qing Ling
For a class of state-of-the-art robust aggregation rules, we give a unified analysis of their "mixing abilities".
no code implementations • 10 Aug 2023 • Jie Peng, Weiyu Li, Qing Ling
Motivated by this observation, we introduce two variance reduction methods, stochastic average gradient algorithm (SAGA) and loopless stochastic variance-reduced gradient (LSVRG), to Byzantine-robust decentralized stochastic optimization for eliminating the negative effect of the stochastic gradient noise.
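As a minimal sketch of the kind of SAGA correction referred to here (a generic single-node illustration with assumed names such as grad_fn and grad_table, not the paper's Byzantine-robust decentralized implementation):

```python
import numpy as np

def saga_gradient(grad_fn, x, sample_idx, grad_table):
    """SAGA-style variance-reduced gradient estimate (illustrative sketch).

    grad_fn(x, i): stochastic gradient of the i-th local sample at x.
    grad_table:    (num_samples, dim) array holding the last gradient
                   evaluated for each sample.
    """
    g_new = grad_fn(x, sample_idx)          # fresh stochastic gradient
    g_old = grad_table[sample_idx].copy()   # previously stored gradient for this sample
    g_avg = grad_table.mean(axis=0)         # running average over the table
    v = g_new - g_old + g_avg               # unbiased, variance-reduced estimate
    grad_table[sample_idx] = g_new          # refresh the table entry
    return v
```

As the iterates converge, g_new and g_old become close, so the variance of v shrinks, which is the effect exploited to suppress stochastic gradient noise.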
1 code implementation • 16 Jul 2023 • Xingrong Dong, Zhaoxian Wu, Qing Ling, Zhi Tian
But we prove that, even with a class of state-of-the-art robust aggregation rules, in an adversarial environment and in the presence of Byzantine participants, distributed online gradient descent can only achieve a linear adversarial regret bound, which is tight.
1 code implementation • 20 Nov 2022 • Jintang Li, Jiaying Peng, Liang Chen, Zibin Zheng, TingTing Liang, Qing Ling
In this work, we seek to address these challenges and propose Spectral Adversarial Training (SAT), a simple yet effective adversarial training approach for GNNs.
no code implementations • 14 Jun 2022 • Quan Xiao, Qing Ling, Tianyi Chen
A major challenge of applying zeroth-order (ZO) methods is the high query complexity, especially when queries are costly.
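For background on why query complexity dominates, here is a sketch of a standard two-point zeroth-order gradient estimator (not the scheme proposed in the paper; f, mu, and num_dirs are illustrative):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=10):
    """Two-point zeroth-order gradient estimator via random directions.

    Every direction costs two queries of f, so the per-iteration query
    count grows quickly when evaluations of f are expensive.
    """
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = np.random.randn(d)                              # random search direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs
```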
no code implementations • 30 May 2022 • Bin Wang, Jun Fang, Hongbin Li, Xiaojun Yuan, Qing Ling
Most studies on FL consider a centralized framework, in which a single server is endowed with a central authority to coordinate a number of devices to perform model training in an iterative manner.
1 code implementation • 29 Apr 2022 • Heng Zhu, Qing Ling
We analyze the trade-off between privacy preservation and learning performance, and show that the influence of our proposed DP mechanisms is decoupled from that of robust stochastic model aggregation.
no code implementations • 13 Jun 2021 • Feng Lin, Weiyu Li, Qing Ling
This paper aims to solve a distributed learning problem under Byzantine attacks.
1 code implementation • 14 Apr 2021 • Heng Zhu, Qing Ling
Communication between workers and the master node to collect local stochastic gradients is a key bottleneck in a large-scale federated learning system.
2 code implementations • 17 Sep 2020 • Jie Peng, Zhaoxian Wu, Qing Ling, Tianyi Chen
We prove that the proposed method reaches a neighborhood of the optimal solution at a linear convergence rate and the learning error is determined by the number of Byzantine workers.
1 code implementation • 12 May 2020 • Jie Peng, Weiyu Li, Qing Ling
In this paper, we consider the Byzantine-robust stochastic optimization problem defined over decentralized static and time-varying networks, where the agents collaboratively minimize the summation of expectations of stochastic local cost functions, but some of the agents are unreliable due to data corruptions, equipment failures or cyber-attacks.
no code implementations • ACL 2020 • Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling, Yan Song
In this paper, we formulate the data augmentation as a conditional generation task: generating a new sentence while preserving the original opinion targets and labels.
no code implementations • 22 Mar 2020 • Deyin Liu, Xu Chen, Zhi Zhou, Qing Ling
We develop a novel "hybrid parallelism" method, which is the key to HierTrain, to adaptively assign the DNN model layers and the data samples across the three levels of edge device, edge server and cloud center.
no code implementations • 29 Dec 2019 • Zhaoxian Wu, Qing Ling, Tianyi Chen, Georgios B. Giannakis
This motivates us to reduce the variance of stochastic gradients as a means of robustifying SGD in the presence of Byzantine attacks.
no code implementations • 19 Oct 2019 • Wenjun Yan, Qing Ling, Limin Zhang
We apply the latest advances in machine learning with deep neural networks to the tasks of radio modulation recognition, channel coding recognition, and spectrum monitoring.
no code implementations • 15 Sep 2019 • Weiyu Li, Yaohua Liu, Zhi Tian, Qing Ling
COLA is proven to be convergent when the local cost functions have Lipschitz continuous gradients and the censoring threshold is summable.
1 code implementation • 9 Sep 2019 • Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling
Specifically, in CSGD, the latest mini-batch stochastic gradient at a worker will be transmitted to the server if and only if it is sufficiently informative.
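A minimal sketch of a norm-based censoring test of the kind described here; the specific rule, threshold, and variable names are assumptions for illustration, not the exact criterion used in CSGD:

```python
import numpy as np

def maybe_transmit(grad_new, grad_last_sent, threshold):
    """Hypothetical censoring test: send the fresh mini-batch gradient only
    if it deviates enough from the last transmitted one; otherwise the
    server keeps reusing the stale copy."""
    if np.linalg.norm(grad_new - grad_last_sent) >= threshold:
        return grad_new, grad_new       # transmit, and update the cached copy
    return None, grad_last_sent         # censored: nothing is sent this round
```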
no code implementations • NeurIPS 2018 • Xiaohan Wei, Hao Yu, Qing Ling, Michael Neely
In this paper, we show that by leveraging a local error bound condition on the dual function, the proposed algorithm can achieve a better primal convergence time of $\mathcal{O}\left(\varepsilon^{-2/(2+\beta)}\log_2(\varepsilon^{-1})\right)$, where $\beta\in(0, 1]$ is a local error bound parameter.
no code implementations • 15 Nov 2018 • Shuheng Shen, Linli Xu, Jingchang Liu, Junliang Guo, Qing Ling
Composition optimization has drawn a lot of attention in a wide variety of machine learning domains from risk management to reinforcement learning.
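For context, the stochastic composition problem is commonly written as (notation assumed here, not taken from the paper) $$\min_{x}\; \mathbb{E}_v\Big[f_v\big(\mathbb{E}_w[\,g_w(x)\,]\big)\Big],$$ where the inner expectation is the reason a single sample cannot give an unbiased stochastic gradient of the overall objective, which is what distinguishes this class from ordinary stochastic optimization.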
1 code implementation • 9 Nov 2018 • Liping Li, Wei Xu, Tianyi Chen, Georgios B. Giannakis, Qing Ling
In this paper, we propose a class of robust stochastic subgradient methods for distributed learning from heterogeneous datasets in the presence of an unknown number of Byzantine workers.
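As a generic illustration of robust aggregation (not necessarily the rule proposed in this paper), a server can replace the mean of worker gradients with a coordinate-wise median, which a bounded fraction of Byzantine workers cannot drive arbitrarily far:

```python
import numpy as np

def coordinatewise_median(worker_grads):
    """Aggregate worker gradients by the median of each coordinate.

    worker_grads: (num_workers, dim) array, possibly containing corrupted rows.
    """
    return np.median(worker_grads, axis=0)
```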
2 code implementations • 29 Aug 2018 • Xiaofeng Zhang, Zhangyang Wang, Dong Liu, Qing Ling
Given insufficient data, while many techniques have been developed to help combat overfitting, the challenge remains if one tries to train deep networks, especially in the ill-posed, extremely low-data regime: only a small set of labeled data is available, and nothing else, not even unlabeled data.
no code implementations • 14 Jan 2017 • Tianyi Chen, Qing Ling, Georgios B. Giannakis
Performance of an online algorithm in this setting is assessed by: i) the difference of its losses relative to the best dynamic solution with one-slot-ahead information of the loss function and the constraint (that is here termed dynamic regret); and, ii) the accumulated amount of constraint violations (that is here termed dynamic fit).
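In symbols (a sketch under common conventions, with losses $f_t$, constraints $g_t$, decisions $x_t$, and per-slot comparators $x_t^*$; the notation is assumed here), these two metrics read $$\text{dynamic regret} = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x_t^*), \qquad \text{dynamic fit} = \Big\| \Big[ \sum_{t=1}^{T} g_t(x_t) \Big]_{+} \Big\|,$$ where $x_t^* \in \arg\min_{x:\, g_t(x)\le 0} f_t(x)$ uses one-slot-ahead information and $[\cdot]_+$ denotes the entry-wise positive part.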
no code implementations • 14 Aug 2016 • Zhangyang Wang, Shiyu Chang, Qing Ling, Shuai Huang, Xia Hu, Honghui Shi, Thomas S. Huang
With the agreement of my coauthors, I, Zhangyang Wang, would like to withdraw the manuscript "Stacked Approximated Regression Machine: A Simple Deep Learning Approach".
no code implementations • CVPR 2016 • Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang
In this paper, we design a Deep Dual-Domain (D3) based fast restoration model to remove artifacts of JPEG compressed images.
no code implementations • 21 May 2016 • Yitan Li, Linli Xu, Xiaowei Zhong, Qing Ling
Asynchronous parallel optimization algorithms for solving large-scale machine learning problems have drawn significant attention from academia to industry recently.
no code implementations • 6 Apr 2016 • Zhangyang Wang, Yingzhen Yang, Shiyu Chang, Qing Ling, Thomas S. Huang
We investigate the $\ell_\infty$-constrained representation which demonstrates robustness to quantization errors, utilizing the tool of deep learning.
no code implementations • 16 Jan 2016 • Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang
In this paper, we design a Deep Dual-Domain ($\mathbf{D^3}$) based fast restoration model to remove artifacts of JPEG compressed images.
no code implementations • 1 Sep 2015 • Zhangyang Wang, Qing Ling, Thomas S. Huang
We study the $\ell_0$ sparse approximation problem with the tool of deep learning, by proposing Deep $\ell_0$ Encoders.
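For reference, the $\ell_0$ sparse approximation problem can be written in its regularized form as (dictionary $D$ and sparse code $a$ are notation assumed here) $$\min_{a}\; \tfrac{1}{2}\|x - D a\|_2^2 + \lambda \|a\|_0,$$ where $\|a\|_0$ counts the nonzero entries of $a$; the proposed encoders are trained to approximate solutions of problems of this kind.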
no code implementations • 30 Mar 2015 • Georgios B. Giannakis, Qing Ling, Gonzalo Mateos, Ioannis D. Schizas, Hao Zhu
This chapter deals with decentralized learning algorithms for in-network processing of graph-valued data.
no code implementations • 24 Apr 2014 • Wei Shi, Qing Ling, Gang Wu, Wotao Yin
In this paper, we develop a decentralized algorithm for the consensus optimization problem $$\min\limits_{x\in\mathbb{R}^p}~\bar{f}(x)=\frac{1}{n}\sum\limits_{i=1}^n f_i(x),$$ which is defined over a connected network of $n$ agents, where each function $f_i$ is held privately by agent $i$ and encodes the agent's data and objective.
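As a point of reference for this problem class, one common decentralized baseline (decentralized gradient descent, shown here only as an illustration and not as the algorithm developed in this paper) alternates neighbor averaging with local gradient steps:

```python
import numpy as np

def dgd_step(X, W, grad_fns, alpha):
    """One round of decentralized gradient descent over a network of n agents.

    X        : (n, p) array; row i is agent i's current iterate x_i.
    W        : (n, n) doubly stochastic mixing matrix of the network.
    grad_fns : list of n callables; grad_fns[i](x) is the gradient of f_i at x.
    alpha    : step size.
    """
    grads = np.stack([g(x) for g, x in zip(grad_fns, X)])
    return W @ X - alpha * grads    # mix with neighbors, then take a local gradient step
```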