Search Results for author: Qing Ling

Found 33 papers, 12 papers with code

Single-Timescale Multi-Sequence Stochastic Approximation Without Fixed Point Smoothness: Theories and Applications

no code implementations17 Oct 2024 Yue Huang, Zhaoxian Wu, Shiqian Ma, Qing Ling

Stochastic approximation (SA) that involves multiple coupled sequences, known as multiple-sequence SA (MSSA), finds diverse applications in the fields of signal processing and machine learning.

Bilevel Optimization

Generalization Error Matters in Decentralized Learning Under Byzantine Attacks

1 code implementation11 Jul 2024 Haoxiang Ye, Qing Ling

Recently, decentralized learning has emerged as a popular peer-to-peer signal and information processing paradigm that enables model training across geographically distributed agents in a scalable manner, without the presence of any central server.

Mean Aggregator Is More Robust Than Robust Aggregators Under Label Poisoning Attacks

1 code implementation21 Apr 2024 Jie Peng, Weiyu Li, Qing Ling

Robustness to malicious attacks is of paramount importance for distributed learning.

On the Tradeoff between Privacy Preservation and Byzantine-Robustness in Decentralized Learning

1 code implementation28 Aug 2023 Haoxiang Ye, Heng Zhu, Qing Ling

For a class of state-of-the-art robust aggregation rules, we give a unified analysis of their "mixing abilities".

Privacy Preserving

Byzantine-Robust Decentralized Stochastic Optimization with Stochastic Gradient Noise-Independent Learning Error

no code implementations10 Aug 2023 Jie Peng, Weiyu Li, Qing Ling

Motivated by this observation, we introduce two variance reduction methods, stochastic average gradient algorithm (SAGA) and loopless stochastic variance-reduced gradient (LSVRG), to Byzantine-robust decentralized stochastic optimization for eliminating the negative effect of the stochastic gradient noise.

Stochastic Optimization
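
The SAGA estimator named above replaces the raw stochastic gradient with a corrected one built from a table of previously computed gradients. Below is a minimal single-machine sketch of that estimator; the gradient oracle `grad_fn`, the table initialization, and the plain descent step are illustrative assumptions, not the paper's Byzantine-robust decentralized algorithm.

```python
import numpy as np

def saga_step(x, grads_table, grad_fn, lr, rng):
    """One SAGA update: a variance-reduced gradient built from a stored gradient table."""
    n = grads_table.shape[0]
    i = rng.integers(n)                    # pick a random sample index
    g_new = grad_fn(x, i)                  # fresh gradient of sample i at the current point
    g_old = grads_table[i]                 # gradient of sample i stored at an earlier point
    v = g_new - g_old + grads_table.mean(axis=0)   # SAGA variance-reduced estimator
    grads_table[i] = g_new                 # refresh the table entry for sample i
    return x - lr * v, grads_table
```

In the paper's setting, each regular agent would maintain such a table locally and feed the resulting low-variance gradient into a robust decentralized aggregation step.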

Byzantine-Robust Distributed Online Learning: Taming Adversarial Participants in An Adversarial Environment

1 code implementation16 Jul 2023 Xingrong Dong, Zhaoxian Wu, Qing Ling, Zhi Tian

However, we prove that, even with a class of state-of-the-art robust aggregation rules, distributed online gradient descent in an adversarial environment with Byzantine participants can only achieve a linear adversarial regret bound, which is tight.

Decision Making
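
To make the setting concrete, here is a toy sketch of one round of distributed online gradient descent with a robust aggregation rule, using the coordinate-wise median as a stand-in aggregator; the aggregator choice and the interface are illustrative assumptions, not the exact protocol analyzed in the paper.

```python
import numpy as np

def online_round(x, worker_grads, lr):
    """One round: the server robustly aggregates worker gradients, then takes a descent step.

    worker_grads: list of gradient vectors; some may come from Byzantine
    participants and can be arbitrary.
    """
    G = np.stack(worker_grads)            # shape (num_workers, dim)
    robust_grad = np.median(G, axis=0)    # coordinate-wise median aggregation
    return x - lr * robust_grad           # online gradient descent step
```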

Spectral Adversarial Training for Robust Graph Neural Network

1 code implementation20 Nov 2022 Jintang Li, Jiaying Peng, Liang Chen, Zibin Zheng, TingTing Liang, Qing Ling

In this work, we seek to address these challenges and propose Spectral Adversarial Training (SAT), a simple yet effective adversarial training approach for GNNs.

Graph Neural Network

Lazy Queries Can Reduce Variance in Zeroth-order Optimization

no code implementations14 Jun 2022 Quan Xiao, Qing Ling, Tianyi Chen

A major challenge of applying zeroth-order (ZO) methods is the high query complexity, especially when queries are costly.

Confederated Learning: Federated Learning with Decentralized Edge Servers

no code implementations30 May 2022 Bin Wang, Jun Fang, Hongbin Li, Xiaojun Yuan, Qing Ling

Most studies on FL consider a centralized framework, in which a single server is endowed with a central authority to coordinate a number of devices to perform model training in an iterative manner.

Federated Learning, Scheduling

Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation

1 code implementation29 Apr 2022 Heng Zhu, Qing Ling

We analyze the trade-off between privacy preservation and learning performance, and show that the influence of our proposed DP mechanisms is decoupled from that of robust stochastic model aggregation.

Federated Learning
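
The trade-off above arises because each worker perturbs what it shares before a robust rule aggregates the messages. The sketch below assumes Gaussian perturbation of local models followed by a coordinate-wise trimmed mean; both the noise calibration and the aggregator are placeholders, not the DP mechanisms proposed in the paper.

```python
import numpy as np

def private_robust_aggregate(local_models, noise_std, trim_k, rng):
    """Add Gaussian noise to each local model, then robustly aggregate the noisy models."""
    assert 2 * trim_k < len(local_models), "cannot trim away all workers"
    noisy = [m + rng.normal(0.0, noise_std, size=m.shape) for m in local_models]
    M = np.sort(np.stack(noisy), axis=0)          # sort per coordinate across workers
    trimmed = M[trim_k: M.shape[0] - trim_k]      # drop the trim_k smallest/largest values
    return trimmed.mean(axis=0)                   # coordinate-wise trimmed mean
```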

Stochastic Alternating Direction Method of Multipliers for Byzantine-Robust Distributed Learning

no code implementations13 Jun 2021 Feng Lin, Weiyu Li, Qing Ling

This paper aims to solve a distributed learning problem under Byzantine attacks.

BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning

1 code implementation14 Apr 2021 Heng Zhu, Qing Ling

Communication between workers and the master node to collect local stochastic gradients is a key bottleneck in a large-scale federated learning system.

Federated Learning

Byzantine-Robust Variance-Reduced Federated Learning over Distributed Non-i.i.d. Data

2 code implementations17 Sep 2020 Jie Peng, Zhaoxian Wu, Qing Ling, Tianyi Chen

We prove that the proposed method reaches a neighborhood of the optimal solution at a linear convergence rate and the learning error is determined by the number of Byzantine workers.

Federated Learning

Byzantine-Robust Decentralized Stochastic Optimization over Static and Time-Varying Networks

1 code implementation12 May 2020 Jie Peng, Weiyu Li, Qing Ling

In this paper, we consider the Byzantine-robust stochastic optimization problem defined over decentralized static and time-varying networks, where the agents collaboratively minimize the summation of expectations of stochastic local cost functions, but some of the agents are unreliable due to data corruptions, equipment failures or cyber-attacks.

Stochastic Optimization
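
Written out, the problem described above takes the form (the notation, with $\mathcal{R}$ the set of regular agents and $\xi_i$ the local data, is the one commonly used in this line of work rather than quoted from the paper)

$$\min_{x\in\mathbb{R}^p}~\frac{1}{|\mathcal{R}|}\sum_{i\in\mathcal{R}}\mathbb{E}_{\xi_i}\left[f_i(x;\xi_i)\right],$$

where agents outside $\mathcal{R}$ are Byzantine and may send arbitrary messages to their neighbors.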

Conditional Augmentation for Aspect Term Extraction via Masked Sequence-to-Sequence Generation

no code implementations ACL 2020 Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling, Yan Song

In this paper, we formulate the data augmentation as a conditional generation task: generating a new sentence while preserving the original opinion targets and labels.

Data Augmentation, Extract Aspect, +3

HierTrain: Fast Hierarchical Edge AI Learning with Hybrid Parallelism in Mobile-Edge-Cloud Computing

no code implementations22 Mar 2020 Deyin Liu, Xu Chen, Zhi Zhou, Qing Ling

We develop a novel hybrid parallelism method, which is the key to HierTrain, to adaptively assign the DNN model layers and the data samples across the three levels of edge device, edge server and cloud center.

Cloud Computing, Scheduling

Federated Variance-Reduced Stochastic Gradient Descent with Robustness to Byzantine Attacks

no code implementations29 Dec 2019 Zhaoxian Wu, Qing Ling, Tianyi Chen, Georgios B. Giannakis

This motivates us to reduce the variance of stochastic gradients as a means of robustifying SGD in the presence of Byzantine attacks.
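
A common way to robustify the aggregation step that such variance-reduced methods build on is to replace the server's plain average of worker gradients with the geometric median. Below is a minimal sketch of the geometric median computed via Weiszfeld iterations; the tolerance and iteration cap are illustrative defaults, and the pairing with variance-reduced gradients is an assumption for exposition.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Approximate the geometric median of a list of vectors via Weiszfeld iterations."""
    P = np.stack(points)                 # shape (num_workers, dim)
    z = P.mean(axis=0)                   # start from the ordinary mean
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(P - z, axis=1), eps)  # distances, guarded from zero
        w = 1.0 / d
        z_new = (w[:, None] * P).sum(axis=0) / w.sum()      # reweighted average
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z
```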

Convolutional Neural Networks for Space-Time Block Coding Recognition

no code implementations19 Oct 2019 Wenjun Yan, Qing Ling, Limin Zhang

We apply the latest advances in machine learning with deep neural networks to the tasks of radio modulation recognition, channel coding recognition, and spectrum monitoring.

BIG-bench Machine Learning

Communication-Censored Linearized ADMM for Decentralized Consensus Optimization

no code implementations15 Sep 2019 Weiyu Li, Yaohua Liu, Zhi Tian, Qing Ling

COLA is proven to be convergent when the local cost functions have Lipschitz continuous gradients and the censoring threshold is summable.

CoLA

Communication-Censored Distributed Stochastic Gradient Descent

1 code implementation9 Sep 2019 Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling

Specifically, in CSGD, the latest mini-batch stochastic gradient at a worker will be transmitted to the server if and only if it is sufficiently informative.

Quantization, Stochastic Optimization
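
The "sufficiently informative" test can be read as a censoring rule: a worker skips the upload whenever its fresh mini-batch gradient is close to the last one it actually transmitted. The sketch below shows a generic rule of that flavor; the threshold and the exact test are illustrative assumptions, not the condition derived in the paper.

```python
import numpy as np

def maybe_transmit(g_new, g_last_sent, threshold):
    """Return (gradient used by the server, updated last-sent gradient, transmitted?)."""
    if np.linalg.norm(g_new - g_last_sent) ** 2 >= threshold:
        return g_new, g_new, True            # informative enough: upload the fresh gradient
    return g_last_sent, g_last_sent, False   # censored: the server reuses the stale gradient
```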

Solving Non-smooth Constrained Programs with Lower Complexity than $\mathcal{O}(1/\varepsilon)$: A Primal-Dual Homotopy Smoothing Approach

no code implementations NeurIPS 2018 Xiaohan Wei, Hao Yu, Qing Ling, Michael Neely

In this paper, we show that by leveraging a local error bound condition on the dual function, the proposed algorithm can achieve a better primal convergence time of $\mathcal{O}\left(\varepsilon^{-2/(2+\beta)}\log_2(\varepsilon^{-1})\right)$, where $\beta\in(0, 1]$ is a local error bound parameter.

Distributed Optimization

Asynchronous Stochastic Composition Optimization with Variance Reduction

no code implementations15 Nov 2018 Shuheng Shen, Linli Xu, Jingchang Liu, Junliang Guo, Qing Ling

Composition optimization has drawn a lot of attention in a wide variety of machine learning domains from risk management to reinforcement learning.

Management, Reinforcement Learning

RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets

1 code implementation9 Nov 2018 Liping Li, Wei Xu, Tianyi Chen, Georgios B. Giannakis, Qing Ling

In this paper, we propose a class of robust stochastic subgradient methods for distributed learning from heterogeneous datasets in the presence of an unknown number of Byzantine workers.

DADA: Deep Adversarial Data Augmentation for Extremely Low Data Regime Classification

2 code implementations29 Aug 2018 Xiaofeng Zhang, Zhangyang Wang, Dong Liu, Qing Ling

Given insufficient data, many techniques have been developed to help combat overfitting, yet the challenge remains when one tries to train deep networks, especially in the ill-posed extremely low data regime: only a small set of labeled data is available, and nothing else, not even unlabeled data.

Data Augmentation, General Classification, +2

An Online Convex Optimization Approach to Dynamic Network Resource Allocation

no code implementations14 Jan 2017 Tianyi Chen, Qing Ling, Georgios B. Giannakis

Performance of an online algorithm in this setting is assessed by: i) the difference of its losses relative to the best dynamic solution with one-slot-ahead information of the loss function and the constraint (that is here termed dynamic regret); and, ii) the accumulated amount of constraint violations (that is here termed dynamic fit).
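
In symbols, with $f_t$ the slot-$t$ loss, $g_t$ the slot-$t$ constraint function, and $x_t^*$ the per-slot minimizer computed with one-slot-ahead information (the notation is an assumption for exposition, not quoted from the paper), the two metrics read

$$\mathrm{Reg}_T^{\mathrm{d}}=\sum_{t=1}^{T}f_t(x_t)-\sum_{t=1}^{T}f_t(x_t^*), \qquad \mathrm{Fit}_T^{\mathrm{d}}=\left\|\left[\sum_{t=1}^{T}g_t(x_t)\right]_{+}\right\|.$$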

Stacked Approximated Regression Machine: A Simple Deep Learning Approach

no code implementations14 Aug 2016 Zhangyang Wang, Shiyu Chang, Qing Ling, Shuai Huang, Xia Hu, Honghui Shi, Thomas S. Huang

With the agreement of my coauthors, I Zhangyang Wang would like to withdraw the manuscript "Stacked Approximated Regression Machine: A Simple Deep Learning Approach".

Deep Learning, regression

D3: Deep Dual-Domain Based Fast Restoration of JPEG-Compressed Images

no code implementations CVPR 2016 Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang

In this paper, we design a Deep Dual-Domain (D3) based fast restoration model to remove artifacts of JPEG compressed images.

Make Workers Work Harder: Decoupled Asynchronous Proximal Stochastic Gradient Descent

no code implementations21 May 2016 Yitan Li, Linli Xu, Xiaowei Zhong, Qing Ling

Asynchronous parallel optimization algorithms for solving large-scale machine learning problems have recently drawn significant attention from both academia and industry.

Learning A Deep $\ell_\infty$ Encoder for Hashing

no code implementations6 Apr 2016 Zhangyang Wang, Yingzhen Yang, Shiyu Chang, Qing Ling, Thomas S. Huang

We investigate the $\ell_\infty$-constrained representation which demonstrates robustness to quantization errors, utilizing the tool of deep learning.

Quantization

$\mathbf{D^3}$: Deep Dual-Domain Based Fast Restoration of JPEG-Compressed Images

no code implementations16 Jan 2016 Zhangyang Wang, Ding Liu, Shiyu Chang, Qing Ling, Yingzhen Yang, Thomas S. Huang

In this paper, we design a Deep Dual-Domain ($\mathbf{D^3}$) based fast restoration model to remove artifacts of JPEG compressed images.

Learning Deep $\ell_0$ Encoders

no code implementations1 Sep 2015 Zhangyang Wang, Qing Ling, Thomas S. Huang

We study the $\ell_0$ sparse approximation problem with the tool of deep learning, by proposing Deep $\ell_0$ Encoders.

Decentralized learning for wireless communications and networking

no code implementations30 Mar 2015 Georgios B. Giannakis, Qing Ling, Gonzalo Mateos, Ioannis D. Schizas, Hao Zhu

This chapter deals with decentralized learning algorithms for in-network processing of graph-valued data.

Spectrum Cartography

EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization

no code implementations24 Apr 2014 Wei Shi, Qing Ling, Gang Wu, Wotao Yin

In this paper, we develop a decentralized algorithm for the consensus optimization problem $$\min\limits_{x\in\mathbb{R}^p}~\bar{f}(x)=\frac{1}{n}\sum\limits_{i=1}^n f_i(x),$$ which is defined over a connected network of $n$ agents, where each function $f_i$ is held privately by agent $i$ and encodes the agent's data and objective.

Optimization and Control
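
For reference, EXTRA's recursion is usually stated with a doubly stochastic mixing matrix $W$, $\tilde{W}=(I+W)/2$, a stacked variable $\mathbf{x}^k$ collecting all agents' local copies, and $\nabla\mathbf{f}(\mathbf{x}^k)$ stacking the local gradients $\nabla f_i(x_i^k)$; this is the standard form from the decentralized optimization literature, reproduced here as a reminder rather than quoted from this listing:

$$\mathbf{x}^{1}=W\mathbf{x}^{0}-\alpha\nabla\mathbf{f}(\mathbf{x}^{0}), \qquad \mathbf{x}^{k+2}=(I+W)\mathbf{x}^{k+1}-\tilde{W}\mathbf{x}^{k}-\alpha\left[\nabla\mathbf{f}(\mathbf{x}^{k+1})-\nabla\mathbf{f}(\mathbf{x}^{k})\right].$$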
