Linear Backpropagation Leads to Faster Convergence

29 Sep 2021  ·  Li Ziang, Yiwen Guo, Haodi Liu, ChangShui Zhang

Backpropagation is widely used for calculating gradients in deep neural networks (DNNs). Often applied together with stochastic gradient descent (SGD) or its variants, backpropagation is considered the de facto choice in a variety of machine learning tasks, including DNN training and adversarial attack/defense. Nevertheless, unlike SGD, which has been intensively studied in recent years, backpropagation itself has received comparatively little attention. In this paper, we study the recently proposed "linear backpropagation" (LinBP), which modifies standard backpropagation and can improve transferability in black-box adversarial attacks. By providing theoretical analyses of LinBP in neural-network-involved learning tasks, including white-box adversarial attack and model training, we demonstrate that, somewhat surprisingly, LinBP can lead to faster convergence in these tasks. We also confirm our theoretical results with extensive experiments.
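As an illustration only (this code is not from the paper), the sketch below shows the mechanism LinBP is usually described with in the black-box-attack literature: the forward pass applies the ReLU nonlinearity as usual, while the backward pass skips the ReLU derivative and propagates gradients as if the activation were linear. The names `LinBPReLU` and `ToyNet` are hypothetical, and whether the paper applies the idea exactly this way is an assumption here.

```python
import torch
import torch.nn as nn

class LinBPReLU(torch.autograd.Function):
    """ReLU in the forward pass; the backward pass passes the gradient
    through unchanged, i.e., the activation is treated as linear
    (a sketch of the LinBP idea, not the authors' implementation)."""

    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        # Standard ReLU would zero the gradient where the input was <= 0;
        # LinBP propagates it as if the layer were the identity.
        return grad_output


class ToyNet(nn.Module):
    """A small MLP whose hidden activation can switch between standard
    backprop (torch.relu) and linear backprop (LinBPReLU)."""

    def __init__(self, linbp: bool = False):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 10)
        self.linbp = linbp

    def forward(self, x):
        h = self.fc1(x)
        h = LinBPReLU.apply(h) if self.linbp else torch.relu(h)
        return self.fc2(h)


if __name__ == "__main__":
    x = torch.randn(8, 784)
    y = torch.randint(0, 10, (8,))
    for linbp in (False, True):
        model = ToyNet(linbp=linbp)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        print(f"linbp={linbp}, grad norm of fc1.weight:",
              model.fc1.weight.grad.norm().item())
```

Note that the forward computation (and hence the loss value) is identical in both settings; only the gradients differ, which is what makes LinBP a drop-in modification for gradient-based attacks or training loops.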
