The duality structure gradient descent algorithm: analysis and applications to neural networks

1 Aug 2017 · Thomas Flynn

The training of deep neural networks is typically carried out using some form of gradient descent, often with great success. However, existing non-asymptotic analyses of first-order optimization algorithms typically employ a gradient smoothness assumption that is too strong to be applicable in the case of deep neural networks. To address this, we propose an algorithm named duality structure gradient descent (DSGD) that is amenable to non-asymptotic performance analysis under mild assumptions on the training set and network architecture. The algorithm can be viewed as a form of layer-wise coordinate descent, where at each iteration it chooses one layer of the network to update. The choice of which layer to update is made greedily, based on a rigorous lower bound on the improvement in the objective function for each candidate layer. In the analysis, we bound the time required to reach approximate stationary points, in both the deterministic and stochastic settings. Convergence is measured in terms of a parameter-dependent family of norms that is derived from the network architecture and designed to confer a smoothness-like property on the gradient of the training loss function. We empirically demonstrate the effectiveness of DSGD in several neural network training scenarios.
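The greedy layer-selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact method: the per-layer improvement bound (improvement_bound) and the fixed step size are hypothetical placeholders standing in for the norm-based bounds derived in the paper.

    import numpy as np

    def dsgd_step(layers, grad_fn, improvement_bound, lr=0.1):
        """One greedy layer-wise update in the spirit of DSGD (sketch only).

        layers: list of per-layer weight arrays.
        grad_fn(i, layers): gradient of the training loss w.r.t. layer i.
        improvement_bound(i, layers, g): hypothetical lower bound on the loss
            decrease obtained by updating only layer i; in the paper this is
            derived from a parameter-dependent family of norms.
        """
        grads = [grad_fn(i, layers) for i in range(len(layers))]
        # Greedily pick the layer with the largest guaranteed improvement.
        bounds = [improvement_bound(i, layers, g) for i, g in enumerate(grads)]
        k = int(np.argmax(bounds))
        # Update only the chosen layer (a form of layer-wise coordinate descent).
        layers[k] = layers[k] - lr * grads[k]
        return layers, k

A caller would supply grad_fn and improvement_bound appropriate to the network at hand; in the paper, the bound and the per-layer step are tied to the architecture-dependent family of norms rather than a fixed learning rate.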
