A Differential Topological View of Challenges in Learning with Feedforward Neural Networks

26 Nov 2018 · Hao Shen

Among the many open questions in the theory of Deep Neural Networks (DNNs), three fundamental challenges stand out and demand solutions: expressibility, optimisability, and generalisability. Although significant progress has been made using various theories, e.g. the information bottleneck, sparse representation, statistical inference, and Riemannian geometry, no single theory so far provides solutions to all three challenges. In this work, we propose to employ the theory of differential topology to address these three problems. By modelling the dataset of interest as a smooth manifold, a DNN can be viewed as a composition of smooth maps between smooth manifolds. Specifically, our work offers a differential topological view of the loss landscape of DNNs, of the interplay between width and depth in expressibility, and of regularisation for generalisability. Finally, in the setting of deep representation learning, we further apply the quotient topology to investigate DNN architectures, which makes it possible to capture nuisance factors in the data with respect to a specific learning task.
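To make the composition view concrete, the following sketch (our own notation, not taken from the paper) writes an L-layer feedforward network as a chain of smooth maps between manifolds:

$$
F = f_L \circ f_{L-1} \circ \cdots \circ f_1, \qquad f_i : \mathcal{M}_{i-1} \to \mathcal{M}_i, \qquad f_i(x) = \sigma_i(W_i x + b_i),
$$

where $\mathcal{M}_0$ is the data manifold embedded in the input space and $\mathcal{M}_i = f_i(\mathcal{M}_{i-1})$ is the image of the previous layer. Each layer map $f_i$ is smooth whenever its activation $\sigma_i$ is smooth (e.g. tanh or softplus, though not ReLU), so under this assumption the whole network $F$ is a smooth map from the data manifold into the output space.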
