Tight Frame Contractions in Deep Networks

ICLR 2021  ·  John Zarka, Florentin Guth, Stéphane Mallat ·

Numerical experiments demonstrate that deep neural network classifiers progressively separate class distributions around their means, achieving linear separability. To explain this mechanism, we introduce structured deep network architectures that can be analyzed mathematically and reach high classification accuracies on complex image databases. They iterate tight frame contractions, which apply a pointwise contraction to decomposition coefficients in a tight frame. Tight frame contractions can reduce within-class variabilities while preserving class mean separations, and hence improve the Fisher discriminant ratio. Variance reduction bounds are proved for soft-thresholding contractions with Gaussian mixture models. Iterating tight frame contractions defines a deep convolutional network without bias parameters in hidden layers. We show that spatial filters do not need to be learned and can be defined from wavelet frames. Learning frame contractions along the resulting wavelet scattering channels is sufficient to nearly reach the classification accuracies of VGG-11 and ResNet-18 on ImageNet, with no learned bias.
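The core mechanism from the abstract can be illustrated concretely: decompose a signal in a tight frame (analysis operator W with WᵀW = I), apply a pointwise soft-thresholding contraction to the coefficients, and reconstruct. Below is a minimal NumPy sketch of this idea, not the paper's actual architecture; the frame construction (orthonormal columns of a random orthogonal matrix) and function names are illustrative choices, and the paper uses wavelet frames over scattering channels instead.

```python
import numpy as np

def soft_threshold(y, t):
    """Pointwise soft-thresholding contraction: shrinks each coefficient toward 0 by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def tight_frame(m, n, seed=0):
    """Illustrative Parseval tight frame analysis operator W of shape (m, n), m >= n.

    Taking n orthonormal columns of a random m x m orthogonal matrix gives
    W.T @ W = I_n, so synthesis (W.T) inverts analysis (W) exactly.
    """
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((m, m)))
    return q[:, :n]

def frame_contraction(x, W, t):
    """Analyze in the tight frame, contract coefficients pointwise, synthesize back.

    Because W has unit singular values and soft-thresholding shrinks every
    coefficient, the output norm never exceeds the input norm.
    """
    return W.T @ soft_threshold(W @ x, t)
```

With t = 0 the operation is the identity (perfect reconstruction, since WᵀW = I); with t > 0 it contracts the signal, which is the variance-reduction mechanism the abstract describes.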





