Performing supervised learning on data synthesized by Generative Adversarial Networks (GANs), dubbed GAN-synthetic data, has two important applications.
This paper proposes BRIEF, a backward reduction algorithm that explores compact CNN-model designs from the information flow perspective.
For training fully-connected neural networks (FCNNs), we propose a practical approximate second-order method comprising: 1) an approximation of the Hessian matrix and 2) a conjugate gradient (CG) based method.
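As a rough illustration of the CG component, the sketch below solves the Newton system Hx = g using only Hessian-vector products, which is how CG is typically applied in second-order neural-network training. This is a generic CG solver under the assumption that the (approximate) Hessian is symmetric positive definite; the function names and the toy 2x2 system are illustrative, not taken from the paper.

```python
import numpy as np

def conjugate_gradient(hvp, g, max_iters=50, tol=1e-8):
    """Solve H x = g via CG, given only a Hessian-vector product hvp(v).

    Avoids forming H explicitly, which is what makes CG practical
    for high-dimensional problems such as FCNN training.
    """
    x = np.zeros_like(g)
    r = g - hvp(x)          # residual of the current iterate
    p = r.copy()            # initial search direction
    rs_old = r @ r
    for _ in range(max_iters):
        Hp = hvp(p)
        alpha = rs_old / (p @ Hp)   # step length along p
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged
            break
        p = r + (rs_new / rs_old) * p  # conjugate direction update
        rs_old = rs_new
    return x

# Toy example: a small SPD matrix stands in for the approximate Hessian.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
x = conjugate_gradient(lambda v: H @ v, g)
```

In practice the Hessian-vector product would be computed by automatic differentiation rather than an explicit matrix, so only O(n) memory per CG iteration is needed.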
The scale of data and the scale of computation infrastructure together enable the current deep learning renaissance.
Deep learning owes its success to three key factors: scale of data, enhanced models to learn representations from data, and scale of computation.
The main component of this architecture is a Lucas-Kanade layer that performs the inverse compositional algorithm on convolutional feature maps.