A Feedforward Network, or Multilayer Perceptron (MLP), is a neural network consisting solely of densely connected (fully connected) layers. This is the classic neural network architecture of the literature. It consists of inputs $x$ passed through one or more layers of hidden units $h$ to predict a target $y$. Activation functions are generally chosen to be non-linear to allow for flexible function approximation.
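As a minimal sketch of the forward pass described above, the following NumPy snippet implements a one-hidden-layer MLP (inputs $x$, hidden units $h$ via a non-linear activation, output $y$). The dimensions (4 inputs, 8 hidden units, 1 target) and the choice of `tanh` are illustrative assumptions, not prescribed by any particular source.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer MLP: x -> h -> y."""
    h = np.tanh(x @ W1 + b1)  # non-linear hidden activation
    return h @ W2 + b2        # linear output layer (e.g. for regression)

# Hypothetical dimensions: 4 inputs, 8 hidden units, 1 output target.
W1 = rng.normal(size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

x = rng.normal(size=(2, 4))   # batch of 2 examples
y_hat = mlp_forward(x, W1, b1, W2, b2)
print(y_hat.shape)  # (2, 1)
```

Stacking additional `(W, b)` pairs with an activation between each gives the deeper variants the text alludes to; without the non-linearity, the stack would collapse to a single linear map.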
Image source: *Deep Learning*, Goodfellow et al.
Task | Papers | Share
---|---|---
Object Detection | 67 | 9.19%
Self-Supervised Learning | 62 | 8.50%
Image Generation | 39 | 5.35%
Semantic Segmentation | 30 | 4.12%
Image Classification | 20 | 2.74%
Disentanglement | 16 | 2.19%
Face Swapping | 13 | 1.78%
Language Modelling | 10 | 1.37%
Image Segmentation | 10 | 1.37%