Improving Deep Neural Networks with Probabilistic Maxout Units

20 Dec 2013 · Jost Tobias Springenberg, Martin Riedmiller

We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout when compared to rectified linear units. However, it also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation, we ask: can the desirable properties of maxout units be preserved while improving their invariance properties? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).
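
For intuition, here is a minimal NumPy sketch of the idea described in the abstract: a maxout unit takes the maximum over a group of k linear pre-activations, while a probout unit instead samples one of them at random. The specific sampling scheme shown here (a softmax over the pre-activations scaled by a temperature-like parameter `lam`) and the function names are assumptions of this sketch, not code from the authors.

import numpy as np

def maxout(z):
    # Standard maxout: z holds the k linear pre-activations of one unit;
    # the unit's output is their maximum.
    return np.max(z)

def probout(z, lam=1.0, rng=None):
    # Probabilistic maxout (sketch): instead of always returning the max,
    # sample one of the k pre-activations with probability proportional
    # to exp(lam * z_i).  `lam` is an assumed temperature-style
    # hyperparameter; as lam grows large this approaches ordinary maxout.
    rng = np.random.default_rng() if rng is None else rng
    p = np.exp(lam * (z - z.max()))   # subtract max for numerical stability
    p /= p.sum()
    i = rng.choice(len(z), p=p)
    return z[i]

# Example: a group of k = 4 linear pre-activations for one unit
z = np.array([0.3, -1.2, 0.8, 0.1])
print(maxout(z))    # always 0.8
print(probout(z))   # usually 0.8, occasionally one of the smaller values

The stochastic pooling makes the unit only "mostly" invariant to which linear piece is largest, which is the property the paper trades off against plain maxout.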


Results from the Paper


Ranked #184 on Image Classification on CIFAR-10 (using extra training data)

Task                  Dataset    Model                     Metric Name         Metric Value  Global Rank
Image Classification  CIFAR-10   DNN+Probabilistic Maxout  Percentage correct  90.6          #184
Image Classification  CIFAR-100  DNN+Probabilistic Maxout  Percentage correct  61.9          #184

Methods