Improving Deep Neural Networks with Probabilistic Maxout Units

20 Dec 2013 · Jost Tobias Springenberg, Martin Riedmiller

We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout when compared to rectified linear units. However, it also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation, we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).
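The abstract describes the mechanism only at a high level: where a maxout unit returns the maximum over a group of k linear activations, a probout unit instead samples one member of the group, with sampling probabilities given by a softmax over the scaled activations. The following is a minimal NumPy sketch of that idea; the function name `probout`, the inverse-temperature parameter `lam`, and the deterministic test-time fallback to the largest (most probable) activation are illustrative assumptions here, not the paper's exact procedure, which additionally accounts for the interaction with dropout.

```python
import numpy as np

def probout(z, lam=1.0, training=True, rng=None):
    """Probabilistic maxout over a group of k linear activations.

    z:   array of shape (..., k) holding the k linear responses of one group
    lam: inverse-temperature hyperparameter; larger lam makes the sampling
         distribution more peaked (lam -> inf recovers ordinary maxout)
    """
    rng = np.random.default_rng() if rng is None else rng
    if not training:
        # Assumed deterministic fallback: pick the most probable, i.e.
        # largest, activation, which coincides with a standard maxout unit.
        return z.max(axis=-1)
    # Sampling distribution p_i = exp(lam * z_i) / sum_j exp(lam * z_j),
    # computed with the usual max-subtraction for numerical stability.
    scaled = lam * z
    scaled -= scaled.max(axis=-1, keepdims=True)
    p = np.exp(scaled)
    p /= p.sum(axis=-1, keepdims=True)
    # Sample one member per group and return its (unscaled) activation.
    k = z.shape[-1]
    flat_z, flat_p = z.reshape(-1, k), p.reshape(-1, k)
    idx = np.array([rng.choice(k, p=row) for row in flat_p])
    out = np.take_along_axis(flat_z, idx[:, None], axis=-1)
    return out.reshape(z.shape[:-1])

# Usage: a batch of 2 examples, 3 groups, k = 5 linear pieces per group.
z = np.random.default_rng(0).standard_normal((2, 3, 5))
y = probout(z, lam=2.0)  # stochastic output of shape (2, 3)
```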


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Image Classification | CIFAR-10 | DNN+Probabilistic Maxout | Percentage correct | 90.6 | #140 |
| Image Classification | CIFAR-100 | DNN+Probabilistic Maxout | Percentage correct | 61.9 | #130 |
