
Comparative Performance Analysis of Neural Networks Architectures on H2O Platform for Various Activation Functions

Deep learning (deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures or otherwise composed of multiple non-linear transformations. In this paper, we present the results of testing neural network architectures on the H2O platform for various activation functions, stopping metrics, and other parameters of the machine learning algorithm. Using the MNIST database of handwritten digits in single-threaded mode, we demonstrate that a blind selection of these parameters can increase the runtime enormously (by 2-3 orders of magnitude) without a significant increase in precision. This result can have a crucial influence on the optimization of existing and new machine learning methods, especially for image recognition problems.
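
To illustrate the kind of experiment described above, here is a minimal sketch using the H2O Python API, sweeping over a few activation functions and stopping metrics on MNIST in single-threaded mode. The file names, the label column name, and the hidden-layer sizes are placeholders chosen for illustration, not the paper's exact configuration.

# Hypothetical sketch: H2O deep learning on MNIST, single-threaded,
# comparing activation functions and stopping metrics.
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

# Single-threaded H2O cluster, matching the paper's benchmark setting.
h2o.init(nthreads=1)

# Placeholder paths: local CSV exports of MNIST (784 pixel columns plus a
# label column, assumed here to be named "C785").
train = h2o.import_file("mnist_train.csv")
test = h2o.import_file("mnist_test.csv")

target = "C785"  # assumed label column name
features = [c for c in train.columns if c != target]
train[target] = train[target].asfactor()
test[target] = test[target].asfactor()

# Sweep over activation functions and early-stopping metrics.
for activation in ["Rectifier", "Tanh", "Maxout"]:
    for metric in ["misclassification", "logloss"]:
        model = H2ODeepLearningEstimator(
            activation=activation,
            hidden=[200, 200],       # illustrative architecture
            epochs=10,
            stopping_metric=metric,
            stopping_rounds=3,
            stopping_tolerance=0.01,
            seed=1,
        )
        model.train(x=features, y=target,
                    training_frame=train, validation_frame=test)
        print(activation, metric, model.logloss(valid=True))

Wall-clock time per configuration can be recorded around each model.train() call to reproduce the runtime-versus-precision comparison discussed in the paper.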
