no code implementations • 7 Jun 2017 • Yuriy Kochura, Sergii Stirenko, Anis Rojbi, Oleg Alienin, Michail Novotarskiy, Yuri Gordienko
The basic features of some of the most versatile and popular open source frameworks for machine learning (TensorFlow, Deep Learning4j, and H2O) are considered and compared.
no code implementations • 16 Jul 2017 • Yuriy Kochura, Sergii Stirenko, Yuri Gordienko
Deep learning (deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures or otherwise composed of multiple non-linear transformations.
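The idea of "multiple processing layers composed of non-linear transformations" can be sketched minimally as a stack of affine layers each followed by a non-linearity. This is an illustrative sketch in plain NumPy, not code from the paper; the layer sizes and ReLU choice are assumptions for demonstration.

```python
import numpy as np

def relu(x):
    # Non-linear transformation applied after each affine layer
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass the input through a stack of (weights, bias) layers,
    each followed by a non-linearity -- the 'multiple processing
    layers' of the deep-learning definition above."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# Three processing layers: 4 -> 8 -> 8 -> 2 (sizes are arbitrary)
layers = [
    (rng.standard_normal((4, 8)), np.zeros(8)),
    (rng.standard_normal((8, 8)), np.zeros(8)),
    (rng.standard_normal((8, 2)), np.zeros(2)),
]
out = forward(rng.standard_normal((1, 4)), layers)
```

Each layer re-represents its input at a higher level of abstraction; real frameworks (TensorFlow, Deep Learning4j, H2O) automate exactly this composition plus gradient-based training.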
1 code implementation • 29 Aug 2017 • Yuriy Kochura, Sergii Stirenko, Oleg Alienin, Michail Novotarskiy, Yuri Gordienko
The basic features of some of the most versatile and popular open source frameworks for machine learning (TensorFlow, Deep Learning4j, and H2O) are considered and compared.
no code implementations • 30 Dec 2017 • Yuri Gordienko, Sergii Stirenko, Yuriy Kochura, Oleg Alienin, Michail Novotarskiy, Nikita Gordienko
A new method is proposed to monitor the level of current physical load and accumulated fatigue by several objective and subjective characteristics.
1 code implementation • 3 Mar 2018 • Sergii Stirenko, Yuriy Kochura, Oleg Alienin, Oleksandr Rokovyi, Peng Gang, Wei Zeng, Yuri Gordienko
Lossless data augmentation of the segmented dataset leads to the lowest validation loss (without overfitting) and nearly the same accuracy (within the limits of standard deviation) in comparison to the original and other pre-processed datasets after lossy data augmentation.
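The lossless/lossy distinction drawn above can be illustrated briefly: lossless augmentations (flips, 90-degree rotations) only permute pixels, so no information is altered, whereas lossy ones (arbitrary-angle rotation, scaling, elastic deformation) require interpolation and change pixel values. A minimal sketch of the lossless case, assuming a NumPy image array (this is illustrative, not the paper's pipeline):

```python
import numpy as np

def lossless_augment(img):
    """Lossless augmentations: horizontal/vertical flips and a
    90-degree rotation. Every output pixel is an exact copy of
    some input pixel, so the pixel multiset is unchanged.
    Lossy augmentations (e.g. rotation by an arbitrary angle)
    would instead interpolate and alter pixel values."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]

img = np.arange(9, dtype=np.uint8).reshape(3, 3)
augmented = lossless_augment(img)
```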
no code implementations • 5 Jun 2018 • Vlad Taran, Nikita Gordienko, Yuriy Kochura, Yuri Gordienko, Alexandr Rokovyi, Oleg Alienin, Sergii Stirenko
Here, results of applying several deep learning architectures (PSPNet and ICNet) to semantic image segmentation of traffic stereo-pair images are presented.
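Semantic segmentation networks such as PSPNet and ICNet output per-pixel class scores; the predicted label map is the argmax over the class axis. A minimal sketch of that final step, with illustrative shapes and random logits standing in for real network output:

```python
import numpy as np

# A segmentation network produces per-pixel class scores (logits)
# of shape (height, width, num_classes); shapes here are assumed
# for illustration only.
h, w, num_classes = 4, 6, 3
rng = np.random.default_rng(1)
logits = rng.standard_normal((h, w, num_classes))

# The predicted label map assigns each pixel its highest-scoring class.
label_map = logits.argmax(axis=-1)  # shape (h, w), integer class ids
```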
no code implementations • 31 Dec 2018 • Yuriy Kochura, Yuri Gordienko, Vlad Taran, Nikita Gordienko, Alexandr Rokovyi, Oleg Alienin, Sergii Stirenko
A significant speedup was obtained even for extremely low-scale usage of Google TPUv2 units (8 cores only) in comparison with the quite powerful NVIDIA Tesla K80 GPU: up to 10x for the training stage (excluding overheads) and up to 2x for the prediction stage (both with and without overheads).