In recent years, considerable progress has been made in the research area of Question Answering (QA) on document images.
We present an approach to quantifying both aleatoric and epistemic uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs).
Here, we show how such context information can be integrated systematically into a deep neural network-based HAR system.
When deploying deep learning technology in self-driving cars, deep neural networks are constantly exposed to domain shifts.
Word spotting is a popular tool for supporting the first exploration of historic, handwritten document collections.
In recent years, convolutional neural networks (CNNs) have taken over the field of document analysis and become the predominant model for word spotting.
Attribute representations have become relevant in image recognition and word spotting, providing robustness in the presence of unbalanced and disjoint datasets.
It is shown that the combination of pointwise mutual information and a cosine loss eases the learning process and thus improves the accuracy.
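The combination of pointwise mutual information (PMI) and a cosine loss can be sketched as follows. This is a minimal, hedged illustration with NumPy, not the paper's implementation: `pmi_matrix` derives PMI targets from a hypothetical co-occurrence matrix, and `cosine_loss` penalizes the angle between predicted and target representations.

```python
import numpy as np

def pmi_matrix(cooc, eps=1e-12):
    # Pointwise mutual information from a co-occurrence count matrix:
    # PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ).
    total = cooc.sum()
    p_xy = cooc / total
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    return np.log((p_xy + eps) / (p_x * p_y + eps))

def cosine_loss(pred, target, eps=1e-12):
    # 1 - cosine similarity, averaged over the batch; invariant to the
    # magnitude of the predicted vectors, which eases optimization.
    num = (pred * target).sum(axis=1)
    den = np.linalg.norm(pred, axis=1) * np.linalg.norm(target, axis=1) + eps
    return float(np.mean(1.0 - num / den))
```

For perfectly aligned predictions the loss is zero, and for statistically independent co-occurrences the PMI targets are (approximately) zero.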
Convolutional Neural Networks have made their mark in various fields of computer vision in recent years.
Furthermore, it will be shown that neuron pruning can be combined with subsequent weight pruning, reducing the sizes of LeNet-5 and VGG16 by up to $92\%$ and $80\%$, respectively.
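The two-stage scheme of neuron pruning followed by weight pruning can be sketched for a single fully connected layer. This is an illustrative NumPy sketch under simple magnitude-based criteria, not the paper's exact method: first whole output neurons (rows) with the smallest L2 norms are removed, then the smallest-magnitude individual weights among the survivors are zeroed.

```python
import numpy as np

def prune_neurons(W, frac):
    # Stage 1: structured pruning. Zero entire output neurons (rows of W)
    # whose weight vectors have the smallest L2 norms.
    W = W.copy()
    norms = np.linalg.norm(W, axis=1)
    k = int(frac * W.shape[0])
    W[np.argsort(norms)[:k]] = 0.0
    return W

def prune_weights(W, frac):
    # Stage 2: unstructured pruning. Additionally zero the fraction `frac`
    # of individual weights with the smallest magnitudes.
    W = W.copy()
    flat = np.sort(np.abs(W).ravel())
    k = int(frac * flat.size)
    if k > 0:
        W[np.abs(W) <= flat[k - 1]] = 0.0
    return W

W = np.arange(1.0, 17.0).reshape(4, 4)
W = prune_weights(prune_neurons(W, 0.5), 0.5)
```

Applying the stages in this order keeps the structured sparsity from stage 1 intact, since weight pruning only removes additional, already-small entries.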
Although these approaches could help to build trust in the predictions of CNNs, they have rarely been shown to work with medical image data, which often poses a challenge because the decision for a class relies on different lesion areas scattered across the entire image.
A novel method for adjusting the network's predictions based on uncertainty information is introduced.
In recent years, deep convolutional neural networks have achieved state-of-the-art performance in various computer vision tasks such as classification, detection, and segmentation.