Deep neural networks achieve excellent predictive accuracy, yet reliable and robust uncertainty estimation remains a challenge.
We show that this robustness can be partially explained by the calibration behavior of modern CNNs, and may be improved by deliberately inducing overconfidence.
In recent years, graph neural network (GNN)-based approaches have become a popular strategy for processing point cloud data, regularly achieving state-of-the-art performance on a variety of tasks.
Quantized neural networks (QNNs) are the standard approach for efficiently deploying deep learning models on tiny hardware platforms.
Bayesian neural networks (BNNs) are gaining traction in many research areas where decision-making must be accompanied by uncertainty estimates.
In this paper, we adapt the well-established YOLOv3 architecture to generate uncertainty estimates by introducing stochasticity in the form of Monte Carlo Dropout (MC-Drop), and evaluate it across different levels of dataset shift.
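To illustrate the general MC-Drop idea (this is a minimal NumPy sketch on a hypothetical single linear layer, not the paper's YOLOv3 implementation): dropout is kept active at inference time, the network is run T times with independent dropout masks, and the mean and spread of the stochastic predictions serve as the prediction and its uncertainty.

```python
import numpy as np

def mc_dropout_predict(x, W, b, p=0.5, T=100, rng=None):
    """Monte Carlo Dropout on a toy linear model y = (x * mask) @ W + b.

    Unlike standard inference, dropout stays ON: each of the T forward
    passes samples a fresh inverted-dropout mask, and the ensemble of
    outputs yields a predictive mean and a dispersion-based uncertainty.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    preds = []
    for _ in range(T):
        # inverted dropout: zero units with prob p, rescale the survivors
        mask = (rng.random(x.shape) > p) / (1.0 - p)
        preds.append((x * mask) @ W + b)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

The same recipe applies to a full detector: wrap the stochastic forward pass around the whole network and aggregate the T detection outputs instead of a single linear response.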
The Winograd (Cook-Toom) class of algorithms reduces the overall compute complexity of many modern deep convolutional neural networks (CNNs).
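As a concrete instance of the complexity reduction, the classic F(2,3) Winograd transform computes two outputs of a 3-tap correlation with 4 multiplications instead of the 6 required by the direct method; the sketch below (illustrative, with hypothetical function names) checks it against the direct computation.

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap correlation in 4 multiplies."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2.0
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2.0
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

def direct_f23(d, g):
    """Reference: sliding 3-tap correlation, 6 multiplies for 2 outputs."""
    return np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                     d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
```

The filter-side factors involving g can be precomputed once per kernel, which is why the saving compounds across the many spatial tiles of a CNN feature map.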