Deep neural networks have achieved great success in prediction quality, yet reliable and robust uncertainty estimation remains a challenge.
We show that this robustness can be partially explained by the calibration behavior of modern CNNs, and may be improved by encouraging overconfidence.
In recent years graph neural network (GNN)-based approaches have become a popular strategy for processing point cloud data, regularly achieving state-of-the-art performance on a variety of tasks.
In this paper, we adapt the well-established YOLOv3 architecture to produce uncertainty estimates by introducing stochasticity in the form of Monte Carlo Dropout (MC-Drop), and evaluate it across different levels of dataset shift.
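The core of MC-Drop is to keep dropout active at test time and aggregate several stochastic forward passes into a predictive mean and a spread that serves as an uncertainty estimate. The sketch below is a minimal NumPy illustration of that idea on a toy single-layer model; the weights, layer shape, and function names are hypothetical and stand in for the trained YOLOv3 network, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix standing in for a trained layer (hypothetical values).
W = rng.normal(size=(8, 3))

def stochastic_forward(x, drop_p=0.5):
    """One forward pass with inverted dropout kept active at inference."""
    mask = rng.random(x.shape) > drop_p
    h = (x * mask) / (1.0 - drop_p)  # rescale so the expected activation is unchanged
    return h @ W

def mc_dropout_predict(x, T=100):
    """MC-Dropout: T stochastic passes -> predictive mean and per-output std."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(8,))
mean, std = mc_dropout_predict(x)
```

The standard deviation across the T passes grows when dropout masks change the output strongly, which is the signal the paper evaluates under dataset shift.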
The characterisation of the brain as a "connectome", in which connections are represented by correlational values across timeseries and by summary measures derived from graph-theory analyses, has become very popular in recent years.