Deep Ensembles (DE) are a prominent approach achieving excellent performance on key metrics such as accuracy, calibration, uncertainty estimation, and out-of-distribution detection.
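As a minimal sketch of the core idea (the member outputs below are hypothetical stand-ins for independently trained networks), a Deep Ensemble forms its prediction by averaging the members' softmax distributions:

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average the softmax outputs of independently trained ensemble members.

    member_probs: array of shape (n_members, n_classes), each row a
    predictive distribution from one member network.
    """
    probs = np.asarray(member_probs, dtype=float)
    return probs.mean(axis=0)  # the mean of distributions is still a distribution

# Three hypothetical members disagreeing on a 3-class problem
members = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
])
p = ensemble_predict(members)
```

Disagreement between members spreads probability mass across classes, which is what gives DE its good calibration and uncertainty behavior.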
As a result, there is widespread agreement on the importance of endowing Deep Learning models with explanatory capabilities, so that they can themselves answer why a particular prediction was made.
Predictive uncertainty estimation is essential for deploying Deep Neural Networks in real-world autonomous systems.
However, disentangling the different types and sources of uncertainty is non-trivial for most datasets, especially since there is no ground truth for uncertainty.
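One common way to disentangle the two main types of uncertainty from an ensemble (a sketch, assuming classification with softmax outputs) is the entropy decomposition: the entropy of the mean prediction (total uncertainty) splits into the mean entropy of the members (aleatoric, or data, uncertainty) plus the remainder, the mutual information (epistemic, or model, uncertainty):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (nats) of a probability vector."""
    p = np.asarray(p, dtype=float)
    return -(p * np.log(p + eps)).sum(axis=axis)

def decompose_uncertainty(member_probs):
    """Split predictive uncertainty from an ensemble's softmax outputs.

    member_probs: (n_members, n_classes).
    Returns (total, aleatoric, epistemic) where
      total     = entropy of the averaged prediction,
      aleatoric = average entropy of the individual members,
      epistemic = total - aleatoric (the mutual information).
    """
    probs = np.asarray(member_probs, dtype=float)
    total = entropy(probs.mean(axis=0))
    aleatoric = entropy(probs, axis=-1).mean()
    return total, aleatoric, total - aleatoric

# Two confident but contradictory members: low aleatoric, high epistemic
members = np.array([
    [0.9, 0.1],
    [0.1, 0.9],
])
total, aleatoric, epistemic = decompose_uncertainty(members)
```

When the members agree, the epistemic term shrinks toward zero; when they contradict each other, as above, it dominates.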
To this end, this paper will introduce a taxonomy and summary of CAR approaches, a new uncertainty estimation solution for CAR, and a set of experiments on depth accuracy and uncertainty quantification for CAR-based models on KITTI dataset.
Several metrics exist to quantify the similarity between images, but they are ineffective when it comes to measuring the similarity of highly distorted images.
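As a concrete baseline, one of the most common such metrics is the peak signal-to-noise ratio (PSNR); the sketch below is a standard formulation, and its reliance on pixel-wise MSE is exactly why it breaks down for strongly distorted images:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((4, 4), 100.0)
b = a + 10.0  # uniformly shifted copy: MSE = 100
value = psnr(a, b)
```

A uniform brightness shift and a structurally destructive distortion can yield the same MSE, hence the same PSNR, which motivates perceptual alternatives.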
It has become critical for deep learning algorithms to quantify their output uncertainties to satisfy reliability constraints and provide accurate results.
Along with predictive performance and runtime speed, reliability is a key requirement for real-world semantic segmentation.
NDA transforms deep features to become more discriminative and, therefore, improves performance on various tasks.
Using a combination of linear and non-linear procedures is critical for generating a sufficiently deep feature space.
1 code implementation • 24 Apr 2021 • Natalia Díaz-Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, Siham Tabik, David Filliat, Policarpo Cruz, Rosana Montes, Francisco Herrera
We tackle this problem by assuming that the symbolic knowledge is expressed in the form of a domain-expert knowledge graph.
Bayesian neural networks (BNNs) have long been considered an ideal, yet unscalable, solution for improving the robustness and the predictive uncertainty of deep neural networks.
This is because modern DNNs are usually miscalibrated, and we cannot characterize their epistemic uncertainty.
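Miscalibration is commonly quantified with the expected calibration error (ECE), which bins predictions by confidence and measures the gap between confidence and accuracy in each bin. A minimal sketch (equal-width bins; function and variable names are illustrative):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: |accuracy - confidence| per equal-width confidence bin,
    weighted by the fraction of samples falling in that bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Perfectly calibrated toy case: 80% confidence, 80% of predictions correct
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
e = expected_calibration_error(conf, corr)
```

A well-calibrated model drives this gap to zero; overconfident DNNs typically show a large positive confidence-minus-accuracy gap.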
During training, the weights of a Deep Neural Network (DNN) are optimized from a random initialization toward a near-optimal value that minimizes a loss function.
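That optimization process can be sketched in miniature (a toy quadratic loss standing in for a real network's loss surface; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a single weight vector, randomly initialized
w = rng.normal(size=3)
target = np.array([1.0, -2.0, 0.5])  # minimizer of the toy loss

def loss(w):
    """A simple quadratic loss with a unique minimum at `target`."""
    return 0.5 * np.sum((w - target) ** 2)

lr = 0.1
initial_loss = loss(w)
for _ in range(200):
    grad = w - target   # analytic gradient of the quadratic loss
    w -= lr * grad      # gradient-descent update
final_loss = loss(w)
```

Real training replaces the analytic gradient with backpropagation over mini-batches, but the trajectory from random initialization toward a loss minimizer is the same picture.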
We propose a novel single-image super-resolution approach based on the geostatistical method of kriging.
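The geostatistical building block here is kriging; as a hedged sketch of that block only (not of the paper's full super-resolution pipeline, and assuming an exponential covariance model), ordinary kriging computes interpolation weights by solving a linear system with an unbiasedness constraint:

```python
import numpy as np

def ordinary_kriging_weights(coords, query, length_scale=1.0):
    """Solve the ordinary-kriging system for interpolation weights.

    coords: (n, d) known-sample locations; query: (d,) target location.
    Assumes an exponential covariance C(h) = exp(-h / length_scale);
    any valid covariance model could be substituted.
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    C = np.exp(-dists / length_scale)                               # sample-sample covariances
    c0 = np.exp(-np.linalg.norm(coords - query, axis=-1) / length_scale)  # sample-query covariances
    # Augment the system with the unbiasedness constraint sum(w) = 1
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(c0, 1.0)
    return np.linalg.solve(A, b)[:n]

# Interpolate the midpoint between two samples on a line
w = ordinary_kriging_weights(np.array([[0.0], [1.0]]), np.array([0.5]))
```

In a super-resolution setting, the known samples would be low-resolution pixel locations and the query a sub-pixel position to reconstruct.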
We propose a novel approach for pixel classification in hyperspectral images, leveraging both the spatial and spectral information in the data.
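A simple illustration of combining the two information sources (not the paper's actual method; the feature choice here is an assumption) is to concatenate each pixel's spectrum with a statistic of its spatial neighborhood:

```python
import numpy as np

def spatial_spectral_feature(cube, row, col, radius=1):
    """Concatenate a pixel's spectrum with its neighborhood's mean spectrum.

    cube: (H, W, B) hyperspectral image with B spectral bands.
    Returns a length-2B feature: [spectral part, spatial part].
    """
    H, W, B = cube.shape
    r0, r1 = max(0, row - radius), min(H, row + radius + 1)
    c0, c1 = max(0, col - radius), min(W, col + radius + 1)
    spectral = cube[row, col]                                # the pixel itself
    spatial = cube[r0:r1, c0:c1].reshape(-1, B).mean(axis=0) # neighborhood average
    return np.concatenate([spectral, spatial])

cube = np.arange(5 * 5 * 3, dtype=float).reshape(5, 5, 3)
feat = spatial_spectral_feature(cube, 2, 2)
```

A classifier trained on such joint features sees both where a pixel sits in the scene and what material signature it carries.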