This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness.
This paper introduces supervised contrastive active learning (SCAL) by leveraging the contrastive loss for active learning in a supervised setting.
In this paper, we propose an approach to improve image captioning solutions for images with novel objects that do not have caption labels in the training dataset.
In this paper, we study the impact of motion blur, a common quality flaw in real-world images, on a state-of-the-art two-stage image captioning solution, and observe that performance degrades as blur intensity increases.
This brief sketches initial progress towards a unified energy-based solution for the semi-supervised visual anomaly detection and localization problem.
Obtaining reliable and accurate quantification of uncertainty estimates from deep neural networks is important in safety-critical applications.
We also show that the feature reconstruction error, which is the $L_2$-norm of the difference between the original feature and the pre-image of its embedding, is highly effective for OOD detection and in some cases superior to the log-likelihood scores.
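A minimal sketch of the feature reconstruction error described above, not the paper's implementation: it assumes some invertible-in-principle embedding with a hypothetical `encode`/`decode` pair, and scores a feature by the $L_2$ distance to the pre-image of its embedding (larger scores suggesting OOD inputs).

```python
import numpy as np

def feature_reconstruction_error(feature, encode, decode):
    # L2 norm between the original feature and the pre-image of its
    # embedding; higher values indicate likely out-of-distribution inputs.
    pre_image = decode(encode(feature))
    return np.linalg.norm(feature - pre_image)

# Toy illustration: the "embedding" projects onto the first two of four
# dimensions, so any component outside that subspace is lost on decode.
W = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])
encode = lambda x: W.T @ x
decode = lambda z: W @ z

feature_reconstruction_error(np.array([1., 2., 0., 0.]), encode, decode)  # → 0.0 (in-distribution)
feature_reconstruction_error(np.array([1., 2., 3., 0.]), encode, decode)  # → 3.0 (OOD component)
```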
15 Nov 2020 • Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders.
The "baseline" for the policy-gradients in B-SCST is generated by averaging predictive quality metrics (CIDEr-D) of the captions drawn from the distribution obtained using a Bayesian DNN model.
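The baseline construction above can be sketched in a few lines; this is an illustrative simplification, not the B-SCST implementation. It assumes an array of per-caption quality scores (e.g., CIDEr-D) for captions sampled from the Bayesian model, and subtracts their mean as the policy-gradient baseline.

```python
import numpy as np

def bscst_advantages(rewards):
    # Baseline = average quality metric (e.g., CIDEr-D) over captions
    # sampled from the Bayesian model; advantage = reward minus baseline.
    baseline = np.mean(rewards)
    return rewards - baseline

bscst_advantages(np.array([1.0, 2.0, 3.0]))  # → array([-1., 0., 1.])
```

By construction the advantages sum to zero, so captions scoring above the sample average are reinforced and those below are suppressed.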
In the second approach, we use Bayesian deep neural networks trained with mean-field variational inference to estimate model uncertainty associated with the predictions.
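A toy sketch of how predictive uncertainty is obtained from a mean-field variational posterior, under assumptions of my own (a single linear layer with a factorized Gaussian over its two weights, hypothetical means and standard deviations): weights are sampled repeatedly and the spread of the resulting predictions estimates model uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean-field Gaussian posterior over a linear layer's weights.
w_mean = np.array([1.0, -0.5])
w_std = np.array([0.1, 0.2])

def predict_with_uncertainty(x, n_samples=1000):
    # Monte Carlo estimate: sample weights from the factorized Gaussian
    # posterior, run the model for each sample, then summarize predictions.
    ws = rng.normal(w_mean, w_std, size=(n_samples, 2))
    preds = ws @ x
    return preds.mean(), preds.std()
```

For `x = [1, 1]` the analytic predictive distribution is N(0.5, sqrt(0.01 + 0.04)), so the Monte Carlo mean and standard deviation should land near 0.5 and 0.224.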
We present a principled approach for detecting out-of-distribution (OOD) and adversarial samples in deep neural networks.
We propose MOdel Priors with Empirical Bayes using DNN (MOPED) method to choose informed weight priors in Bayesian neural networks.
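A minimal sketch of the MOPED idea of informed weight priors, not the full method: each weight's Gaussian prior is centered at the maximum-likelihood estimate from a trained deterministic DNN, with a standard deviation scaled by the weight's magnitude (the scale factor `delta` is a hyperparameter assumed here).

```python
import numpy as np

def moped_prior(w_mle, delta=0.1):
    # Empirical-Bayes informed prior per weight: mean = MLE weight from a
    # trained deterministic DNN, std proportional to the weight magnitude.
    prior_mean = w_mle
    prior_std = delta * np.abs(w_mle)
    return prior_mean, prior_std

moped_prior(np.array([2.0, -4.0]))  # → (array([ 2., -4.]), array([0.2, 0.4]))
```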
The underlying generative models are built from realistic simulation software, wrapped in a Bayesian error model for the gap between simulation outputs and real data.
In the multimodal setting, the proposed framework improved precision-recall AUC by 10.2% on a subset of the MiT dataset compared to a non-Bayesian baseline.
We show that Bayesian inference applied to DNNs provides reliable confidence measures for the visual activity recognition task compared to conventional DNNs.
We propose a greedy part-assignment algorithm that exploits the inherent structure of the human body to achieve lower complexity than prior published work.