Interpretability Techniques for Deep Learning
6 papers with code • 0 benchmarks • 0 datasets
Our results on image and text classification and survival analysis tasks demonstrate that CENs are not only competitive with state-of-the-art methods but also offer additional insight into each prediction, which can be valuable for decision support.
Ranked #12 on Sentiment Analysis on IMDb
Explaining deep learning model inferences is a promising avenue for scientific understanding, improving safety, uncovering hidden biases, evaluating fairness, and beyond, as argued by many scholars.
Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images
The mean Micro-F1 score of the models for COVID-19 classification ranges from 0.66 to 0.875, and is 0.89 for the ensemble of the network models.
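The interpretability techniques explored in work like the above are often gradient-based saliency maps: the gradient of the class score with respect to each input pixel indicates how strongly that pixel influences the prediction. A minimal sketch of the idea, using a toy logistic model rather than the CNNs from the paper (all names and the model here are illustrative assumptions, not taken from any of the papers listed):

```python
import numpy as np

def predict(x, w, b):
    """Sigmoid class score for a flattened input x (stand-in for an image)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def saliency(x, w, b):
    """Gradient of the class score w.r.t. each input element.

    For a logistic model the gradient has the closed form s * (1 - s) * w,
    so inputs with large |gradient| influence the score the most.
    """
    s = predict(x, w, b)
    return s * (1.0 - s) * w

# Toy data: a random 16-element "image" and random learned weights.
rng = np.random.default_rng(0)
x = rng.random(16)
w = rng.standard_normal(16)
b = 0.0

sal = np.abs(saliency(x, w, b))
top = np.argsort(sal)[::-1][:3]  # indices of the three most influential inputs
```

For real CNN classifiers the same quantity is obtained via automatic differentiation (one backward pass from the class logit to the input), and refinements such as Grad-CAM or integrated gradients build on this basic gradient signal.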
A Semi-supervised Deep Transfer Learning Approach for Rolling-Element Bearing Remaining Useful Life Prediction
Deep learning techniques have recently brought many improvements in the field of neural network training, especially for prognosis and health management.
Modern machine learning systems based on neural networks have shown great success in learning complex data patterns while being able to make good predictions on unseen data points.