Search Results for author: Joseph D. Janizek

Found 4 papers, 3 papers with code

True to the Model or True to the Data?

no code implementations • 29 Jun 2020 • Hugh Chen, Joseph D. Janizek, Scott Lundberg, Su-In Lee

Furthermore, we argue that the choice comes down to whether it is desirable to be true to the model or true to the data.

BIG-bench Machine Learning
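The paper's central distinction can be made concrete with a toy example (my own illustration, not code from the paper): a model that reads only the first of two perfectly correlated features. Interventional ("true to the model") Shapley values credit only the feature the model actually uses; observational ("true to the data") Shapley values split credit across the correlated pair.

```python
def shapley_two_features(x, mu, observational):
    """Exact Shapley values for a two-feature model f(x) = x[0],
    where the features are perfectly correlated (x[0] == x[1]).

    v(S) is the expected model output when only the features in S are
    known. Interventional: a missing feature is drawn from its marginal,
    so v falls back to the mean output mu. Observational: a missing
    feature is inferred from the feature we do observe."""
    def v(S):
        if 0 in S:
            return x[0]            # the model's input is known directly
        if 1 in S and observational:
            return x[1]            # infer x[0] from its correlate x[1]
        return mu                  # marginal expectation of the output
    phi0 = 0.5 * ((v({0}) - v(set())) + (v({0, 1}) - v({1})))
    phi1 = 0.5 * ((v({1}) - v(set())) + (v({0, 1}) - v({0})))
    return phi0, phi1

x, mu = [2.0, 2.0], 0.0
print(shapley_two_features(x, mu, observational=False))  # (2.0, 0.0): true to the model
print(shapley_two_features(x, mu, observational=True))   # (1.0, 1.0): true to the data
```

Both attribution vectors sum to f(x) - mu = 2; they disagree only on how credit is shared between the correlated features.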

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks

2 code implementations • 10 Feb 2020 • Joseph D. Janizek, Pascal Sturmfels, Su-In Lee

Integrated Hessians overcomes several theoretical limitations of previous methods for explaining interactions, and, unlike those methods, is not limited to a specific architecture or class of neural network.
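As described in the paper, Integrated Hessians explains the interaction between features i and j by applying Integrated Gradients to the Integrated Gradients attribution itself. A minimal numerical sketch of that idea, using finite differences and Riemann sums in place of autodiff; the toy function and step counts are illustrative, not the authors' implementation:

```python
import numpy as np

def num_grad(f, x, eps=1e-5):
    # Central finite-difference gradient of a scalar function f at x.
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline=None, steps=100):
    # Midpoint Riemann-sum approximation of Integrated Gradients
    # along the straight path from the baseline to x.
    if baseline is None:
        baseline = np.zeros_like(x, dtype=float)
    total = np.zeros_like(x, dtype=float)
    for k in range(steps):
        a = (k + 0.5) / steps
        total += num_grad(f, baseline + a * (x - baseline))
    return (x - baseline) * total / steps

def integrated_hessians(f, x, steps=50):
    # Interaction of features (i, j): Integrated Gradients applied to
    # the i-th Integrated Gradients attribution, viewed as a function of x.
    n = len(x)
    gamma = np.zeros((n, n))
    for i in range(n):
        gi = lambda z, i=i: integrated_gradients(f, z, steps=steps)[i]
        gamma[i] = integrated_gradients(gi, x, steps=steps)
    return gamma

f = lambda z: z[0] * z[1]          # purely multiplicative interaction
x = np.array([2.0, 3.0])
gamma = integrated_hessians(f, x)
```

For this function, each row of `gamma` sums to the corresponding Integrated Gradients attribution, and the whole matrix sums to f(x), so the entire output is explained as interaction between the two features.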

An Adversarial Approach for the Robust Classification of Pneumonia from Chest Radiographs

1 code implementation • 13 Jan 2020 • Joseph D. Janizek, Gabriel Erion, Alex J. DeGrave, Su-In Lee

In order for these models to be safely deployed, we would like to ensure that they do not use confounding variables to make their classification, and that they will work well even when tested on images from hospitals that were not included in the training data.

General Classification • Robust Classification

Improving performance of deep learning models with axiomatic attribution priors and expected gradients

3 code implementations • ICLR 2020 • Gabriel Erion, Joseph D. Janizek, Pascal Sturmfels, Scott Lundberg, Su-In Lee

Recent research has demonstrated that feature attribution methods for deep networks can themselves be incorporated into training; these attribution priors optimize for a model whose attributions have certain desirable properties -- most frequently, that particular features are important or unimportant.

Interpretable Machine Learning
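Expected gradients, the attribution method this paper builds its priors on, averages Integrated Gradients over baselines sampled from the training data. A minimal Monte Carlo sketch on a toy logistic model; the model, weights, and sample counts are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable model: logistic regression f(x) = sigmoid(w . x).
w = np.array([1.0, -2.0, 0.5])

def f(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def grad_f(x):
    # Analytic gradient of the sigmoid model w.r.t. its input.
    s = f(x)
    return s * (1.0 - s) * w

def expected_gradients(x, background, n_samples=5000):
    """Monte Carlo estimate: draw a baseline x' from the background
    data and an interpolation point alpha ~ U(0, 1), then average
    (x - x') * grad f(x' + alpha * (x - x'))."""
    attr = np.zeros_like(x)
    for _ in range(n_samples):
        xp = background[rng.integers(len(background))]
        alpha = rng.uniform()
        attr += (x - xp) * grad_f(xp + alpha * (x - xp))
    return attr / n_samples

background = rng.normal(size=(100, 3))   # reference distribution
x = np.array([2.0, 1.0, -1.0])
attr = expected_gradients(x, background)

# Completeness: attributions sum (approximately) to f(x) minus the
# mean model output over the background data.
print(attr.sum(), f(x) - f(background).mean())
```

An attribution prior would then add a penalty on these attributions (for example, encouraging sparsity or smoothness across features) to the training loss, which is what makes the attributions differentiable targets rather than post-hoc diagnostics.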
