Search Results for author: Andrew Slavin Ross

Found 10 papers, 8 papers with code

Learning Predictive and Interpretable Timeseries Summaries from ICU Data

no code implementations · 22 Sep 2021 · Nari Johnson, Sonali Parbhoo, Andrew Slavin Ross, Finale Doshi-Velez

Machine learning models that utilize patient data across time (rather than just the most recent measurements) have increased performance for many risk stratification tasks in the intensive care unit.

Time Series · Time Series Analysis

Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement

1 code implementation · 9 Feb 2021 · Andrew Slavin Ross, Finale Doshi-Velez

In representation learning, there has been recent interest in developing algorithms to disentangle the ground-truth generative factors behind a dataset, and metrics to quantify how fully this occurs.

Disentanglement

Evaluating the Interpretability of Generative Models by Interactive Reconstruction

1 code implementation · 2 Feb 2021 · Andrew Slavin Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez

On synthetic datasets, we find that performance on this task differentiates entangled from disentangled models much more reliably than baseline approaches do.

Disentanglement

Training Machine Learning Models by Regularizing their Explanations

1 code implementation · 29 Sep 2018 · Andrew Slavin Ross

In this thesis, we explore the possibility of training machine learning models (with a particular focus on neural networks) using explanations themselves.

BIG-bench Machine Learning

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients

1 code implementation · 26 Nov 2017 · Andrew Slavin Ross, Finale Doshi-Velez

Deep neural networks have proven remarkably effective at solving many classification problems, but have been criticized recently for two major weaknesses: the reasons behind their predictions are uninterpretable, and the predictions themselves can often be fooled by small adversarial perturbations.

Adversarial Robustness
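
The title describes penalizing a network's input gradients alongside the usual training objective. As a rough illustration only (not the paper's exact formulation or hyperparameters), a PyTorch-style sketch of such a penalty might look like the following; `model`, `lam`, and the squared-norm form of the penalty are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy loss plus a penalty on the norm of the input gradient.

    A minimal sketch of input-gradient regularization; `lam` and the exact
    penalty form are illustrative assumptions, not the paper's settings.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Differentiate the loss w.r.t. the *input*, keeping the graph so the
    # penalty itself can be backpropagated through during training.
    (input_grad,) = torch.autograd.grad(ce, x, create_graph=True)
    penalty = input_grad.pow(2).sum(dim=tuple(range(1, x.dim()))).mean()
    return ce + lam * penalty
```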

Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations

1 code implementation · 10 Mar 2017 · Andrew Slavin Ross, Michael C. Hughes, Finale Doshi-Velez

Neural networks are among the most accurate supervised learning methods in use today, but their opacity makes them difficult to trust in critical applications, especially when conditions in training differ from those in test.
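
The approach named in the title constrains a differentiable model's explanations, here taken to be input gradients, on input regions annotated as irrelevant. A minimal PyTorch-style sketch under that assumption follows; the `mask` convention, `lam`, and the exact penalty form are illustrative, not necessarily the paper's settings:

```python
import torch
import torch.nn.functional as F

def right_reasons_loss(model, x, y, mask, lam=10.0):
    """Standard loss plus a penalty on input gradients over masked features.

    `mask` is a per-example binary tensor (same shape as `x`) marking input
    regions the model should *not* rely on. A sketch under stated assumptions.
    """
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x), dim=-1)
    ce = F.nll_loss(log_probs, y)
    # Gradient of the summed log-probabilities w.r.t. the input serves as
    # the model's "explanation", constrained on the annotated regions.
    (input_grad,) = torch.autograd.grad(log_probs.sum(), x, create_graph=True)
    penalty = (mask * input_grad).pow(2).sum(dim=tuple(range(1, x.dim()))).mean()
    return ce + lam * penalty
```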
