no code implementations • 22 Sep 2021 • Nari Johnson, Sonali Parbhoo, Andrew Slavin Ross, Finale Doshi-Velez
Machine learning models that utilize patient data across time (rather than just the most recent measurements) have shown improved performance on many risk-stratification tasks in the intensive care unit.
1 code implementation • 9 Feb 2021 • Andrew Slavin Ross, Finale Doshi-Velez
In representation learning, there has been recent interest in developing algorithms to disentangle the ground-truth generative factors behind a dataset, and metrics to quantify how fully this occurs.
1 code implementation • 2 Feb 2021 • Andrew Slavin Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez
On synthetic datasets, we find that performance on this task differentiates entangled from disentangled models far more reliably than baseline approaches do.
1 code implementation • 4 Nov 2019 • Andrew Slavin Ross, Weiwei Pan, Leo Anthony Celi, Finale Doshi-Velez
Ensembles depend on diversity for improved performance.
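The role of diversity can be illustrated with a toy simulation (a minimal sketch with hypothetical numbers, not the paper's method): averaging an ensemble only reduces error when members' errors are not perfectly correlated.

```python
import random

random.seed(0)

def noisy_predictor(truth, own_noise, shared_noise, rho):
    # each member's error mixes a shared component (weight rho)
    # with an independent component (weight 1 - rho)
    return truth + rho * shared_noise + (1 - rho) * own_noise

truth = 1.0
n_members, n_trials = 10, 2000

def ensemble_mse(rho):
    # mean squared error of the ensemble average over many trials
    total = 0.0
    for _ in range(n_trials):
        shared = random.gauss(0, 1)
        preds = [noisy_predictor(truth, random.gauss(0, 1), shared, rho)
                 for _ in range(n_members)]
        avg = sum(preds) / n_members
        total += (avg - truth) ** 2
    return total / n_trials

diverse_mse = ensemble_mse(rho=0.0)     # independent (diverse) errors
correlated_mse = ensemble_mse(rho=1.0)  # identical (non-diverse) errors
```

With independent errors, averaging ten members shrinks the error variance by roughly a factor of ten; with fully correlated errors, averaging buys nothing.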
3 code implementations • 10 Jun 2019 • David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, Yoshua Bengio
Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help.
1 code implementation • 29 Sep 2018 • Andrew Slavin Ross
In this thesis, we explore the possibility of training machine learning models (with a particular focus on neural networks) using explanations themselves.
2 code implementations • 22 Jun 2018 • Andrew Slavin Ross, Weiwei Pan, Finale Doshi-Velez
There has been growing interest in developing accurate models that can also be explained to humans.
no code implementations • NeurIPS 2018 • Isaac Lage, Andrew Slavin Ross, Been Kim, Samuel J. Gershman, Finale Doshi-Velez
We often desire our models to be interpretable as well as accurate.
1 code implementation • 26 Nov 2017 • Andrew Slavin Ross, Finale Doshi-Velez
Deep neural networks have proven remarkably effective at solving many classification problems, but have been criticized recently for two major weaknesses: the reasons behind their predictions are uninterpretable, and the predictions themselves can often be fooled by small adversarial perturbations.
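The fragility to small perturbations can be sketched with a toy linear model (hypothetical weights and inputs, not the paper's method): an FGSM-style step of size ε against the sign of the weight vector flips the prediction while changing no feature by more than ε.

```python
import math

# hypothetical linear classifier on two features
w = [2.0, -3.0]
b = 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

x = [0.6, 0.2]  # confidently classified as class 1 (score = 0.7)
eps = 0.3

# the score's gradient w.r.t. x is just w, so stepping each coordinate
# by eps against sign(w) maximally lowers the score within the eps box
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

clean_label = predict(x)     # class 1
adv_label = predict(x_adv)   # flips to class 0
```

Each coordinate moves by only 0.3, yet the predicted class changes; deep networks exhibit the same behavior with far smaller, visually imperceptible perturbations.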
1 code implementation • 10 Mar 2017 • Andrew Slavin Ross, Michael C. Hughes, Finale Doshi-Velez
Neural networks are among the most accurate supervised learning methods in use today, but their opacity makes them difficult to trust in critical applications, especially when conditions in training differ from those in test.