Search Results for author: Peter Stella

Found 2 papers, 2 papers with code

Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values

2 code implementations • 30 Jun 2022 • Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana

Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions, potentially causing harms once deployed.

Additive models · BIG-bench Machine Learning · +1

GAM Changer: Editing Generalized Additive Models with Interactive Visualization

1 code implementation • 6 Dec 2021 • Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana

Recent strides in interpretable machine learning (ML) research reveal that models exploit undesirable patterns in the data to make predictions, potentially causing harms in deployment.

Additive models · Interpretable Machine Learning
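GAM Changer is about letting domain experts directly edit the learned shape functions of a generalized additive model. As a minimal sketch of that underlying idea (not the paper's tool or its API, and using entirely hypothetical bins and scores), a GAM's prediction is a sum of per-feature scores looked up from binned shape functions, so "editing" the model amounts to overwriting a bin's score:

```python
import numpy as np

# Toy one-feature GAM: the prediction is an intercept plus a score
# looked up from a binned shape function. All values are hypothetical.
bin_edges = np.array([0.0, 20.0, 40.0, 60.0, 80.0])    # feature bins (e.g. age)
shape_scores = np.array([-0.5, -0.2, 0.1, -0.3, 0.6])  # learned score per bin
intercept = 0.1

def gam_score(x):
    """Look up the shape-function score for a single feature value."""
    idx = np.searchsorted(bin_edges, x, side="right") - 1
    idx = np.clip(idx, 0, len(shape_scores) - 1)
    return intercept + shape_scores[idx]

# Suppose an expert spots an artifact: the score dips in the 60-80 bin
# even though risk should not decrease there. Editing the model means
# overwriting that bin's score, here by enforcing a monotone step.
before = gam_score(70.0)
shape_scores[3] = max(shape_scores[3], shape_scores[2])
after = gam_score(70.0)
```

Because each feature's contribution is an explicit lookup table, such edits change predictions in a transparent, localized way; the paper's contribution is an interactive visualization for making them safely.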
