Search Results for author: Michael Muelly

Found 4 papers, 3 papers with code

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation

no code implementations · ICLR 2022 · Julius Adebayo, Michael Muelly, Hal Abelson, Been Kim

We investigate whether three types of post hoc model explanations (feature attribution, concept activation, and training point ranking) are effective for detecting a model's reliance on spurious signals in the training data.
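As a rough illustration of the first explanation type mentioned above, the sketch below computes a gradient-based feature attribution for a toy linear model whose labels leak a spurious feature. This is not the paper's setup; the data, model, and thresholds are all invented for illustration.

```python
import numpy as np

# Illustrative sketch (not from the paper): input-gradient feature
# attribution for a linear model, used to probe whether the model
# relies on a spurious input dimension.

rng = np.random.default_rng(0)

# Toy data: feature 0 is the intended signal; feature 1 is a spurious
# correlate that happens to track the label in the training data.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # spurious feature leaks into labels

# Fit a linear model by least squares. For a linear model the input
# gradient is just the weight vector, so a simple per-feature
# attribution is |w_i * x_i| averaged over the data.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
attribution = np.mean(np.abs(X * w[:2]), axis=0)

print(attribution)  # comparable magnitudes suggest reliance on the spurious feature
```

Whether a practitioner who sees such an attribution map would actually notice the spurious reliance is exactly the kind of question the paper studies.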

Debugging Tests for Model Explanations

1 code implementation · NeurIPS 2020 · Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim

For several explanation methods, we assess their ability to: detect spurious correlation artifacts (data contamination), diagnose mislabeled training examples (data contamination), differentiate between a (partially) re-initialized model and a trained one (model contamination), and detect out-of-distribution inputs (test-time contamination).
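The "model contamination" test above can be sketched in miniature: if an explanation method produces similar attributions for a trained model and a randomly re-initialized one, it fails the debugging test. The code below is a hedged toy version using a linear model and gradient-times-input attribution, not the paper's actual protocol.

```python
import numpy as np

# Hedged sketch of a model-contamination debugging test: compare
# attributions from a trained model against those from a randomly
# re-initialized model. Setup and names are illustrative only.

rng = np.random.default_rng(1)

X = rng.normal(size=(100, 5))
true_w = np.array([2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_w

# "Trained" model: least-squares fit. "Re-initialized" model: random weights.
w_trained, *_ = np.linalg.lstsq(X, y, rcond=None)
w_random = rng.normal(size=5)

def attribution(w, x):
    # Gradient-times-input attribution for a linear model.
    return w * x

x = X[0]
a_trained = attribution(w_trained, x)
a_random = attribution(w_random, x)

# A useful explanation method should differentiate the two models:
# high similarity between these attributions would signal a failure.
corr = np.corrcoef(a_trained, a_random)[0, 1]
print(corr)
```

Here the trained model's attribution concentrates on feature 0 (the only informative one), while the random model's attribution is arbitrary, so the two maps should disagree.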

Generative Modeling for Small-Data Object Detection

1 code implementation · ICCV 2019 · Lanlan Liu, Michael Muelly, Jia Deng, Tomas Pfister, Li-Jia Li

This paper explores object detection in the small data regime, where only a limited number of annotated bounding boxes are available due to data rarity and annotation expense.

Tasks: Object Detection (+4 more)
