Search Results for author: Laëtitia Shao

Found 3 papers, 0 papers with code

Understanding Classifiers with Generative Models

no code implementations · 1 Jan 2021 · Laëtitia Shao, Yang Song, Stefano Ermon

Although deep neural networks are effective on supervised learning tasks, they have been shown to be brittle.

Two-sample testing

Understanding Classifier Mistakes with Generative Models

no code implementations · 5 Oct 2020 · Laëtitia Shao, Yang Song, Stefano Ermon

From this observation, we develop a detection criterion for samples on which a classifier is likely to fail at test time.

Two-sample testing
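The abstract above does not spell out the criterion, but one generic instance of the idea — flag inputs that a generative model of the training data assigns low density, since the classifier is less reliable off-distribution — can be sketched as follows. Everything here (the 1-D Gaussian stand-in for a generative model, the toy data, the minimum-density threshold) is an illustrative assumption, not the paper's actual method.

```python
# Hedged sketch: flag likely classifier mistakes as low-density samples
# under a generative model fit to the training data. The Gaussian model,
# the data, and the threshold rule are all hypothetical stand-ins.
import math

train = [0.1, 0.2, -0.1, 0.0, 0.15, -0.05]  # hypothetical training features

# Fit a 1-D Gaussian as a stand-in generative model (MLE parameters).
mu = sum(train) / len(train)
var = sum((x - mu) ** 2 for x in train) / len(train)

def log_density(x):
    """Log-density of x under the fitted Gaussian."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

# Flag samples whose log-density falls below that of every training point.
threshold = min(log_density(x) for x in train)

def likely_mistake(x):
    return log_density(x) < threshold

print(likely_mistake(0.05), likely_mistake(5.0))  # in-distribution vs. far away
```

A real pipeline would replace the Gaussian with a deep generative model and score learned features rather than raw scalars; the thresholding logic stays the same.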

Neighbourhood Distillation: On the benefits of non end-to-end distillation

no code implementations · 2 Oct 2020 · Laëtitia Shao, Max Moroz, Elad Eban, Yair Movshovitz-Attias

Instead of distilling a model end-to-end, we propose to split it into smaller sub-networks - also called neighbourhoods - that are then trained independently.

Knowledge Distillation · Neural Architecture Search
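The splitting idea in the abstract above can be illustrated with a toy sketch: cache the teacher's intermediate activations once, then fit each student "neighbourhood" independently against its teacher block, with no end-to-end pass. The linear blocks, closed-form fitting, and toy data are assumptions for illustration; the paper works with real neural sub-networks.

```python
# Toy sketch of non end-to-end distillation (assumption: 1-D linear
# "blocks" standing in for neural sub-networks / neighbourhoods).

def teacher_block1(x):
    return 2.0 * x + 1.0

def teacher_block2(h):
    return -0.5 * h + 3.0

inputs = [0.0, 1.0, 2.0, 3.0]

# Cache teacher activations once: they are the inputs and targets for
# every neighbourhood, so the student blocks can be trained independently.
h1 = [teacher_block1(x) for x in inputs]   # targets for student block 1
h2 = [teacher_block2(h) for h in h1]       # targets for student block 2

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b (closed form, no ML library)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Each student neighbourhood is fit independently -- no end-to-end pass.
a1, b1 = fit_linear(inputs, h1)
a2, b2 = fit_linear(h1, h2)

def student(x):
    # Stitch the independently trained blocks back together.
    return a2 * (a1 * x + b1) + b2

print([round(student(x), 6) for x in inputs])  # matches teacher outputs h2
```

Because each neighbourhood trains against cached activations, the blocks can be optimized in parallel, which is the practical appeal of the non end-to-end setup.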
