no code implementations • 1 Jan 2021 • Laëtitia Shao, Yang Song, Stefano Ermon
Although deep neural networks are effective on supervised learning tasks, they have been shown to be brittle.
no code implementations • 2 Oct 2020 • Laëtitia Shao, Max Moroz, Elad Eban, Yair Movshovitz-Attias
Instead of distilling a model end-to-end, we propose to split it into smaller sub-networks, also called neighbourhoods, that are then trained independently.
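As a rough illustration of splitting a model into independently trained sub-networks, here is a minimal sketch with two linear blocks, where each student block is fit separately against the teacher's activations. The variable names and the least-squares fit standing in for training are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": two linear blocks composed end-to-end.
W1 = rng.normal(size=(8, 6))
W2 = rng.normal(size=(6, 4))

X = rng.normal(size=(64, 8))
H = X @ W1          # teacher's intermediate activations
Y = H @ W2          # teacher's final outputs

# Train each student sub-network independently: block 1 regresses the
# teacher's intermediate activations from the inputs, block 2 regresses
# the final outputs from those activations (least-squares stand-in for
# gradient training).
S1, *_ = np.linalg.lstsq(X, H, rcond=None)
S2, *_ = np.linalg.lstsq(H, Y, rcond=None)

# Composing the two independently trained blocks recovers the teacher,
# even though neither block ever saw the end-to-end objective.
err = np.abs(X @ S1 @ S2 - Y).max()
```

Because each block only needs the teacher's activations at its boundaries, the blocks can be trained in parallel rather than sequentially.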
no code implementations • 5 Oct 2020 • Laëtitia Shao, Yang Song, Stefano Ermon
From this observation, we develop a detection criterion for samples on which a classifier is likely to fail at test time.
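One common baseline for this kind of criterion is to flag test samples whose maximum softmax probability falls below a threshold; the sketch below uses that baseline purely for illustration, and the threshold value and function names are assumptions, not the criterion developed in the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_unreliable(logits, threshold=0.7):
    """Flag samples whose maximum softmax probability is below
    `threshold` as likely classifier failures (illustrative baseline)."""
    confidence = softmax(logits).max(axis=-1)
    return confidence < threshold

logits = np.array([[4.0, 0.0, 0.0],    # peaked logits: confident
                   [0.6, 0.5, 0.4]])   # near-uniform: low confidence
flags = flag_unreliable(logits)        # → [False, True]
```

Only the second sample is flagged, since its near-uniform logits give a maximum softmax probability well below the threshold.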