no code implementations • 28 May 2023 • Stephan Rabanser, Anvith Thudi, Abhradeep Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot
Training reliable deep learning models which avoid making overconfident but incorrect predictions is a longstanding challenge.
no code implementations • 25 Jul 2022 • Adam Dziedzic, Stephan Rabanser, Mohammad Yaghini, Armin Ale, Murat A. Erdogdu, Nicolas Papernot
We introduce $p$-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structures of its intermediate hidden representations to compute $p$-values associated with the end-to-end model prediction.
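The core idea of deriving per-layer $p$-values from the similarity structure of hidden representations can be sketched with a conformal-style nearest-neighbor nonconformity score. This is a minimal illustration, not the paper's actual procedure; the function names and the choice of mean k-nearest-neighbor distance as the nonconformity measure are assumptions for this sketch.

```python
import numpy as np

def knn_nonconformity(rep, calib_reps, k=5):
    """Nonconformity of one hidden representation: mean Euclidean
    distance to its k nearest calibration representations."""
    dists = np.linalg.norm(calib_reps - rep, axis=1)
    return np.mean(np.sort(dists)[:k])

def layer_p_value(calib_scores, test_score):
    """Conformal p-value: fraction of calibration nonconformity
    scores at least as extreme as the test score (with +1 smoothing)."""
    return (1 + np.sum(calib_scores >= test_score)) / (1 + len(calib_scores))

rng = np.random.default_rng(0)
calib_reps = rng.normal(size=(100, 8))           # stand-in hidden representations
calib_scores = np.array([
    knn_nonconformity(r, np.delete(calib_reps, i, axis=0))
    for i, r in enumerate(calib_reps)
])
test_rep = rng.normal(size=8)
p = layer_p_value(calib_scores, knn_nonconformity(test_rep, calib_reps))
```

A small `p` at some layer signals that the input's intermediate representation is atypical relative to the calibration data, which is the kind of evidence the end-to-end procedure aggregates.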
no code implementations • 29 Jun 2022 • Stephan Rabanser, Tim Januschowski, Kashif Rasul, Oliver Borchert, Richard Kurle, Jan Gasthaus, Michael Bohlke-Schneider, Nicolas Papernot, Valentin Flunkert
We introduce a novel, practically relevant variation of the anomaly detection problem in multivariate time series: intrinsic anomaly detection.
no code implementations • 26 May 2022 • Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Nicolas Papernot
Selective classification is the task of rejecting inputs on which a model would predict incorrectly, trading off input-space coverage against model accuracy.
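The coverage/accuracy trade-off can be made concrete with the simplest selective classifier: abstain whenever the model's maximum softmax confidence falls below a threshold. This is a generic baseline sketch, not the method studied in the paper; the threshold rule and function name are assumptions.

```python
import numpy as np

def selective_metrics(probs, labels, tau):
    """Accept inputs whose max softmax confidence is >= tau;
    return (coverage, accuracy on accepted inputs)."""
    conf = probs.max(axis=1)
    accept = conf >= tau
    coverage = accept.mean()
    if not accept.any():
        return coverage, float("nan")   # no predictions made
    acc = (probs.argmax(axis=1)[accept] == labels[accept]).mean()
    return coverage, acc

probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.55, 0.45]])
labels = np.array([0, 1, 0])
coverage, acc = selective_metrics(probs, labels, tau=0.8)
```

Raising `tau` shrinks coverage but tends to raise accuracy on the retained inputs, which is exactly the trade-off selective classification navigates.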
no code implementations • 29 Sep 2021 • Stephan Rabanser, Tim Januschowski, Kashif Rasul, Oliver Borchert, Richard Kurle, Jan Gasthaus, Michael Bohlke-Schneider, Nicolas Papernot, Valentin Flunkert
Modern time series corpora, in particular those coming from sensor-based data, exhibit characteristics that have so far not been adequately addressed in the literature on representation learning for time series.
no code implementations • 20 May 2020 • Stephan Rabanser, Tim Januschowski, Valentin Flunkert, David Salinas, Jan Gasthaus
In particular, we investigate the effectiveness of several forms of data binning, i.e., converting real-valued time series into categorical ones, when combined with feed-forward, recurrent, and convolution-based sequence models.
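One common form of such binning is quantile binning, where bin edges are placed at empirical quantiles of the series so each category is roughly equally populated. This is a hedged sketch of one plausible binning scheme, not necessarily the exact variants compared in the paper.

```python
import numpy as np

def quantile_bin(series, num_bins=4):
    """Map a real-valued series to integer bin indices in
    [0, num_bins), using empirical quantiles as bin edges."""
    # interior edges at the 1/num_bins, 2/num_bins, ... quantiles
    edges = np.quantile(series, np.linspace(0, 1, num_bins + 1)[1:-1])
    return np.digitize(series, edges)

series = np.arange(8.0)            # toy real-valued series
binned = quantile_bin(series, 4)   # categorical version of the series
```

The resulting integer sequence can then be fed to a sequence model via an embedding layer, turning forecasting into a classification-style problem over bins.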
1 code implementation • NeurIPS 2019 • Stephan Rabanser, Stephan Günnemann, Zachary C. Lipton
We might hope that when faced with unexpected inputs, well-designed software systems would fire off warnings.
1 code implementation • 29 Nov 2017 • Stephan Rabanser, Oleksandr Shchur, Stephan Günnemann
Tensors are multidimensional arrays of numerical values and therefore generalize matrices to multiple dimensions.
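The matrix-to-tensor generalization is easy to see with multidimensional arrays: a matrix is an order-2 tensor, and adding axes yields higher-order tensors. A minimal NumPy illustration (the variable names are ours):

```python
import numpy as np

matrix = np.arange(6).reshape(2, 3)       # order-2 tensor (2 x 3)
tensor = np.arange(24).reshape(2, 3, 4)   # order-3 tensor (2 x 3 x 4)

# Mode-1 unfolding: flatten all axes except the first,
# turning the order-3 tensor back into a matrix.
unfolded = tensor.reshape(tensor.shape[0], -1)
```

Unfoldings like this are the basic building block of tensor decompositions, which factor higher-order data the way SVD factors matrices.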