Search Results for author: Stephan Rabanser

Found 8 papers, 2 papers with code

Training Private Models That Know What They Don't Know

no code implementations · 28 May 2023 · Stephan Rabanser, Anvith Thudi, Abhradeep Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot

Training reliable deep learning models which avoid making overconfident but incorrect predictions is a longstanding challenge.

$p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations

no code implementations · 25 Jul 2022 · Adam Dziedzic, Stephan Rabanser, Mohammad Yaghini, Armin Ale, Murat A. Erdogdu, Nicolas Papernot

We introduce $p$-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structures of its intermediate hidden representations to compute $p$-values associated with the end-to-end model prediction.

Tasks: Autonomous Driving · Out-of-Distribution Detection · +1 more
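The abstract describes computing $p$-values from the similarity structure of hidden representations. A minimal sketch of the underlying idea, using a standard conformal $p$-value over $k$-nearest-neighbor distances at a single layer (the names, the toy data, and the single-layer simplification are illustrative assumptions, not the paper's exact procedure, which aggregates across layers):

```python
import numpy as np

def conformal_pvalue(calib_scores, test_score):
    # Conformal p-value: fraction of held-out calibration scores
    # at least as extreme (large) as the test score.
    n = len(calib_scores)
    return (1 + np.sum(calib_scores >= test_score)) / (n + 1)

def knn_nonconformity(reps, train_reps, k=5):
    # Nonconformity score = mean Euclidean distance to the k
    # nearest training representations at one hidden layer.
    d = np.linalg.norm(train_reps[None, :, :] - reps[:, None, :], axis=-1)
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

# Toy 2-D "hidden representations" drawn from a standard normal.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 2))   # training representations
calib = rng.normal(0.0, 1.0, size=(50, 2))    # held-out calibration set
test_in = np.zeros((1, 2))                    # in-distribution point
test_out = np.full((1, 2), 6.0)               # far-away (OOD-like) point

calib_scores = knn_nonconformity(calib, train)
p_in = conformal_pvalue(calib_scores, knn_nonconformity(test_in, train)[0])
p_out = conformal_pvalue(calib_scores, knn_nonconformity(test_out, train)[0])
# The far-away point receives a much smaller p-value, so a
# significance threshold on p can flag it as out-of-distribution.
```

A small $p$-value means the input's representations look unlike anything seen in training, which is the statistical-testing view of OOD detection the title refers to.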

Intrinsic Anomaly Detection for Multi-Variate Time Series

no code implementations · 29 Jun 2022 · Stephan Rabanser, Tim Januschowski, Kashif Rasul, Oliver Borchert, Richard Kurle, Jan Gasthaus, Michael Bohlke-Schneider, Nicolas Papernot, Valentin Flunkert

We introduce a novel, practically relevant variation of the anomaly detection problem in multi-variate time series: intrinsic anomaly detection.

Tasks: Anomaly Detection · Navigate · +3 more

Selective Classification Via Neural Network Training Dynamics

no code implementations · 26 May 2022 · Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Nicolas Papernot

Selective classification is the task of rejecting inputs on which a model would predict incorrectly, trading off input-space coverage against model accuracy.

Tasks: Classification
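The coverage/accuracy trade-off at the heart of selective classification can be shown with the standard softmax-response baseline: abstain whenever the model's top confidence falls below a threshold. This is only a sketch of the trade-off itself on simulated confidences, not the paper's training-dynamics-based selection score:

```python
import numpy as np

def selective_metrics(confidences, correct, tau):
    # Accept an input only if confidence >= tau; report the
    # fraction accepted (coverage) and accuracy on accepted inputs.
    accept = confidences >= tau
    coverage = accept.mean()
    accuracy = correct[accept].mean() if accept.any() else 1.0
    return coverage, accuracy

# Simulated predictions: higher confidence -> more likely correct.
rng = np.random.default_rng(1)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = rng.uniform(size=1000) < conf

cov_low, acc_low = selective_metrics(conf, correct, tau=0.55)
cov_high, acc_high = selective_metrics(conf, correct, tau=0.90)
# Raising tau shrinks coverage but raises selective accuracy,
# tracing out the risk-coverage curve used to evaluate selectors.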

Context-invariant, multi-variate time series representations

no code implementations · 29 Sep 2021 · Stephan Rabanser, Tim Januschowski, Kashif Rasul, Oliver Borchert, Richard Kurle, Jan Gasthaus, Michael Bohlke-Schneider, Nicolas Papernot, Valentin Flunkert

Modern time series corpora, in particular those coming from sensor-based data, exhibit characteristics that have so far not been adequately addressed in the literature on representation learning for time series.

Tasks: Contrastive Learning · Representation Learning · +2 more

The Effectiveness of Discretization in Forecasting: An Empirical Study on Neural Time Series Models

no code implementations · 20 May 2020 · Stephan Rabanser, Tim Januschowski, Valentin Flunkert, David Salinas, Jan Gasthaus

In particular, we investigate the effectiveness of several forms of data binning, i.e., converting real-valued time series into categorical ones, when combined with feed-forward networks, recurrent neural networks, and convolution-based sequence models.

Tasks: Time Series · Time Series Analysis
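Data binning as described in the abstract can be sketched in a few lines: quantile edges estimated from the series turn each real value into a token a categorical model (e.g. a softmax output head) can predict. The quantile choice and bin count here are illustrative assumptions; the paper compares several binning schemes:

```python
import numpy as np

def quantile_bin(series, n_bins=10):
    # Map a real-valued series to integer bin indices in
    # [0, n_bins - 1], using empirical quantiles as bin edges
    # so each bin holds roughly the same number of observations.
    edges = np.quantile(series, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(series, edges)

# A noisy sine wave as a stand-in for a real-valued series.
rng = np.random.default_rng(2)
series = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.1 * rng.normal(size=500)

tokens = quantile_bin(series)
# tokens are categorical symbols; forecasts over them can be
# mapped back to values, e.g. via each bin's training mean.
```

Forecasting the tokens instead of raw values trades resolution for a flexible, multimodal predictive distribution over bins.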
