Search Results for author: Joachim Sicking

Found 12 papers, 3 papers with code

Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog

no code implementations · 20 Jun 2023 · Maximilian Poretschkin, Anna Schmitz, Maram Akila, Linara Adilova, Daniel Becker, Armin B. Cremers, Dirk Hecker, Sebastian Houben, Michael Mock, Julia Rosenzweig, Joachim Sicking, Elena Schulz, Angelika Voss, Stefan Wrobel

Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society.

A Survey on Uncertainty Toolkits for Deep Learning

no code implementations · 2 May 2022 · Maximilian Pintz, Joachim Sicking, Maximilian Poretschkin, Maram Akila

The success of deep learning (DL) fostered the creation of unifying frameworks such as TensorFlow and PyTorch as much as it was driven by their creation in return.

Uncertainty Quantification

Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities

no code implementations · 22 Apr 2021 · Julia Rosenzweig, Joachim Sicking, Sebastian Houben, Michael Mock, Maram Akila

To address this constraint, we present an approach to detect learned shortcuts using an interpretable-by-design network as a proxy to the black-box model of interest.

Autonomous Driving

Wasserstein Dropout

1 code implementation · 23 Dec 2020 · Joachim Sicking, Maram Akila, Maximilian Pintz, Tim Wirtz, Asja Fischer, Stefan Wrobel

Despite its importance for safe machine learning, uncertainty quantification for neural networks is far from solved.

Object Detection · Regression +1

DenseHMM: Learning Hidden Markov Models by Learning Dense Representations

1 code implementation · 17 Dec 2020 · Joachim Sicking, Maximilian Pintz, Maram Akila, Tim Wirtz

We propose two optimization schemes that make use of this: a modification of the Baum-Welch algorithm and a direct co-occurrence optimization.
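The abstract mentions learning an HMM through dense representations. One way to picture such a parameterization, sketched below under assumptions of my own (the embedding names `U`, `Z` and the softmax link are illustrative, not the paper's exact formulation): each hidden state gets a dense vector, and transition probabilities arise from a softmax over embedding dot products rather than from a free stochastic matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, dim = 4, 3

# Hypothetical dense state embeddings (illustrative names, not from the paper).
U = rng.normal(size=(n_states, dim))  # "outgoing" state embeddings
Z = rng.normal(size=(n_states, dim))  # "incoming" state embeddings

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Transition matrix: row i is a softmax over the scores u_i . z_j,
# so every row is automatically a valid probability distribution.
A = softmax(U @ Z.T, axis=1)
assert np.allclose(A.sum(axis=1), 1.0)
```

Because the matrix is generated from low-dimensional embeddings, gradient-based schemes can optimize `U` and `Z` directly instead of the full transition table.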

Characteristics of Monte Carlo Dropout in Wide Neural Networks

no code implementations · 10 Jul 2020 · Joachim Sicking, Maram Akila, Tim Wirtz, Sebastian Houben, Asja Fischer

Monte Carlo (MC) dropout is one of the state-of-the-art approaches for uncertainty estimation in neural networks (NNs).

Bayesian Inference · Gaussian Processes
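The MC dropout idea mentioned in the snippet can be sketched in a few lines: keep dropout active at test time, run several stochastic forward passes, and read the spread of the outputs as an uncertainty estimate. The toy network below (random weights, a single hidden layer) is a minimal NumPy illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer regression network with random weights (illustration only).
W1 = rng.normal(size=(1, 64)); b1 = np.zeros(64)
W2 = rng.normal(size=(64, 1)); b2 = np.zeros(1)

def forward(x, p=0.5):
    """One stochastic forward pass; dropout stays active at test time."""
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    mask = rng.random(h.shape) > p        # fresh Bernoulli dropout mask
    h = h * mask / (1.0 - p)              # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.3]])
samples = np.stack([forward(x) for _ in range(200)])  # T=200 MC passes
mean, std = samples.mean(axis=0), samples.std(axis=0)
print(mean.shape, std.shape)  # (1, 1) (1, 1)
```

The predictive mean approximates the deterministic network's output, while the standard deviation across passes serves as the uncertainty estimate that the paper analyzes in the wide-network limit.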
