Search Results for author: Christian Tomani

Found 8 papers, 5 papers with code

Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations

no code implementations • 16 Apr 2024 • Christian Tomani, Kamalika Chaudhuri, Ivan Evtimov, Daniel Cremers, Mark Ibrahim

A major barrier to the practical deployment of large language models (LLMs) is their lack of reliability.

Question Answering
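
The entry above concerns abstention driven by the model's own uncertainty. As a generic illustration of that idea (not the paper's actual method), a common baseline estimates uncertainty from agreement across sampled answers and abstains when agreement is low; the names `generate`, `n_samples`, and `threshold` below are hypothetical:

```python
# Minimal sketch of uncertainty-based abstention (illustrative only, not the
# paper's method). Assumes `generate` samples one answer string from an LLM;
# agreement across samples serves as a crude uncertainty proxy.
from collections import Counter

def answer_or_abstain(generate, prompt, n_samples=10, threshold=0.7):
    """Return the majority answer, or None (abstain) when agreement is low."""
    samples = [generate(prompt) for _ in range(n_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    agreement = count / n_samples  # fraction of samples that agree
    return answer if agreement >= threshold else None  # None signals abstention
```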

Beyond In-Domain Scenarios: Robust Density-Aware Calibration

1 code implementation • 10 Feb 2023 • Christian Tomani, Futa Waseda, Yuesong Shen, Daniel Cremers

While existing post-hoc calibration methods achieve impressive results on in-domain test datasets, they are limited by their inability to yield reliable uncertainty estimates in domain-shift and out-of-domain (OOD) scenarios.

What Makes Graph Neural Networks Miscalibrated?

1 code implementation • 12 Oct 2022 • Hans Hao-Hsun Hsu, Yuesong Shen, Christian Tomani, Daniel Cremers

Furthermore, based on the insights from this study, we design a novel calibration method named Graph Attention Temperature Scaling (GATS), which is tailored for calibrating graph neural networks.

Graph Attention • Multi-class Classification
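
GATS assigns each node its own temperature via a graph-attention module. Purely as a toy illustration of node-wise temperature scaling (not the authors' GATS implementation, which produces temperatures from attention so they generalize to unseen nodes), one could fit per-node temperatures directly on a validation mask; `fit_nodewise_temperatures` and `val_mask` are hypothetical names:

```python
# Toy sketch of node-wise temperature scaling for GNN calibration
# (illustrative only; GATS instead predicts temperatures with attention).
import torch
import torch.nn.functional as F

def fit_nodewise_temperatures(logits, labels, val_mask, steps=200, lr=0.05):
    """Fit one temperature per node by minimizing validation NLL.
    logits: [num_nodes, num_classes]; labels: [num_nodes]; val_mask: bool mask.
    """
    log_t = torch.zeros(logits.size(0), requires_grad=True)  # log-temperatures
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        scaled = logits / log_t.exp().unsqueeze(-1)  # per-node scaling
        loss = F.cross_entropy(scaled[val_mask], labels[val_mask])
        loss.backward()
        opt.step()
    return log_t.exp().detach()
```

Note that this toy version only receives gradients for nodes in the validation mask; producing temperatures from a learned attention module is what lets the real method calibrate unseen nodes.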

CHALLENGER: Training with Attribution Maps

no code implementations • 30 May 2022 • Christian Tomani, Daniel Cremers

Regularization is key in deep learning, especially when training complex models on relatively small datasets.

Time Series • Time Series Analysis

Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration

1 code implementation • 24 Feb 2021 • Christian Tomani, Daniel Cremers, Florian Buettner

We address the problem of uncertainty calibration and introduce a novel calibration method, Parametrized Temperature Scaling (PTS).
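
For context, PTS builds on classic temperature scaling, which rescales logits by a single scalar T fitted on held-out data (PTS instead makes the temperature input-dependent). The sketch below shows only the standard scalar baseline; `fit_temperature` is a hypothetical name:

```python
# Minimal sketch of classic scalar temperature scaling, the baseline that
# PTS generalizes with input-dependent temperatures (illustrative only).
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=100, lr=0.01):
    """Fit a single scalar temperature on held-out validation logits."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()
```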

Post-hoc Uncertainty Calibration for Domain Drift Scenarios

1 code implementation • CVPR 2021 • Christian Tomani, Sebastian Gruber, Muhammed Ebrar Erdem, Daniel Cremers, Florian Buettner

First, we show that existing post-hoc calibration methods yield highly over-confident predictions under domain shift.

Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration

1 code implementation • 20 Dec 2020 • Christian Tomani, Florian Buettner

That is, it is crucial for predictive models to be uncertainty-aware and to yield well-calibrated (and thus trustworthy) predictions both for in-domain samples and under domain shift.

Decision Making
