Search Results for author: Dennis Ulmer

Found 16 papers, 11 papers with code

Calibrating Large Language Models Using Their Generations Only

1 code implementation • 9 Mar 2024 • Dennis Ulmer, Martin Gubri, Hwaran Lee, Sangdoo Yun, Seong Joon Oh

As large language models (LLMs) are increasingly deployed in user-facing applications, building trust and maintaining safety by accurately quantifying a model's confidence in its prediction become even more important.

Question Answering · Text Generation
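
As a generic illustration of "quantifying a model's confidence" (not the paper's generations-only calibrator), a minimal sketch of expected calibration error over equal-width confidence bins might look like the following; the bin count and toy inputs are illustrative assumptions.

```python
# Minimal sketch: expected calibration error (ECE) over equal-width confidence bins.
# Generic illustration of confidence quantification, not the paper's method.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities in [0, 1]; correct: 0/1 outcomes."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by the fraction of samples in this bin
    return ece

# Toy usage: well-calibrated confidences yield a small ECE.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.3], [1, 1, 0, 0]))
```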

TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification

2 code implementations • 20 Feb 2024 • Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh

Large Language Model (LLM) services and models often come with legal rules on who can use them and how they must use them.

Language Modelling · Large Language Model

Non-Exchangeable Conformal Language Generation with Nearest Neighbors

1 code implementation • 1 Feb 2024 • Dennis Ulmer, Chrysoula Zerva, André F. T. Martins

Conformal prediction is an attractive framework for providing predictions imbued with statistical guarantees; however, its application to text generation is challenging since any i.i.d.

Conformal Prediction · Language Modelling · +2

Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk

no code implementations • 10 Jan 2024 • Dennis Ulmer, Elman Mansimov, Kaixiang Lin, Justin Sun, Xibin Gao, Yi Zhang

This metric is used to filter the generated conversational data that is fed back into the LLM for training.
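
The snippet describes filtering self-generated conversations with a quality metric before reusing them for training; a minimal hypothetical sketch of that filtering step could look like the following, where `score_dialogue` and the threshold are purely illustrative stand-ins rather than the paper's actual metric.

```python
# Hypothetical sketch of the filtering step: keep only self-generated dialogues
# whose quality score clears a threshold before they are reused as training data.
# `score_dialogue` and the 0.7 threshold are illustrative assumptions.

def score_dialogue(dialogue: list[str]) -> float:
    """Placeholder quality metric, e.g. the fraction of turns marked as completed."""
    completed = sum(1 for turn in dialogue if "[done]" in turn.lower())
    return completed / max(len(dialogue), 1)

def filter_for_finetuning(dialogues: list[list[str]], threshold: float = 0.7) -> list[list[str]]:
    """Return only the generated conversations deemed good enough to train on."""
    return [d for d in dialogues if score_dialogue(d) >= threshold]
```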

Non-Exchangeable Conformal Risk Control

1 code implementation • 2 Oct 2023 • António Farinhas, Chrysoula Zerva, Dennis Ulmer, André F. T. Martins

Split conformal prediction has recently sparked great interest due to its ability to provide formally guaranteed uncertainty sets or intervals for predictions made by black-box neural models, ensuring a predefined probability of containing the actual ground truth.

Conformal Prediction · Time Series
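
The abstract summarizes split conformal prediction; as a rough sketch of the standard exchangeable recipe it builds on (not the non-exchangeable risk-control extension the paper develops), the calibration-quantile step might look like this, assuming NumPy ≥ 1.22 for the `method` argument.

```python
# Rough sketch of standard split conformal prediction for classification:
# calibrate a score threshold on held-out data so that prediction sets contain
# the true label with probability ~ 1 - alpha. Exchangeable baseline only.
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """cal_probs: (n, k) softmax outputs on calibration data; cal_labels: (n,) ints."""
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(test_probs, threshold):
    """test_probs: (k,) softmax output for one example; returns indices of kept labels."""
    return np.where(1.0 - test_probs <= threshold)[0]
```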

Uncertainty in Natural Language Generation: From Theory to Applications

no code implementations • 28 Jul 2023 • Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz

Recent advances in powerful Language Models have allowed Natural Language Generation (NLG) to emerge as an important technology that can not only perform traditional tasks like summarisation or translation, but also serve as a natural language interface to a variety of applications.

Active Learning · Text Generation

Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity

1 code implementation • 20 Oct 2022 • Dennis Ulmer, Jes Frellsen, Christian Hardmeier

We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages.

Experimental Standards for Deep Learning in Natural Language Processing Research

1 code implementation • 13 Apr 2022 • Dennis Ulmer, Elisa Bassignana, Max Müller-Eberstein, Daniel Varab, Mike Zhang, Rob van der Goot, Christian Hardmeier, Barbara Plank

The field of Deep Learning (DL) has undergone explosive growth during the last decade, with a substantial impact on Natural Language Processing (NLP) as well.

Prior and Posterior Networks: A Survey on Evidential Deep Learning Methods For Uncertainty Estimation

no code implementations • 6 Oct 2021 • Dennis Ulmer, Christian Hardmeier, Jes Frellsen

Popular approaches for quantifying predictive uncertainty in deep neural networks often involve distributions over weights or multiple models, for instance via Markov Chain sampling, ensembling, or Monte Carlo dropout.
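
The snippet lists ensembling and Monte Carlo dropout among common uncertainty baselines; a minimal PyTorch-style sketch of MC dropout (one of the surveyed baselines, not an evidential method) could look like the following, with the number of stochastic passes an arbitrary choice.

```python
# Minimal sketch of Monte Carlo dropout: keep dropout active at test time and
# average several stochastic forward passes; the spread across passes serves as
# an uncertainty estimate. Generic baseline, not an evidential deep learning method.
import torch

def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, n_samples: int = 20):
    model.train()  # keep dropout layers stochastic (caution: also affects batch norm)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)              # averaged predictive distribution
    uncertainty = probs.var(dim=0).sum(dim=-1)  # spread across passes per input
    return mean_probs, uncertainty
```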

Recoding latent sentence representations -- Dynamic gradient-based activation modification in RNNs

1 code implementation • 3 Jan 2021 • Dennis Ulmer

In Recurrent Neural Networks (RNNs), encoding information in a suboptimal or erroneous way can impact the quality of representations based on later elements in the sequence, and subsequently lead to wrong predictions and worse model performance.

Language Modelling · Sentence

Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection

1 code implementation • 9 Dec 2020 • Dennis Ulmer, Giovanni Cinà

A crucial requirement for reliable deployment of deep learning models for safety-critical applications is the ability to identify out-of-distribution (OOD) data points, samples which differ from the training data and on which a model might underperform.

General Classification · Out of Distribution (OOD) Detection
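
The snippet defines OOD detection as flagging inputs that differ from the training data; one widely used (and, per the paper's argument, often unreliable) baseline scores inputs by their maximum softmax probability, roughly as in this sketch, where the flagging threshold is purely illustrative.

```python
# Sketch of the common maximum-softmax-probability baseline for OOD detection:
# inputs the classifier is least confident about are flagged as out-of-distribution.
# The paper argues that such ReLU-classifier uncertainty is not reliable for this.
import numpy as np

def msp_ood_scores(probs: np.ndarray) -> np.ndarray:
    """probs: (n, k) softmax outputs; higher returned score = more OOD-like."""
    return 1.0 - probs.max(axis=1)

def flag_ood(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Boolean mask of inputs whose OOD score exceeds an illustrative threshold."""
    return msp_ood_scores(probs) > threshold
```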

Trust Issues: Uncertainty Estimation Does Not Enable Reliable OOD Detection On Medical Tabular Data

1 code implementation • 6 Nov 2020 • Dennis Ulmer, Lotta Meijerink, Giovanni Cinà

When deploying machine learning models in high-stakes real-world environments such as health care, it is crucial to accurately assess the uncertainty concerning a model's prediction on abnormal inputs.

Out of Distribution (OOD) Detection

Assessing incrementality in sequence-to-sequence models

1 code implementation • WS 2019 • Dennis Ulmer, Dieuwke Hupkes, Elia Bruni

Since their inception, encoder-decoder models have successfully been applied to a wide array of problems in computational linguistics.

Decoder

On the Realization of Compositionality in Neural Networks

no code implementations • WS 2019 • Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni

We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task.
