no code implementations • 29 Apr 2024 • Stefan F. Schouten, Peter Bloem, Ilia Markov, Piek Vossen
Recent work has demonstrated that the latent spaces of large language models (LLMs) contain directions predictive of the truth of sentences.
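The idea of a latent "truth direction" can be illustrated with a toy linear probe. This is a minimal sketch, not the authors' method: the hidden states below are synthetic stand-ins for LLM activations, with a truth-correlated direction planted by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16                                   # hidden-state dimensionality (assumed)
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)     # planted "truth direction"

n = 200
labels = rng.integers(0, 2, size=n)      # 1 = true sentence, 0 = false
# Synthetic hidden states, shifted along the planted direction by truth value
states = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, true_dir) * 2.0

# Logistic-regression probe trained by plain gradient ascent;
# its weight vector recovers a direction predictive of truth.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-states @ w))
    w += 0.1 * states.T @ (labels - p) / n

learned_dir = w / np.linalg.norm(w)
alignment = abs(learned_dir @ true_dir)      # cosine similarity with planted direction
accuracy = float(((states @ w > 0) == labels).mean())
```

On this synthetic data the probe's weight vector aligns closely with the planted direction and classifies truth labels well above chance, which is the basic phenomenon the sentence describes for real LLM latent spaces.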
1 code implementation • 23 Oct 2023 • Stefan F. Schouten, Peter Bloem, Ilia Markov, Piek Vossen
However, no resources exist to evaluate how well large language models (LLMs) can use explicit reasoning to resolve ambiguity in language.
1 code implementation • 16 Jun 2023 • Stefan F. Schouten, Baran Barbarestani, Wondimagegnhue Tufa, Piek Vossen, Ilia Markov
Given the dynamic nature of toxic language use, automated methods for detecting toxic spans are likely to encounter distributional shift.
1 code implementation • 9 May 2022 • Michael Neely, Stefan F. Schouten, Maurits Bleeker, Ana Lucic
The validity of "attention as explanation" has so far been evaluated by computing the rank correlation between attention-based explanations and existing feature attribution explanations using LSTM-based models.
1 code implementation • 7 May 2021 • Michael Neely, Stefan F. Schouten, Maurits J. R. Bleeker, Ana Lucic
By computing the rank correlation between attention weights and feature-additive explanation methods, previous analyses either invalidate or support the role of attention-based explanations as a faithful and plausible measure of salience.
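The comparison described above can be sketched with a small, self-contained example. This is illustrative only: the per-token scores are made up, and Spearman's rho is used as one concrete choice of rank correlation between an attention-based explanation and a feature-additive one.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation between two score vectors (no tie handling)."""
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks of a
    rb = np.argsort(np.argsort(b)).astype(float)  # ranks of b
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))

# Hypothetical per-token saliency scores for one sentence (not from any real model)
attention   = np.array([0.05, 0.40, 0.10, 0.30, 0.15])  # attention weights
attribution = np.array([0.01, 0.55, 0.04, 0.25, 0.15])  # feature-additive scores

rho = spearman(attention, attribution)
```

Here the two explanations rank the tokens identically, so `rho` is 1.0; a high correlation is the kind of evidence previous analyses read as supporting attention as a faithful measure of salience, and a low one as invalidating it.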