Search Results for author: Reto Gubelmann

Found 3 papers, 0 papers with code

Uncovering More Shallow Heuristics: Probing the Natural Language Inference Capacities of Transformer-Based Pre-Trained Language Models Using Syllogistic Patterns

no code implementations • 19 Jan 2022 • Reto Gubelmann, Siegfried Handschuh

In this article, we explore the shallow heuristics used by transformer-based pre-trained language models (PLMs) that have been fine-tuned for natural language inference (NLI).

Natural Language Inference

Exploring the Promises of Transformer-Based LMs for the Representation of Normative Claims in the Legal Domain

no code implementations • 25 Aug 2021 • Reto Gubelmann, Peter Hongler, Siegfried Handschuh

In this article, we explore the potential of transformer-based language models (LMs) to correctly represent normative statements in the legal domain, taking tax law as our use case.

Sentence
