Search Results for author: Michael Fromm

Found 10 papers, 4 papers with code

TACAM: Topic And Context Aware Argument Mining

no code implementations • 26 May 2019 • Michael Fromm, Evgeniy Faerman, Thomas Seidl

In previous works, the usual approach is to use a standard search engine to extract text passages that are relevant to the given topic and then apply an argument recognition algorithm to select arguments from them.

Argument Mining • Knowledge Graphs

Unsupervised Anomaly Detection for X-Ray Images

1 code implementation • 29 Jan 2020 • Diana Davletshina, Valentyn Melnychuk, Viet Tran, Hitansh Singla, Max Berrendorf, Evgeniy Faerman, Michael Fromm, Matthias Schubert

Therefore, we adopt state-of-the-art approaches for unsupervised learning to detect anomalies and show how the outputs of these methods can be explained.

Unsupervised Anomaly Detection

Diversity Aware Relevance Learning for Argument Search

1 code implementation • 4 Nov 2020 • Michael Fromm, Max Berrendorf, Sandra Obermeier, Thomas Seidl, Evgeniy Faerman

In this work, we focus on the problem of retrieving relevant arguments for a query claim covering diverse aspects.

Argument Retrieval • Clustering • +1

Towards a Holistic View on Argument Quality Prediction

no code implementations • 19 May 2022 • Michael Fromm, Max Berrendorf, Johanna Reiml, Isabelle Mayerhofer, Siddharth Bhargava, Evgeniy Faerman, Thomas Seidl

While there are works on the automated estimation of argument strength, their scope is narrow: they focus on isolated datasets and neglect the interactions with related argument mining tasks, such as argument identification, evidence detection, or emotional appeal.

Argument Mining

Tokenizer Choice For LLM Training: Negligible or Crucial?

no code implementations • 12 Oct 2023 • Mehdi Ali, Michael Fromm, Klaudia Thellmann, Richard Rutmann, Max Lübbering, Johannes Leveling, Katrin Klug, Jan Ebert, Niclas Doll, Jasper Schulze Buschhoff, Charvi Jain, Alexander Arno Weber, Lena Jurkschat, Hammam Abdelwahab, Chelsea John, Pedro Ortiz Suarez, Malte Ostendorff, Samuel Weinbach, Rafet Sifa, Stefan Kesselheim, Nicolas Flores-Herr

The recent success of Large Language Models (LLMs) has been predominantly driven by curating the training dataset composition, scaling model architectures and dataset sizes, and advancements in pretraining objectives, leaving tokenizer influence as a blind spot.

Investigating Multilingual Instruction-Tuning: Do Polyglot Models Demand for Multilingual Instructions?

no code implementations • 21 Feb 2024 • Alexander Arno Weber, Klaudia Thellmann, Jan Ebert, Nicolas Flores-Herr, Jens Lehmann, Michael Fromm, Mehdi Ali

The adaptation of multilingual pre-trained Large Language Models (LLMs) into eloquent and helpful assistants is essential to facilitate their use across different language regions.

Instruction Following
