Search Results for author: Samuel Weinbach

Found 8 papers, 5 papers with code

Efficient Parallelization Layouts for Large-Scale Distributed Model Training

1 code implementation • 9 Nov 2023 • Johannes Hagemann, Samuel Weinbach, Konstantin Dobler, Maximilian Schall, Gerard de Melo

In this work, we conduct a comprehensive ablation study of possible training configurations for large language models.

Tokenizer Choice For LLM Training: Negligible or Crucial?

no code implementations • 12 Oct 2023 • Mehdi Ali, Michael Fromm, Klaudia Thellmann, Richard Rutmann, Max Lübbering, Johannes Leveling, Katrin Klug, Jan Ebert, Niclas Doll, Jasper Schulze Buschhoff, Charvi Jain, Alexander Arno Weber, Lena Jurkschat, Hammam Abdelwahab, Chelsea John, Pedro Ortiz Suarez, Malte Ostendorff, Samuel Weinbach, Rafet Sifa, Stefan Kesselheim, Nicolas Flores-Herr

The recent success of LLMs has been driven predominantly by curating the training dataset composition, scaling model architectures and dataset sizes, and advancing pretraining objectives, leaving the influence of the tokenizer as a blind spot.

AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation

1 code implementation • NeurIPS 2023 • Björn Deiseroth, Mayukh Deb, Samuel Weinbach, Manuel Brack, Patrick Schramowski, Kristian Kersting

Generative transformer models have become increasingly complex, with large numbers of parameters and the ability to process multiple input modalities.

Domain-Level Explainability -- A Challenge for Creating Trust in Superhuman AI Strategies

no code implementations • 12 Nov 2020 • Jonas Andrulis, Ole Meyer, Grégory Schott, Samuel Weinbach, Volker Gruhn

For strategic problems, intelligent systems based on Deep Reinforcement Learning (DRL) have demonstrated an impressive ability to learn advanced solutions that can go far beyond human capabilities, especially when dealing with complex scenarios.

Task: Explainable Artificial Intelligence (XAI)
