1 code implementation • 8 Nov 2024 • Laure Ciernik, Lorenz Linhardt, Marco Morik, Jonas Dippel, Simon Kornblith, Lukas Muttenthaler
Moreover, the correspondence between representational similarities and the models' task behavior is dataset-dependent and is most pronounced for single-domain datasets.
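As a rough illustration of how such representational similarities between models can be measured, here is a minimal sketch using linear centered kernel alignment (CKA), one common similarity index; the feature matrices, shapes, and random data below are placeholder assumptions, not the paper's actual models or datasets.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices
    X (n_samples x d1) and Y (n_samples x d2)."""
    # Center each feature dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-based similarity, invariant to rotations and isotropic scaling.
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

# Illustrative usage: activations of two hypothetical models
# on the same 1,000 stimuli.
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(1000, 512))  # e.g., model A features
feats_b = rng.normal(size=(1000, 768))  # e.g., model B features
print(f"CKA similarity: {linear_cka(feats_a, feats_b):.3f}")
```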
no code implementations • 10 Sep 2024 • Teresa Dorszewski, Lenka Tětková, Lorenz Linhardt, Lars Kai Hansen
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems.
no code implementations • 13 Mar 2024 • Lorenz Linhardt, Marco Morik, Sidney Bender, Naima Elosegui Borras
Diffusion models, trained on large amounts of data, have shown remarkable performance in image synthesis.
no code implementations • 12 Apr 2023 • Lorenz Linhardt, Klaus-Robert Müller, Grégoire Montavon
In this paper, we demonstrate that user acceptance of explanations does not guarantee that a machine learning model is robust against Clever Hans effects, which may remain undetected.
1 code implementation • 2 Nov 2022 • Lukas Muttenthaler, Jonas Dippel, Lorenz Linhardt, Robert A. Vandermeulen, Simon Kornblith
Linear transformations of neural network representations, learned from behavioral responses on one dataset, substantially improve alignment with human similarity judgments on the other two datasets.
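A minimal sketch of the idea of learning such a linear transformation, here fit with ridge regression from model features onto a hypothetical human-derived similarity embedding; the array names, dimensions, and random data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Placeholder data: model features and a human-derived embedding
# for the same set of images (both random here).
rng = np.random.default_rng(0)
n_images, d_model, d_human = 1854, 512, 49
model_feats = rng.normal(size=(n_images, d_model))
human_embed = rng.normal(size=(n_images, d_human))

# Ridge regression: learn a linear map W from model representations
# into the human similarity space.
lam = 1.0  # regularization strength, a hyperparameter to tune
A = model_feats.T @ model_feats + lam * np.eye(d_model)
W = np.linalg.solve(A, model_feats.T @ human_embed)

# Similarities computed in the transformed space can then be compared
# with human similarity judgments on held-out images or datasets.
transformed = model_feats @ W
sim_matrix = transformed @ transformed.T
```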
1 code implementation • 3 Feb 2019 • Patrick Schwab, Lorenz Linhardt, Stefan Bauer, Joachim M. Buhmann, Walter Karlen
Estimating what would be an individual's potential response to varying levels of exposure to a treatment is of high practical relevance in fields such as healthcare, economics, and public policy.
1 code implementation • ICLR 2019 • Patrick Schwab, Lorenz Linhardt, Walter Karlen
However, current methods for training neural networks for counterfactual inference on observational data are either overly complex, limited to settings with only two available treatments, or both.
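To make the multi-treatment setting concrete, here is a minimal sketch of a shared-representation network with one outcome head per treatment, an architecture that by construction is not restricted to two treatments; this is a generic illustration under assumed shapes, not the paper's specific method.

```python
import torch
import torch.nn as nn

class MultiTreatmentNet(nn.Module):
    """Shared representation with one outcome head per treatment,
    supporting any number of available treatments."""
    def __init__(self, n_features, n_treatments, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One regression head per available treatment.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(n_treatments)]
        )

    def forward(self, x, treatment):
        phi = self.shared(x)
        # Outcome predictions under every treatment: shape (B, T, 1).
        outs = torch.stack([head(phi) for head in self.heads], dim=1)
        # Select the prediction for the treatment actually assigned.
        sel = outs.gather(1, treatment.view(-1, 1, 1))
        return sel.squeeze(-1).squeeze(-1)

# Illustrative usage with random data: 3 treatments, 10 covariates.
net = MultiTreatmentNet(n_features=10, n_treatments=3)
x = torch.randn(32, 10)
t = torch.randint(0, 3, (32,))
y_hat = net(x, t)  # factual outcome predictions, shape (32,)
```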