no code implementations • 17 Apr 2025 • Robin Hesse, Jonas Fischer, Simone Schaub-Meyer, Stefan Roth
Mechanistic interpretability is concerned with analyzing individual components in a (convolutional) neural network (CNN) and how they form larger circuits representing decision mechanisms.
1 code implementation • 28 Mar 2025 • Ada Gorgun, Bernt Schiele, Jonas Fischer
Feature visualization (FV) is a powerful tool to decode what information neurons respond to and hence to better understand the reasoning of such networks.
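To make the idea concrete, below is a minimal activation-maximization sketch in PyTorch, not the paper's method: it optimizes an input image to excite one channel of an intermediate layer. The model, layer, and unit index are illustrative assumptions.

```python
# Minimal activation-maximization sketch (illustrative, not the paper's method).
# Assumes torchvision is available; the chosen layer and unit are arbitrary examples.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activation = {}
def hook(_, __, output):
    activation["feat"] = output

# Register a hook on an intermediate layer (example: the last residual block).
model.layer4.register_forward_hook(hook)

img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([img], lr=0.05)
unit = 10                                                # example channel index

for _ in range(200):
    optimizer.zero_grad()
    model(img)
    # Maximize the mean activation of one channel; add light L2 regularization.
    loss = -activation["feat"][0, unit].mean() + 1e-4 * img.pow(2).sum()
    loss.backward()
    optimizer.step()
```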
no code implementations • 14 Mar 2025 • Fawaz Sammani, Jonas Fischer, Nikos Deligiannis
We apply our method to 40 visual classifiers and demonstrate two primary applications: 1) building both label-free and zero-shot concept bottleneck models, thereby converting any classifier into an inherently interpretable one, and 2) zero-shot decoding of visual features into natural language.
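As a rough illustration of a zero-shot concept bottleneck, the hedged sketch below scores images against a bank of text concepts with CLIP and places a small linear head on the concept scores; the concept names, model checkpoint, and head are placeholder assumptions, not the paper's pipeline.

```python
# Hedged sketch of a zero-shot concept bottleneck: score images against text
# concepts with CLIP, then classify from the concept scores. Concept and class
# names here are placeholders, not from the paper.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

concepts = ["stripes", "whiskers", "wings", "wheels"]     # example concept bank

def concept_scores(images):
    inputs = processor(text=concepts, images=images, return_tensors="pt", padding=True)
    out = model(**inputs)
    # Scaled image-concept similarities form the bottleneck representation.
    return out.logits_per_image            # shape: (num_images, num_concepts)

# A simple linear head on top of the concept scores yields an interpretable classifier.
head = torch.nn.Linear(len(concepts), 2)   # e.g., 2 target classes
```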
no code implementations • 10 Mar 2025 • Nils Philipp Walter, Jilles Vreeken, Jonas Fischer
Neural networks are part of daily-life decision-making, including in high-stakes settings where understanding and transparency are key.
no code implementations • 14 Jun 2024 • Jonas Fischer, Rong Ma
As such, LDEs have to be faithful to the original high-dimensional data, i.e., they should represent the relationships encoded in the data at both a local and a global scale.
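One simple way to probe local faithfulness, sketched below under the assumption of a k-nearest-neighbor criterion (not necessarily the paper's measure), is to check how many high-dimensional neighbors survive in the embedding.

```python
# Hedged sketch of a simple local-faithfulness check for a low-dimensional
# embedding: the fraction of k-nearest neighbors preserved between the
# high-dimensional data X and its embedding Z (not the paper's exact measure).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_overlap(X, Z, k=15):
    nn_hi = NearestNeighbors(n_neighbors=k + 1).fit(X)
    nn_lo = NearestNeighbors(n_neighbors=k + 1).fit(Z)
    idx_hi = nn_hi.kneighbors(X, return_distance=False)[:, 1:]   # drop self
    idx_lo = nn_lo.kneighbors(Z, return_distance=False)[:, 1:]
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(idx_hi, idx_lo)]
    return float(np.mean(overlaps))
```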
1 code implementation • 5 Mar 2024 • Intekhab Hossain, Jonas Fischer, Rebekka Burkholz, John Quackenbush
The practical utility of machine learning models in the sciences often hinges on their interpretability.
no code implementations • 7 Dec 2023 • Nils Philipp Walter, Jonas Fischer, Jilles Vreeken
Discovering patterns in data that best describe the differences between classes allows us to hypothesize and reason about class-specific mechanisms.
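A very rough proxy for such contrastive pattern discovery, sketched below for binary data, is to rank feature pairs by how much their co-occurrence frequency differs between two classes; the paper's method is more sophisticated, so treat this as an assumption-laden illustration.

```python
# Hedged sketch of contrasting classes by pattern support: for binary data,
# rank feature pairs by the difference in how often they co-occur in each
# class (a much simpler proxy than the paper's approach).
import numpy as np
from itertools import combinations

def top_contrast_pairs(X, y, top=10):
    X = X.astype(bool)
    diffs = []
    for i, j in combinations(range(X.shape[1]), 2):
        both = X[:, i] & X[:, j]
        support_pos = both[y == 1].mean()
        support_neg = both[y == 0].mean()
        diffs.append(((i, j), abs(support_pos - support_neg)))
    return sorted(diffs, key=lambda t: t[1], reverse=True)[:top]
```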
no code implementations • 18 Nov 2023 • Michael A. Hedderich, Jonas Fischer, Dietrich Klakow, Jilles Vreeken
Characterizing these errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors, and also provides a way to act on and improve the classifier.
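As a hedged illustration of describing errors in interpretable terms (not the paper's algorithm), one could fit a shallow decision tree to predict misclassification from interpretable features and read off its rules:

```python
# Hedged sketch of describing where a classifier errs: fit a shallow decision
# tree on interpretable features with "was misclassified" as the target and
# read off its rules (a proxy for the paper's rule-mining approach).
from sklearn.tree import DecisionTreeClassifier, export_text

def describe_errors(features, y_true, y_pred, feature_names, max_depth=3):
    errors = (y_true != y_pred).astype(int)
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(features, errors)
    return export_text(tree, feature_names=list(feature_names))
```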
no code implementations • 31 Jan 2023 • Jonas Fischer, Rebekka Burkholz, Jilles Vreeken
We show, however, that these methods fail to reconstruct local properties, such as relative differences in densities.
1 code implementation • ICLR 2022 • Jonas Fischer, Rebekka Burkholz
The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that aim to reduce the computational costs associated with deep learning during training and model deployment.
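A minimal sketch of lottery-ticket-style iterative magnitude pruning with weight rewinding is given below; the sparsity level and the global-threshold choice are illustrative assumptions rather than the procedure from any specific paper.

```python
# Hedged sketch of lottery-ticket-style magnitude pruning in PyTorch:
# train, prune the smallest weights globally, rewind the survivors to their
# initial values, and retrain (details simplified versus the literature).
import copy
import torch

def magnitude_prune(model, init_state, sparsity=0.8):
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:
            masks[name] = (p.detach().abs() > threshold).float()
    # Rewind remaining weights to initialization and apply the mask.
    model.load_state_dict(copy.deepcopy(init_state))
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
    return masks
```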
no code implementations • 21 Oct 2021 • Jonas Fischer, Advait Gadhikar, Rebekka Burkholz
The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural networks could offer a computationally efficient alternative to deep learning with stochastic gradient descent.
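To illustrate the idea of finding a subnetwork without training the weights, the sketch below keeps random weights frozen and learns only a score per weight, selecting the top-scored fraction in the forward pass (an edge-popup-style layer, heavily simplified and not this paper's construction).

```python
# Hedged sketch in the spirit of the strong lottery ticket hypothesis: keep the
# random weights fixed and only learn a score per weight; the top-scored
# weights form the subnetwork (an edge-popup-style layer, much simplified).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, keep_ratio=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1,
                                   requires_grad=False)      # frozen random weights
        self.scores = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.keep_ratio = keep_ratio

    def forward(self, x):
        k = int(self.scores.numel() * self.keep_ratio)
        threshold = self.scores.flatten().kthvalue(self.scores.numel() - k + 1).values
        mask = (self.scores >= threshold).float()
        # Straight-through estimator so gradients flow to the scores.
        mask = mask + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)
```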
2 code implementations • 18 Oct 2021 • Michael Hedderich, Jonas Fischer, Dietrich Klakow, Jilles Vreeken
Characterizing these errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors, and also provides a way to act on and improve the classifier.
1 code implementation • 7 Oct 2021 • Michael Kamp, Jonas Fischer, Jilles Vreeken
Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.
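For context, a minimal federated-averaging (FedAvg) round might look like the sketch below: each party trains a local copy and only parameters are averaged; the optimizer, loss, and single-round structure are simplifying assumptions, not this paper's protocol.

```python
# Hedged sketch of federated averaging (FedAvg): each party trains locally and
# only model parameters are averaged on the server; no raw data is shared.
import copy
import torch

def federated_round(global_model, clients, local_steps=1, lr=0.01):
    local_states = []
    for data_loader in clients:                       # one loader per party
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_steps):
            for x, y in data_loader:
                opt.zero_grad()
                torch.nn.functional.cross_entropy(local(x), y).backward()
                opt.step()
        local_states.append(local.state_dict())
    # Server: average the client parameters into the new global model.
    avg = {k: torch.stack([s[k].float() for s in local_states]).mean(0)
           for k in local_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```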
no code implementations • 29 Sep 2021 • Michael Kamp, Jonas Fischer, Jilles Vreeken
Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.
no code implementations • 2 Mar 2021 • Edith Heiter, Jonas Fischer, Jilles Vreeken
Low-dimensional embedding techniques such as tSNE and UMAP allow visualizing high-dimensional data and thereby facilitate the discovery of interesting structure.
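A minimal usage sketch (assuming scikit-learn and umap-learn are installed; the data here is a random placeholder):

```python
# Minimal usage sketch: embed high-dimensional data with t-SNE and UMAP for
# visual inspection (umap-learn assumed installed; parameters are defaults).
import numpy as np
from sklearn.manifold import TSNE
import umap

X = np.random.rand(500, 50)                     # placeholder high-dimensional data
Z_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)
Z_umap = umap.UMAP(n_components=2, n_neighbors=15).fit_transform(X)
```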
no code implementations • 1 Jan 2021 • Jonas Fischer, Anna Oláh, Jilles Vreeken
In particular, we consider the activation values of a network for given data, and propose to mine noise-robust rules of the form $X \rightarrow Y$, where $X$ and $Y$ are sets of neurons in different layers.
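A crude stand-in for this rule mining, sketched below, binarizes activations and scores a candidate rule $X \rightarrow Y$ by its support and confidence; the thresholding and scoring are assumptions, and the paper's noise-robust, description-length-based search is not reproduced here.

```python
# Hedged sketch of rule mining over activations: binarize neuron activations,
# then score simple rules X -> Y between layers by support and confidence
# (a crude stand-in for the paper's noise-robust mining).
import numpy as np

def binarize(acts, threshold=0.0):
    return (acts > threshold).astype(int)          # samples x neurons

def rule_confidence(A_l, A_lplus1, X, Y):
    """X, Y are lists of neuron indices in consecutive layers."""
    fires_X = np.all(A_l[:, X] == 1, axis=1)
    fires_Y = np.all(A_lplus1[:, Y] == 1, axis=1)
    support = fires_X.mean()
    confidence = fires_Y[fires_X].mean() if fires_X.any() else 0.0
    return support, confidence
```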