Search Results for author: Jonas Fischer

Found 16 papers, 5 papers with code

Disentangling Polysemantic Channels in Convolutional Neural Networks

no code implementations • 17 Apr 2025 • Robin Hesse, Jonas Fischer, Simone Schaub-Meyer, Stefan Roth

Mechanistic interpretability is concerned with analyzing individual components in a (convolutional) neural network (CNN) and how they form larger circuits representing decision mechanisms.

VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow

1 code implementation • 28 Mar 2025 • Ada Gorgun, Bernt Schiele, Jonas Fischer

Feature visualization (FV) is a powerful tool to decode what information neurons are responding to, and hence to better understand the reasoning behind neural networks.

Decision Making
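
For readers unfamiliar with feature visualization, the sketch below shows the classic activation-maximization recipe in PyTorch: an input image is optimized so that a chosen channel of a pretrained CNN fires strongly. It illustrates generic FV only, not the distribution-alignment and information-flow approach proposed in VITAL; the ResNet-18 model, the `layer4` hook, and channel index 42 are arbitrary placeholders, and real FV pipelines usually add regularization and input transformations.

```python
import torch
import torchvision.models as models

# Generic feature visualization via activation maximization (not VITAL's method).
model = models.resnet18(weights="IMAGENET1K_V1").eval()

activation = {}
def hook(module, inp, out):
    activation["feat"] = out

# Hypothetical choice: visualize channel 42 of layer4's output.
model.layer4.register_forward_hook(hook)

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(img)
    # Maximize the mean activation of the chosen channel.
    loss = -activation["feat"][0, 42].mean()
    loss.backward()
    optimizer.step()

visualization = img.detach().clamp(0, 1)  # crude range clipping for display
```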

Unlocking Open-Set Language Accessibility in Vision Models

no code implementations • 14 Mar 2025 • Fawaz Sammani, Jonas Fischer, Nikos Deligiannis

We apply our method to 40 visual classifiers and demonstrate two primary applications: 1) building both label-free and zero-shot concept bottleneck models, thereby converting any classifier into an inherently interpretable one, and 2) zero-shot decoding of visual features into natural language.
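
As a rough illustration of what a label-free, zero-shot concept bottleneck can look like, here is a sketch that scores an image against concept prompts with CLIP and predicts a class from those concept scores alone. This is a generic CLIP-based construction under assumed concept and class names and a placeholder image path, not the alignment method of this paper.

```python
import torch
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

# Sketch of a zero-shot concept bottleneck: concept scores are image-text
# similarities, and the class is predicted from those scores alone.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

concepts = ["striped fur", "whiskers", "feathers", "a long beak"]   # hypothetical concepts
classes = {"cat": [0, 1], "bird": [2, 3]}                           # class -> concept indices

image = Image.open("example.jpg")                                   # placeholder image path
inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)
    # Cosine-similarity logits between the image and each concept prompt.
    concept_scores = out.logits_per_image.squeeze(0)

# Zero-shot class score: average evidence of the concepts assigned to the class.
class_scores = {c: concept_scores[idx].mean().item() for c, idx in classes.items()}
prediction = max(class_scores, key=class_scores.get)
```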

Now you see me! A framework for obtaining class-relevant saliency maps

no code implementations • 10 Mar 2025 • Nils Philipp Walter, Jilles Vreeken, Jonas Fischer

Neural networks are part of daily-life decision-making, including in high-stakes settings where understanding and transparency are key.

Decision Making

Sailing in high-dimensional spaces: Low-dimensional embeddings through angle preservation

no code implementations • 14 Jun 2024 • Jonas Fischer, Rong Ma

As such, LDEs have to be faithful to the original high-dimensional data, i.e., they should represent the relationships encoded in the data at both a local and a global scale.
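
To make faithfulness at a local and global scale concrete, the following sketch (an illustration of the general idea, not the paper's algorithm) compares angles formed by random point triplets before and after embedding; an angle-preserving LDE would keep the two sets of angles strongly correlated. The data and the truncation-to-2D "embedding" are synthetic stand-ins.

```python
import numpy as np

def triplet_angles(points, triplets):
    """Angle at the first point of each triplet (a, b, c), in radians."""
    a, b, c = points[triplets[:, 0]], points[triplets[:, 1]], points[triplets[:, 2]]
    u, v = b - a, c - a
    cos = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(0)
X_high = rng.normal(size=(500, 50))   # stand-in for high-dimensional data
X_low = X_high[:, :2]                 # stand-in for an embedding (e.g., tSNE/UMAP output)

triplets = rng.integers(0, len(X_high), size=(2000, 3))
triplets = triplets[(triplets[:, 0] != triplets[:, 1]) & (triplets[:, 0] != triplets[:, 2])]

# Correlation close to 1 would indicate that angular relations are preserved.
score = np.corrcoef(triplet_angles(X_high, triplets), triplet_angles(X_low, triplets))[0, 1]
print(f"angle-preservation correlation: {score:.3f}")
```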

Finding Interpretable Class-Specific Patterns through Efficient Neural Search

no code implementations • 7 Dec 2023 • Nils Philipp Walter, Jonas Fischer, Jilles Vreeken

Discovering patterns in data that best describe the differences between classes allows us to hypothesize and reason about class-specific mechanisms.
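
A brute-force baseline conveys what a class-specific pattern is: a set of features that co-occurs far more often in one class than in the other. The sketch below scores all feature pairs on toy binary data; the paper's contribution is an efficient neural search over such patterns, which is not reproduced here.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
# Toy binary data: 200 samples, 10 binary features, two classes (hypothetical).
X = rng.integers(0, 2, size=(200, 10))
y = rng.integers(0, 2, size=200)
X[y == 1, 2] = 1
X[y == 1, 5] = 1   # plant a pattern {2, 5} that is characteristic of class 1

def support(data, pattern):
    """Fraction of rows in which every feature of the pattern is active."""
    return data[:, pattern].all(axis=1).mean()

# Score every feature pair by the gap in support between the two classes.
best = max(
    combinations(range(X.shape[1]), 2),
    key=lambda p: abs(support(X[y == 1], list(p)) - support(X[y == 0], list(p))),
)
print("most class-specific pattern:", best)
```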

Understanding and Mitigating Classification Errors Through Interpretable Token Patterns

no code implementations • 18 Nov 2023 • Michael A. Hedderich, Jonas Fischer, Dietrich Klakow, Jilles Vreeken

Characterizing these errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors, and also gives a way to act on and improve the classifier.

Classification NER +1
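
A simplified sketch of the underlying idea: collect token combinations that appear disproportionately often in inputs the classifier gets wrong. The toy sentences and the restriction to pairwise patterns are illustrative assumptions; the paper uses a more principled pattern-mining approach rather than this naive counting.

```python
from collections import Counter
from itertools import combinations

# Toy tokenized inputs with a flag for whether the classifier got them wrong (hypothetical data).
examples = [
    (["the", "quick", "brown", "fox"], False),
    (["a", "rare", "proper", "noun"], True),
    (["another", "rare", "proper", "noun"], True),
    (["plain", "text", "sentence"], False),
]

def count_pairs(sentences):
    counts = Counter()
    for tokens in sentences:
        counts.update(combinations(sorted(set(tokens)), 2))
    return counts

wrong = count_pairs([t for t, err in examples if err])
right = count_pairs([t for t, err in examples if not err])

# Rank token pairs by how much more often they occur in misclassified inputs.
ranked = sorted(wrong, key=lambda p: wrong[p] - right.get(p, 0), reverse=True)
print("token patterns most associated with errors:", ranked[:3])
```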

Preserving local densities in low-dimensional embeddings

no code implementations • 31 Jan 2023 • Jonas Fischer, Rebekka Burkholz, Jilles Vreeken

We show, however, that these methods fail to reconstruct local properties, such as relative differences in densities.
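
One way to see what preserving relative densities asks for, sketched below on synthetic data: estimate each point's local radius (distance to its k-th nearest neighbor) in the original space and in the embedding, and check whether the two are rank-correlated. This diagnostic is an illustration of the concept, not the measure or method used in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_radius(X, k=10):
    """Distance to the k-th nearest neighbor, a simple proxy for local density."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    return dist[:, -1]

rng = np.random.default_rng(0)
X_high = rng.normal(size=(300, 30)) * np.linspace(0.5, 2.0, 300)[:, None]  # varying density
X_low = X_high[:, :2]   # stand-in for a tSNE/UMAP embedding

# If relative densities were preserved, these radii would be strongly rank-correlated.
r_high, r_low = local_radius(X_high), local_radius(X_low)
rank_corr = np.corrcoef(np.argsort(np.argsort(r_high)), np.argsort(np.argsort(r_low)))[0, 1]
print(f"rank correlation of local radii: {rank_corr:.3f}")
```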

Plant 'n' Seek: Can You Find the Winning Ticket?

1 code implementation • ICLR 2022 • Jonas Fischer, Rebekka Burkholz

The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that aim to reduce the computational costs associated with deep learning during training and model deployment.

Lottery Tickets with Nonzero Biases

no code implementations • 21 Oct 2021 • Jonas Fischer, Advait Gadhikar, Rebekka Burkholz

The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural networks could offer a computationally efficient alternative to deep learning with stochastic gradient descent.

Label-Descriptive Patterns and Their Application to Characterizing Classification Errors

2 code implementations • 18 Oct 2021 • Michael Hedderich, Jonas Fischer, Dietrich Klakow, Jilles Vreeken

Characterizing these errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors, and also gives a way to act on and improve the classifier.

Descriptive named-entity-recognition +4

Federated Learning from Small Datasets

1 code implementation • 7 Oct 2021 • Michael Kamp, Jonas Fischer, Jilles Vreeken

Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.

Federated Learning
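
For context, the canonical federated-averaging loop looks like the sketch below: each party trains locally and only model parameters are averaged, never raw data. This is plain FedAvg on toy tensors; the paper's own scheme for learning from small local datasets (daisy-chaining models between parties) is not reproduced here.

```python
import copy
import torch
import torch.nn as nn

# Minimal federated averaging (FedAvg) sketch on toy data.
def local_update(model, data, targets, epochs=1, lr=0.1):
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(data), targets)
        loss.backward()
        opt.step()
    return model.state_dict()

def average(state_dicts):
    """Parameter-wise mean of the client models."""
    return {k: torch.stack([sd[k] for sd in state_dicts]).mean(0) for k in state_dicts[0]}

global_model = nn.Linear(20, 2)
clients = [(torch.randn(16, 20), torch.randint(0, 2, (16,))) for _ in range(3)]  # toy local datasets

for rnd in range(5):
    updates = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(average(updates))   # parameters are shared, data never is
```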

Picking Daisies in Private: Federated Learning from Small Datasets

no code implementations • 29 Sep 2021 • Michael Kamp, Jonas Fischer, Jilles Vreeken

Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.

Federated Learning

Factoring out prior knowledge from low-dimensional embeddings

no code implementations • 2 Mar 2021 • Edith Heiter, Jonas Fischer, Jilles Vreeken

Low-dimensional embedding techniques such as tSNE and UMAP allow us to visualize high-dimensional data and thereby facilitate the discovery of interesting structure.

What's in the Box? Exploring the Inner Life of Neural Networks with Robust Rules

no code implementations • 1 Jan 2021 • Jonas Fischer, Anna Oláh, Jilles Vreeken

In particular, we consider activation values of a network for given data, and propose to mine noise-robust rules of the form $X \rightarrow Y$, where $X$ and $Y$ are sets of neurons in different layers.
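
Since the rules take the form of a neuron set $X$ in one layer implying a neuron set $Y$ in another, a stripped-down version of the idea can be sketched with binarized activations and a standard rule-confidence score, as below. This exhaustive toy search is only an illustration: it omits the noise robustness and the scalable search the paper actually proposes, and the activation data is synthetic.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical binarized activations: True if a neuron fires, for 1000 inputs.
layer_l = rng.random((1000, 8)) > 0.5            # neurons in layer l
layer_l1 = rng.random((1000, 6)) > 0.5           # neurons in layer l+1
layer_l1[:, 0] = layer_l[:, 2] & layer_l[:, 5]   # plant a rule {2, 5} -> {0}

def confidence(X_cols, Y_cols):
    """P(all of Y fire | all of X fire), the usual rule-confidence measure."""
    antecedent = layer_l[:, X_cols].all(axis=1)
    if antecedent.sum() == 0:
        return 0.0
    return float(layer_l1[antecedent][:, Y_cols].all(axis=1).mean())

# Exhaustively score simple rules X -> Y with |X| = 2 and |Y| = 1.
rules = [((i, j), (k,), confidence([i, j], [k]))
         for i, j in combinations(range(8), 2) for k in range(6)]
best = max(rules, key=lambda r: r[2])
print("strongest rule:", best)   # expected: neurons {2, 5} in layer l -> neuron 0 in layer l+1
```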
