Search Results for author: Moritz Böhle

Found 13 papers, 12 papers with code

B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable

1 code implementation • 1 Nov 2024 • Shreyash Arya, Sukrut Rao, Moritz Böhle, Bernt Schiele

We find that B-cosification can yield models that are on par with B-cos models trained from scratch in terms of interpretability, while often outperforming them in terms of classification performance at a fraction of the training cost.

Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery

1 code implementation • 19 Jul 2024 • Sukrut Rao, Sweta Mahajan, Moritz Böhle, Bernt Schiele

Concept Bottleneck Models (CBMs) have recently been proposed to address the 'black-box' problem of deep neural networks, by first mapping images to a human-understandable concept space and then linearly combining concepts for classification.
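
The snippet above describes the generic concept-bottleneck structure: images are mapped to scores over human-understandable concepts, and a single linear layer then combines those scores into class predictions. A minimal PyTorch sketch of that structure follows; the class name, helper names, and dimensions are illustrative assumptions, not the paper's discover-then-name pipeline.

```python
import torch
import torch.nn as nn

class ConceptBottleneckClassifier(nn.Module):
    """Toy concept-bottleneck model: backbone -> concept scores -> linear classifier."""

    def __init__(self, backbone: nn.Module, feat_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                             # any image encoder
        self.to_concepts = nn.Linear(feat_dim, n_concepts)   # concept-score layer
        self.classifier = nn.Linear(n_concepts, n_classes)   # linear combination of concepts

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)          # (B, feat_dim)
        concepts = self.to_concepts(feats)     # (B, n_concepts), the interpretable bottleneck
        logits = self.classifier(concepts)     # (B, n_classes)
        return logits, concepts

# Toy usage with a hypothetical backbone on 32x32 RGB inputs.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
model = ConceptBottleneckClassifier(backbone, feat_dim=128, n_concepts=64, n_classes=10)
logits, concepts = model(torch.randn(4, 3, 32, 32))
```

Because the final layer is linear, each class prediction decomposes into a weighted sum of concept scores, which is what makes the bottleneck inspectable.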

Good Teachers Explain: Explanation-Enhanced Knowledge Distillation

1 code implementation • 5 Feb 2024 • Amin Parchami-Araghi, Moritz Böhle, Sukrut Rao, Bernt Schiele

Knowledge Distillation (KD) has proven effective for compressing large teacher models into smaller student models.

Knowledge Distillation
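
For context, the standard knowledge-distillation objective (soft teacher targets plus hard labels) can be sketched as below. This is the vanilla KD loss, not the explanation-enhanced variant the paper proposes, and the temperature and weighting values are illustrative.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T: float = 4.0, alpha: float = 0.5):
    # Soft-target term: KL divergence between temperature-softened teacher and
    # student distributions (scaled by T^2 to keep gradient magnitudes stable).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```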

B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers

1 code implementation • 19 Jun 2023 • Moritz Böhle, Navdeeppal Singh, Mario Fritz, Bernt Schiele

We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training.
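
A minimal sketch of the B-cos idea for a single linear unit: the linear response is scaled by |cos(x, w)|^(B−1), so the unit can only produce large outputs when its weights align with the input. This is a simplified reading of the method, not the authors' official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """Toy B-cos linear layer: output = <w_hat, x> * |cos(x, w_hat)|^(B-1)."""

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.b = b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = F.normalize(self.weight, dim=1)                # unit-norm weight rows
        lin = F.linear(x, w_hat)                               # <w_hat, x>
        x_norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        cos = (lin / x_norm).abs()                             # |cos(x, w_hat)|
        return lin * cos.pow(self.b - 1)                       # suppress misaligned inputs
```

For B = 1 this reduces to an ordinary (unit-norm) linear layer; larger B increases the pressure on the weights to align with the inputs, which is what yields the interpretable weight-input alignment.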

Temperature Schedules for Self-Supervised Contrastive Methods on Long-Tail Data

1 code implementation • 23 Mar 2023 • Anna Kukleva, Moritz Böhle, Bernt Schiele, Hilde Kuehne, Christian Rupprecht

Such a schedule results in a constant 'task switching' between an emphasis on instance discrimination and group-wise discrimination, and thereby ensures that the model learns both group-wise features and instance-specific details.

Self-Supervised Learning
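
As an illustration of the mechanism described above, the sketch below oscillates the contrastive temperature between a small value (sharp, instance-level discrimination) and a large value (softer, group-level discrimination). The cosine form and the constants are assumptions for illustration, not the paper's exact schedule.

```python
import math

def temperature(step: int, period: int = 1000, t_min: float = 0.07, t_max: float = 1.0) -> float:
    """Cyclic temperature: oscillates smoothly between t_min and t_max."""
    phase = math.cos(2 * math.pi * step / period)   # in [-1, 1]
    return t_min + 0.5 * (t_max - t_min) * (1 + phase)

# The returned value is used as the softmax temperature of the usual
# InfoNCE / NT-Xent objective, e.g. logits = similarities / temperature(step).
```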

Better Understanding Differences in Attribution Methods via Systematic Evaluations

1 code implementation • 21 Mar 2023 • Sukrut Rao, Moritz Böhle, Bernt Schiele

Finally, we propose a post-processing smoothing step that significantly improves the performance of some attribution methods, and discuss its applicability.

Fairness
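
The smoothing step mentioned above can be pictured as a local averaging of the attribution map before it is evaluated; the kernel type and size below are assumptions, not necessarily the paper's exact choice.

```python
import torch
import torch.nn.functional as F

def smooth_attribution(attr: torch.Tensor, kernel_size: int = 9) -> torch.Tensor:
    """Locally average an attribution map. attr: (B, 1, H, W)."""
    pad = kernel_size // 2
    kernel = torch.ones(1, 1, kernel_size, kernel_size, device=attr.device)
    kernel = kernel / kernel.numel()                       # box (mean) filter
    attr = F.pad(attr, (pad, pad, pad, pad), mode="reflect")
    return F.conv2d(attr, kernel)                          # back to (B, 1, H, W)
```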

Studying How to Efficiently and Effectively Guide Models with Explanations

1 code implementation • ICCV 2023 • Sukrut Rao, Moritz Böhle, Amin Parchami-Araghi, Bernt Schiele

To better understand the effectiveness of the various design choices that have been explored in the context of model guidance, in this work we conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets.
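
One simple guidance loss of the kind compared in such evaluations penalises the fraction of positive attribution that falls outside an annotated object mask (e.g. a bounding box). The sketch below is an illustrative 'energy'-style loss; the loss variants evaluated in the paper differ in their details.

```python
import torch

def energy_guidance_loss(attr: torch.Tensor, mask: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """attr: (B, H, W) attributions; mask: (B, H, W), 1 inside the annotated region."""
    attr = attr.clamp_min(0)                     # keep only positive evidence
    inside = (attr * mask).sum(dim=(1, 2))
    total = attr.sum(dim=(1, 2)) + eps
    return (1 - inside / total).mean()           # fraction of attribution outside the mask
```

A term like this is added to the usual classification loss so the model is trained to be right for the annotated reasons.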

Holistically Explainable Vision Transformers

no code implementations • 20 Jan 2023 • Moritz Böhle, Mario Fritz, Bernt Schiele

Transformers increasingly dominate the machine learning landscape across many tasks and domains, which makes understanding their outputs increasingly important.

B-cos Networks: Alignment is All We Need for Interpretability

1 code implementation • CVPR 2022 • Moritz Böhle, Mario Fritz, Bernt Schiele

We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training.

Towards Better Understanding Attribution Methods

1 code implementation • CVPR 2022 • Sukrut Rao, Moritz Böhle, Bernt Schiele

Finally, we propose a post-processing smoothing step that significantly improves the performance of some attribution methods, and discuss its applicability.

Explanation Fidelity Evaluation, Image Classification +1

Optimising for Interpretability: Convolutional Dynamic Alignment Networks

1 code implementation • 27 Sep 2021 • Moritz Böhle, Mario Fritz, Bernt Schiele

As a result, CoDA Nets model the classification prediction through a series of input-dependent linear transformations, allowing for linear decomposition of the output into individual input contributions.
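
The decomposition mentioned in the snippet follows directly from the model being linear in its input for each fixed input: if logits = W(x) @ x, the logit of a class splits exactly into per-input contributions. A minimal sketch, with a random matrix standing in for the effective linear map a CoDA Net would compute:

```python
import torch

def contribution_map(W_x: torch.Tensor, x: torch.Tensor, class_idx: int) -> torch.Tensor:
    """W_x: (n_classes, n_inputs) input-dependent linear map; x: (n_inputs,)."""
    contributions = W_x[class_idx] * x                              # per-input contribution to the logit
    assert torch.allclose(contributions.sum(), W_x[class_idx] @ x)  # exact decomposition
    return contributions

# Illustrative usage with random stand-ins for W(x) and x.
W_x = torch.randn(10, 784)
x = torch.randn(784)
contrib = contribution_map(W_x, x, class_idx=3)   # reshape to image size to visualise
```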

Convolutional Dynamic Alignment Networks for Interpretable Classifications

1 code implementation • CVPR 2021 • Moritz Böhle, Mario Fritz, Bernt Schiele

Given the alignment of the Dynamic Alignment Units (DAUs), the resulting contribution maps align with discriminative input patterns.

Visualizing evidence for Alzheimer's disease in deep neural networks trained on structural MRI data

1 code implementation • 18 Mar 2019 • Moritz Böhle, Fabian Eitel, Martin Weygandt, Kerstin Ritter

In this study, we propose using layer-wise relevance propagation (LRP) to visualize convolutional neural network decisions for AD based on MRI data.

2D Human Pose Estimation, Quantitative Methods
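
For reference, the core of LRP is a layer-local redistribution rule; below is a minimal sketch of the epsilon-rule for a single linear layer, which in practice is applied layer by layer through the whole CNN. It illustrates the general technique, not the study's exact configuration.

```python
import torch

def lrp_epsilon_linear(x, weight, bias, relevance_out, eps: float = 1e-6):
    """x: (n_in,), weight: (n_out, n_in), bias: (n_out,), relevance_out: (n_out,)."""
    z = weight @ x + bias                                        # pre-activations
    stabilizer = eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
    s = relevance_out / (z + stabilizer)                         # stabilised relevance ratios
    c = weight.t() @ s                                           # redistribute to inputs
    return x * c                                                 # input relevances
```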
