2 code implementations • 11 Apr 2024 • Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip Torr, Fabio Pizzati
Hence, we propose Latent Guard, a framework designed to improve safety measures in text-to-image generation.
1 code implementation • 1 Jan 2024 • Razieh Rezaei, Alireza Dizaji, Ashkan Khakzar, Anees Kazi, Nassir Navab, Daniel Rueckert
In this work, we assess attribution methods from a perspective not previously explored in the graph domain: retraining.
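Retraining-based evaluation removes the features an attribution method ranks as most important, retrains the model, and measures the performance drop. The following is a minimal sketch of that idea on toy data; the linear model, the weight-magnitude attribution proxy, and the data are illustrative assumptions, not the paper's graph-domain setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: only the first 3 of 10 features are informative.
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.1 * rng.normal(size=200)

def fit_and_score(X, y):
    """Least-squares fit; return mean squared error on the training data."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((X @ w - y) ** 2))

# Attribution proxy: magnitude of the fitted weights.
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)
ranking = np.argsort(-np.abs(w_full))  # most important first

# Remove top-k attributed features (zero the columns), refit, and score.
err = {}
for k in (0, 3):
    X_masked = X.copy()
    X_masked[:, ranking[:k]] = 0.0
    err[k] = fit_and_score(X_masked, y)
print(err)
```

If the attribution ranking is faithful, removing the top features should cause a large error increase, as it does here when the three informative columns are zeroed out.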
1 code implementation • 26 Oct 2023 • Jindong Gu, Xiaojun Jia, Pau de Jorge, Wenqian Yu, Xinwei Liu, Avery Ma, Yuan Xun, Anjun Hu, Ashkan Khakzar, Zhijiang Li, Xiaochun Cao, Philip Torr
This survey explores the landscape of the adversarial transferability of adversarial examples.
no code implementations • 10 Oct 2023 • Yang Zhang, Yawei Li, Hannah Brown, Mina Rezaei, Bernd Bischl, Philip Torr, Ashkan Khakzar, Kenji Kawaguchi
Feature attribution explains neural network outputs by identifying relevant input features.
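As a concrete instance of feature attribution, here is a minimal gradient-times-input sketch on a hand-written two-layer ReLU network with analytic gradients. The architecture and data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights (no biases)
W2 = rng.normal(size=(3,))     # hidden -> scalar output weights

def forward(x):
    h = np.maximum(0.0, x @ W1)    # ReLU hidden layer
    return h, h @ W2

def grad_times_input(x):
    """Attribution a_i = x_i * d(output)/d(x_i), computed analytically."""
    h, _ = forward(x)
    relu_mask = (h > 0).astype(float)   # ReLU derivative
    dy_dx = W1 @ (relu_mask * W2)       # chain rule back to the input
    return x * dy_dx

x = rng.normal(size=4)
attr = grad_times_input(x)
print(attr)
```

For a bias-free piecewise-linear network like this one, the attributions sum exactly to the output (Euler's theorem for 1-homogeneous functions), which makes a handy sanity check.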
no code implementations • 17 Aug 2023 • Yawei Li, Yang Zhang, Kenji Kawaguchi, Ashkan Khakzar, Bernd Bischl, Mina Rezaei
We apply these metrics to mainstream attribution methods, offering a novel lens through which to analyze and compare feature attribution methods.
1 code implementation • 15 Mar 2023 • Ario Sadafi, Oleksandra Adonkina, Ashkan Khakzar, Peter Lienemann, Rudolf Matthias Hehr, Daniel Rueckert, Nassir Navab, Carsten Marr
Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making.
1 code implementation • 15 Jul 2022 • Matan Atad, Vitalii Dmytrenko, Yitong Li, Xinyue Zhang, Matthias Keicher, Jan Kirschke, Bene Wiestler, Ashkan Khakzar, Nassir Navab
Deep learning models used in medical image analysis are prone to raising reliability concerns due to their black-box nature.
no code implementations • 4 Apr 2022 • Ashkan Khakzar, Yawei Li, Yang Zhang, Mirac Sanisoglu, Seong Tae Kim, Mina Rezaei, Bernd Bischl, Nassir Navab
One challenging property of medical datasets is imbalanced data distribution, where samples are unevenly spread across the different classes.

1 code implementation • 30 Mar 2022 • Paul Engstler, Matthias Keicher, David Schinz, Kristina Mach, Alexandra S. Gersing, Sarah C. Foreman, Sophia S. Goller, Juergen Weissinger, Jon Rischewski, Anna-Sophia Dietrich, Benedikt Wiestler, Jan S. Kirschke, Ashkan Khakzar, Nassir Navab
Do black-box neural network models learn clinically relevant features for fracture diagnosis?
no code implementations • 29 Mar 2022 • Matthias Keicher, Kamilia Zaripova, Tobias Czempiel, Kristina Mach, Ashkan Khakzar, Nassir Navab
The automation of chest X-ray reporting has garnered significant interest due to the time-consuming nature of the task.
no code implementations • CVPR 2022 • Ashkan Khakzar, Pedram Khorsandi, Rozhin Nobahari, Nassir Navab
It is a mystery which input features contribute to a neural network's output.
1 code implementation • NeurIPS 2021 • Yang Zhang, Ashkan Khakzar, Yawei Li, Azade Farshad, Seong Tae Kim, Nassir Navab
We propose a method to identify features with predictive information in the input domain.
1 code implementation • 4 Apr 2021 • Ashkan Khakzar, Sabrina Musatian, Jonas Buchberger, Icxel Valeriano Quiroz, Nikolaus Pinger, Soroosh Baselizadeh, Seong Tae Kim, Nassir Navab
We present our findings using publicly available chest pathology datasets (CheXpert, NIH ChestX-ray8) and COVID-19 datasets (BrixIA and the COVID-19 chest X-ray segmentation dataset).
1 code implementation • 1 Apr 2021 • Ashkan Khakzar, Yang Zhang, Wejdene Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays.
2 code implementations • CVPR 2021 • Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab
Is critical input information encoded in specific sparse pathways within the neural network?
1 code implementation • 12 Mar 2021 • Seong Tae Kim, Leili Goli, Magdalini Paschali, Ashkan Khakzar, Matthias Keicher, Tobias Czempiel, Egon Burian, Rickmer Braren, Nassir Navab, Thomas Wendler
Chest computed tomography (CT) has played an essential diagnostic role in assessing patients with COVID-19 by showing disease-specific image features such as ground-glass opacity and consolidation.
no code implementations • 1 Jan 2021 • Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab
Is critical input information encoded in specific sparse paths within the network?
no code implementations • 1 Dec 2020 • Ashkan Khakzar, Soroosh Baselizadeh, Nassir Navab
In this work, we empirically show that two approaches for handling the gradient information, namely positive aggregation and positive propagation, break these methods.
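A toy illustration of why positive aggregation is problematic: summing only the positive parts of per-unit gradient contributions discards sign information, so a feature whose net effect suppresses the output can still appear positively relevant. The three "channel contributions" below are made-up numbers for illustration.

```python
import numpy as np

# Contributions of one input feature routed through three channels:
# two small excitatory paths and one large suppressive path.
contributions = np.array([0.2, 0.3, -1.0])

# Full aggregation keeps the signs: the net effect is suppressive.
full_aggregation = contributions.sum()

# Positive aggregation clips negatives before summing: the same
# feature now looks excitatory, inverting its apparent role.
positive_aggregation = np.clip(contributions, 0.0, None).sum()

print(full_aggregation, positive_aggregation)
```

The sign flip (net -0.5 vs. +0.5) is the failure mode in miniature: aggregation choices, not the model, determine the attribution's sign.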
1 code implementation • 7 Apr 2020 • Stefan Denner, Ashkan Khakzar, Moiz Sajid, Mahdi Saleh, Ziga Spiclin, Seong Tae Kim, Nassir Navab
Our results show that spatio-temporal information in longitudinal data is a beneficial cue for improving segmentation.
no code implementations • 25 Nov 2019 • Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab
Attributing the output of a neural network to the contribution of given input elements is a way of shedding light on the black-box nature of neural networks.
no code implementations • 9 May 2019 • Ashkan Khakzar, Shadi Albarqouni, Nassir Navab
In this work, we propose a method for improving the feature interpretability of neural network classifiers.