no code implementations • 13 Dec 2024 • Runtao Liu, Chen I Chieh, Jindong Gu, Jipeng Zhang, Renjie Pi, Qifeng Chen, Philip Torr, Ashkan Khakzar, Fabio Pizzati
Using a custom DPO strategy and this dataset, we train safety experts, in the form of low-rank adaptation (LoRA) matrices, able to guide the generation process away from specific safety-related concepts.
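A minimal sketch of the underlying idea, assuming a PyTorch setup: a LoRA adapter on a frozen layer plays the role of the safety expert, and a DPO-style preference loss (here using denoising errors as likelihood proxies, a simplification) steers it toward safe generations. This is an illustration, not the authors' implementation; the module names, rank, and beta are hypothetical.

```python
# Hedged sketch: a LoRA "safety expert" trained with a DPO-style preference loss.
# Rank, beta, and the error-based likelihood proxy are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (the safety expert)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + F.linear(x, self.lora_b @ self.lora_a) * self.scale

def dpo_preference_loss(err_safe, err_unsafe, err_safe_ref, err_unsafe_ref, beta=0.1):
    """DPO-style loss with per-sample denoising errors as (negative) log-likelihood
    proxies: push the adapted model to prefer safe over unsafe generations,
    relative to the frozen reference model."""
    adapted_margin = -(err_safe - err_unsafe)       # higher = adapted model prefers safe
    reference_margin = -(err_safe_ref - err_unsafe_ref)
    return -F.logsigmoid(beta * (adapted_margin - reference_margin)).mean()
```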
no code implementations • 3 Dec 2024 • Yang Zhang, Er Jin, Yanfei Dong, Ashkan Khakzar, Philip Torr, Johannes Stegmaier, Kenji Kawaguchi
Diffusion models have achieved impressive advancements in various vision tasks.
1 code implementation • 9 Nov 2024 • Arshia Hemmat, Adam Davies, Tom A. Lamb, Jianhao Yuan, Philip Torr, Ashkan Khakzar, Francesco Pinto
Despite the importance of shape perception in human vision, early neural image classifiers relied less on shape information for object recognition than on other (often spurious) features.
no code implementations • 9 Oct 2024 • Michael Lan, Philip Torr, Austin Meek, Ashkan Khakzar, David Krueger, Fazl Barez
We investigate feature universality in large language models (LLMs), a research field that aims to understand how different models similarly represent concepts in the latent spaces of their intermediate layers.
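Representation comparisons of this kind are often quantified with similarity indices such as linear centered kernel alignment (CKA); the sketch below shows that generic measurement and is not the paper's specific analysis pipeline.

```python
# Hedged sketch: linear CKA between hidden states of two models on the same inputs,
# one common way to quantify representational similarity across models.
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """x: (n_samples, d1), y: (n_samples, d2) activations for the same n inputs."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    hsic_xy = (x.T @ y).norm() ** 2       # ||X^T Y||_F^2
    hsic_xx = (x.T @ x).norm() ** 2
    hsic_yy = (y.T @ y).norm() ** 2
    return hsic_xy / (hsic_xx.sqrt() * hsic_yy.sqrt())

# Usage: collect layer-l activations from two LLMs on identical prompts, then
# similarity = linear_cka(acts_model_a, acts_model_b)
```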
no code implementations • 11 Aug 2024 • Adam Davies, Ashkan Khakzar
Artificial neural networks have long been understood as "black boxes": though we know their computation graphs and learned parameters, the knowledge encoded by these weights and the functions they perform are not inherently interpretable.
no code implementations • 5 Jun 2024 • Razieh Rezaei, Masoud Jalili Sabet, Jindong Gu, Daniel Rueckert, Philip Torr, Ashkan Khakzar
The learned visual prompt, added to any input image, redirects the attention of the pre-trained vision transformer to its spatial location on the image.
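A hedged sketch of how such a prompt could be optimized against a frozen vision transformer; the patch region, the attention layer read out via a user-registered hook, and the objective are illustrative assumptions rather than the paper's exact recipe.

```python
# Hedged sketch: optimize an additive visual prompt so that a frozen ViT's [CLS]
# attention concentrates on the prompt's spatial location. Region, hooked layer,
# and objective are assumptions for illustration.
import torch

def paste_prompt(images, prompt, region):
    """Add a learnable prompt tensor onto a fixed spatial region of each image."""
    y0, y1, x0, x1 = region
    prompted = images.clone()
    prompted[:, :, y0:y1, x0:x1] = prompted[:, :, y0:y1, x0:x1] + prompt
    return prompted

def attention_redirection_loss(attn, prompt_token_ids):
    """attn: (batch, heads, tokens, tokens) from a hooked transformer block.
    Maximize the share of [CLS] attention landing on the prompted patch tokens."""
    cls_to_patches = attn[:, :, 0, 1:]                        # CLS row, patch columns
    mass_on_prompt = cls_to_patches[:, :, prompt_token_ids].sum(-1)
    return -mass_on_prompt.mean()

# Usage: prompted = paste_prompt(images, prompt, region); run the frozen ViT with an
# attention hook to obtain attn; backpropagate the loss into the prompt tensor only.
```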
1 code implementation • 11 Apr 2024 • Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip Torr, Fabio Pizzati
Hence, we propose Latent Guard, a framework designed to improve safety measures in text-to-image generation.
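The core mechanism can be sketched as a similarity check between a prompt embedding and a blacklist of unsafe-concept embeddings in a shared latent space; the encoder and threshold below are placeholders, not the released implementation.

```python
# Hedged sketch: flag prompts whose embedding is too close to any blacklisted
# concept embedding in a learned latent space. Threshold is illustrative.
import torch
import torch.nn.functional as F

def is_prompt_unsafe(prompt_emb: torch.Tensor, concept_embs: torch.Tensor,
                     threshold: float = 0.75) -> bool:
    """prompt_emb: (d,); concept_embs: (n_concepts, d), from the same encoder."""
    sims = F.cosine_similarity(prompt_emb.unsqueeze(0), concept_embs, dim=-1)
    return bool((sims > threshold).any())
```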
1 code implementation • 1 Jan 2024 • Razieh Rezaei, Alireza Dizaji, Ashkan Khakzar, Anees Kazi, Nassir Navab, Daniel Rueckert
In this work, we assess attribution methods from a perspective not previously explored in the graph domain: retraining.
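A remove-and-retrain style evaluation (in the spirit of ROAR) is one way to make this concrete: occlude the most-attributed features, retrain from scratch, and measure the accuracy drop. The sketch below is generic, with user-supplied training and masking helpers, and does not reproduce the paper's graph-specific protocol.

```python
# Hedged sketch of a remove-and-retrain benchmark for attribution methods.
# `train_model`, `evaluate`, and `mask_top_attributed` are placeholders the reader
# supplies for their own (graph) dataset.

def retraining_benchmark(dataset, attributions, fractions=(0.1, 0.3, 0.5),
                         train_model=None, evaluate=None, mask_top_attributed=None):
    """attributions[i] scores the features of sample i; larger = more important."""
    baseline = evaluate(train_model(dataset), dataset)
    drops = {}
    for frac in fractions:
        masked = [mask_top_attributed(x, a, frac) for x, a in zip(dataset, attributions)]
        retrained = train_model(masked)
        drops[frac] = baseline - evaluate(retrained, masked)
    return drops  # larger drops at small fractions suggest more faithful attributions
```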
1 code implementation • 26 Oct 2023 • Jindong Gu, Xiaojun Jia, Pau de Jorge, Wenqian Yu, Xinwei Liu, Avery Ma, Yuan Xun, Anjun Hu, Ashkan Khakzar, Zhijiang Li, Xiaochun Cao, Philip Torr
This survey explores the landscape of the adversarial transferability of adversarial examples.
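Transferability is typically measured by crafting an attack on a surrogate model and testing it on an unseen target; a minimal FGSM-based sketch (with an illustrative epsilon) follows.

```python
# Hedged sketch: craft an FGSM perturbation on a surrogate model and check whether
# it also fools a different target model. Epsilon and the models are illustrative.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def transfer_success_rate(surrogate, target, x, y, eps=8 / 255):
    x_adv = fgsm(surrogate, x, y, eps)          # attack crafted on the surrogate
    preds = target(x_adv).argmax(dim=1)         # evaluated on the unseen target
    return (preds != y).float().mean().item()
```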
no code implementations • 10 Oct 2023 • Yang Zhang, Yawei Li, Hannah Brown, Mina Rezaei, Bernd Bischl, Philip Torr, Ashkan Khakzar, Kenji Kawaguchi
Feature attribution explains neural network outputs by identifying relevant input features.
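As a minimal illustration of what this means in practice, gradient times input is one of the simplest attribution methods; the sketch below is background, not the method proposed in the paper.

```python
# Hedged sketch: gradient * input attribution, shown only to illustrate
# "identifying relevant input features".
import torch

def gradient_times_input(model, x, target_class):
    """Returns a per-feature relevance map with the same shape as x."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[:, target_class].sum()
    score.backward()
    return (x.grad * x).detach()
```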
1 code implementation • 17 Aug 2023 • Yawei Li, Yang Zhang, Kenji Kawaguchi, Ashkan Khakzar, Bernd Bischl, Mina Rezaei
We apply these metrics to mainstream attribution methods, offering a novel lens through which to analyze and compare them.
1 code implementation • 15 Mar 2023 • Ario Sadafi, Oleksandra Adonkina, Ashkan Khakzar, Peter Lienemann, Rudolf Matthias Hehr, Daniel Rueckert, Nassir Navab, Carsten Marr
Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making.
1 code implementation • 15 Jul 2022 • Matan Atad, Vitalii Dmytrenko, Yitong Li, Xinyue Zhang, Matthias Keicher, Jan Kirschke, Bene Wiestler, Ashkan Khakzar, Nassir Navab
Deep learning models used in medical image analysis raise reliability concerns due to their black-box nature.
no code implementations • 4 Apr 2022 • Ashkan Khakzar, Yawei Li, Yang Zhang, Mirac Sanisoglu, Seong Tae Kim, Mina Rezaei, Bernd Bischl, Nassir Navab
One challenging property lurking in medical datasets is imbalanced data distribution, where sample frequencies differ widely across classes.
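For background only, a standard counter-measure is inverse-frequency class weighting of the loss; the sketch below illustrates the imbalance issue and is not the analysis carried out in the paper.

```python
# Hedged sketch: inverse-frequency class weighting, a common remedy for imbalanced
# label distributions. Shown as background, not as the paper's method.
import torch
import torch.nn as nn

def make_weighted_criterion(labels: torch.Tensor, num_classes: int) -> nn.Module:
    counts = torch.bincount(labels, minlength=num_classes).float()
    weights = counts.sum() / (num_classes * counts.clamp(min=1.0))  # inverse frequency
    return nn.CrossEntropyLoss(weight=weights)
```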
1 code implementation • 30 Mar 2022 • Paul Engstler, Matthias Keicher, David Schinz, Kristina Mach, Alexandra S. Gersing, Sarah C. Foreman, Sophia S. Goller, Juergen Weissinger, Jon Rischewski, Anna-Sophia Dietrich, Benedikt Wiestler, Jan S. Kirschke, Ashkan Khakzar, Nassir Navab
Do black-box neural network models learn clinically relevant features for fracture diagnosis?
no code implementations • 29 Mar 2022 • Matthias Keicher, Kamilia Zaripova, Tobias Czempiel, Kristina Mach, Ashkan Khakzar, Nassir Navab
The automation of chest X-ray reporting has garnered significant interest due to the time-consuming nature of the task.
no code implementations • CVPR 2022 • Ashkan Khakzar, Pedram Khorsandi, Rozhin Nobahari, Nassir Navab
It is a mystery which input features contribute to a neural network's output.
1 code implementation • NeurIPS 2021 • Yang Zhang, Ashkan Khakzar, Yawei Li, Azade Farshad, Seong Tae Kim, Nassir Navab
We propose a method to identify features with predictive information in the input domain.
1 code implementation • 4 Apr 2021 • Ashkan Khakzar, Sabrina Musatian, Jonas Buchberger, Icxel Valeriano Quiroz, Nikolaus Pinger, Soroosh Baselizadeh, Seong Tae Kim, Nassir Navab
We present our findings using publicly available chest pathologies (CheXpert, NIH ChestX-ray8) and COVID-19 datasets (BrixIA, and COVID-19 chest X-ray segmentation dataset).
1 code implementation • 1 Apr 2021 • Ashkan Khakzar, Yang Zhang, Wejdene Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays.
2 code implementations • CVPR 2021 • Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab
Is critical input information encoded in specific sparse pathways within the neural network?
1 code implementation • 12 Mar 2021 • Seong Tae Kim, Leili Goli, Magdalini Paschali, Ashkan Khakzar, Matthias Keicher, Tobias Czempiel, Egon Burian, Rickmer Braren, Nassir Navab, Thomas Wendler
Chest computed tomography (CT) has played an essential diagnostic role in assessing patients with COVID-19 by showing disease-specific image features such as ground-glass opacity and consolidation.
no code implementations • 1 Jan 2021 • Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab
Is critical input information encoded in specific sparse paths within the network?
no code implementations • 1 Dec 2020 • Ashkan Khakzar, Soroosh Baselizadeh, Nassir Navab
In this work, we empirically show that two approaches for handling the gradient information, namely positive aggregation and positive propagation, break these methods.
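The two operations can be made concrete as follows: positive aggregation keeps only positively contributing evidence when summing gradient-weighted activations (as in Grad-CAM's final ReLU), while positive propagation zeroes negative gradients at ReLUs during the backward pass (as in Guided Backpropagation). The sketch is illustrative, not the paper's code.

```python
# Hedged sketch of the two gradient-handling operations under analysis.
import torch
import torch.nn.functional as F

def positive_aggregation(gradients, activations):
    """Grad-CAM-style map: channel weights from pooled gradients, then a ReLU
    that discards negatively contributing evidence. Shapes: (B, C, H, W)."""
    weights = gradients.mean(dim=(2, 3), keepdim=True)          # (B, C, 1, 1)
    return F.relu((weights * activations).sum(dim=1))           # (B, H, W)

class PositivePropagationReLU(torch.autograd.Function):
    """Guided-backprop-style ReLU: only positive gradients flow backward, and only
    through positions that were active in the forward pass."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output.clamp(min=0) * (x > 0).float()
```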
1 code implementation • 7 Apr 2020 • Stefan Denner, Ashkan Khakzar, Moiz Sajid, Mahdi Saleh, Ziga Spiclin, Seong Tae Kim, Nassir Navab
Our results show that spatio-temporal information in longitudinal data is a beneficial cue for improving segmentation.
no code implementations • 25 Nov 2019 • Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab
Attributing the output of a neural network to the contribution of given input elements is a way of shedding light on the black-box nature of neural networks.
no code implementations • 9 May 2019 • Ashkan Khakzar, Shadi Albarqouni, Nassir Navab
In this work, we propose a method for improving the feature interpretability of neural network classifiers.