1 code implementation • 26 Feb 2024 • Saeed Khorram, Mingqi Jiang, Mohamad Shahbazi, Mohamad H. Danesh, Li Fuxin
In the presence of imbalanced multi-class training data, GANs tend to favor classes with more samples, leading to the generation of low-quality and less diverse samples in tail classes.
no code implementations • 13 Dec 2022 • Mingqi Jiang, Saeed Khorram, Li Fuxin
To better understand how different visual recognition backbones make decisions, we propose a methodology that systematically applies deep explanation algorithms across an entire dataset and compares statistics on the amount and nature of the resulting explanations, yielding insights into the decision-making of different models.
no code implementations • CVPR 2022 • Saeed Khorram, Li Fuxin
CounterFactual (CF) visual explanations try to find images similar to the query image that change the decision of a vision system to a specified outcome.
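The core idea — perturb a query input until the classifier's decision flips to a chosen target class, while staying close to the query — can be sketched on a toy model. This is only an illustrative stand-in: the random linear "classifier", the learning rate, and the L2 closeness penalty are assumptions for the demo, not the method or vision system from the paper.

```python
import numpy as np

# Toy counterfactual search: gradient steps flip a linear classifier's
# decision to a target class; an L2 pull keeps x close to the query.
# W, lr, and lam are illustrative choices, not taken from the paper.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))              # 3 classes, 8 "pixel" features

def predict(v):
    return int(np.argmax(W @ v))

query = rng.normal(size=8)
target_class = (predict(query) + 1) % 3  # any class other than the current one

x = query.copy()
lr, lam = 0.1, 0.05
for _ in range(500):
    current = predict(x)
    if current == target_class:
        break
    # Push the target-class logit above the currently winning logit,
    # while pulling x back toward the query so it stays "similar".
    grad = W[target_class] - W[current]
    x = x + lr * (grad - lam * (x - query))
```

On a real vision system the gradient would come from backpropagation through the network rather than a closed-form linear model.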
no code implementations • 13 Sep 2021 • Li Fuxin, Zhongang Qi, Saeed Khorram, Vivswan Shitole, Prasad Tadepalli, Minsuk Kahng, Alan Fern
This paper summarizes our endeavors in the past few years in terms of explaining image classifiers, with the aim of including negative results and insights we have gained.
no code implementations • 1 May 2021 • Saeed Khorram, Xiao Fu, Mohamad H. Danesh, Zhongang Qi, Li Fuxin
We prove the convergence of our proposed method and justify its capabilities through experiments in supervised and weakly-supervised settings.
2 code implementations • 31 Dec 2020 • Saeed Khorram, Tyler Lawson, Fuxin Li
In this paper, we present iGOS++, a framework to generate saliency maps that are optimized for altering the output of the black-box system by either removing or preserving only a small fraction of the input.
1 code implementation • 6 Jun 2020 • Mohamad H. Danesh, Anurag Koul, Alan Fern, Saeed Khorram
We introduce an approach for understanding control policies represented as recurrent neural networks.
no code implementations • 10 Apr 2020 • Mohamadreza Jafaryani, Saeed Khorram, Vahid Pourahmadi, Minoo Shahbazi
Such sleep disorders can usually be detected by analyzing a number of vital signals collected from patients.
1 code implementation • 2 May 2019 • Zhongang Qi, Saeed Khorram, Li Fuxin
Understanding and interpreting the decisions made by deep learning models is valuable in many domains.
no code implementations • 15 Sep 2017 • Zhongang Qi, Saeed Khorram, Fuxin Li
The XNN works by learning a nonlinear embedding of a high-dimensional activation vector of a deep network layer into a low-dimensional explanation space while retaining faithfulness, i.e., the original deep learning predictions can be reconstructed from the few concepts extracted by our explanation network.
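The faithfulness constraint — predictions must be recoverable from a few concepts — can be sketched with a purely linear stand-in. The actual XNN learns the embedding nonlinearly end-to-end; the PCA-plus-regression pipeline and the synthetic low-rank activations below are assumptions for the demo only.

```python
import numpy as np

# Linear sketch of faithfulness: project activations to k "concepts" and
# check that the model's predictions are reconstructable from them alone.
# Synthetic rank-k activations make the demo work; real activations need
# the learned nonlinear embedding from the paper.
rng = np.random.default_rng(2)
n, d, k = 200, 64, 3                      # samples, activation dim, #concepts

Z = rng.normal(size=(n, k))               # hidden factors behind the activations
A = Z @ rng.normal(size=(k, d))           # toy activation vectors (rank k)
y = A @ rng.normal(size=d)                # toy "deep model predictions"

# Embedding: top-k principal directions of the activations.
_, _, Vt = np.linalg.svd(A - A.mean(0), full_matrices=False)
E = Vt[:k].T                              # d x k projection into concept space
concepts = A @ E                          # n x k explanation space

# Faithfulness check: regress the predictions from the k concepts alone.
coef, *_ = np.linalg.lstsq(concepts, y, rcond=None)
recon = concepts @ coef
r2 = 1.0 - np.sum((y - recon) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the toy activations truly live in a k-dimensional subspace, the reconstruction is near-perfect; a high R² here is what the faithfulness requirement demands of the learned embedding.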