no code implementations • 6 Apr 2023 • Jia-Hong Huang, Modar Alfadly, Bernard Ghanem, Marcel Worring
This work proposes a new method that uses semantically related questions, referred to as basic questions, as noise to evaluate the robustness of VQA models.
no code implementations • 21 Jun 2020 • Modar Alfadly, Adel Bibi, Emilio Botero, Salman AlSubaihi, Bernard Ghanem
This has incited research on how DNNs react to noisy input, namely developing adversarial input attacks and strategies that make DNNs robust to such attacks.
no code implementations • ICLR 2020 • Modar Alfadly, Adel Bibi, Muhammed Kocabas, Bernard Ghanem
In this work, we propose a new training regularizer that aims to minimize the probabilistic expected training loss of a DNN subject to a generic Gaussian input.
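The paper derives this expectation analytically; as a rough illustration of the idea only, the sketch below approximates the expected loss under Gaussian input noise by Monte Carlo sampling and uses it as a training regularizer. The model, data, optimizer, and the weight `lam` are placeholders, not components from the paper.

```python
# Minimal sketch: regularize training with the expected loss under Gaussian
# input noise. The paper computes this expectation in closed form; here it is
# approximated by sampling, for illustration only.
import torch
import torch.nn.functional as F

def gaussian_expected_loss(model, x, y, sigma=0.1, n_samples=8):
    """Monte Carlo estimate of E[loss(model(x + eps), y)], eps ~ N(0, sigma^2 I)."""
    losses = []
    for _ in range(n_samples):
        noisy_x = x + sigma * torch.randn_like(x)  # generic Gaussian perturbation
        losses.append(F.cross_entropy(model(noisy_x), y))
    return torch.stack(losses).mean()

def train_step(model, x, y, optimizer, lam=0.5):
    """One training step combining the clean loss with the noise-expected loss."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + lam * gaussian_expected_loss(model, x, y)
    loss.backward()
    optimizer.step()
    return loss.item()
```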
no code implementations • 30 Nov 2019 • Jia-Hong Huang, Modar Alfadly, Bernard Ghanem, Marcel Worring
In this work, we propose a new method that uses semantically related questions, dubbed basic questions, as noise to evaluate the robustness of VQA models.
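A minimal sketch of the evaluation idea follows: append semantically related "basic questions" to the main question as textual noise and measure the resulting accuracy drop. Both `vqa_model` and `basic_questions_for` are hypothetical stand-ins, not the paper's actual components.

```python
# Minimal sketch: measure how much a VQA model's accuracy drops when the top-k
# basic questions are concatenated to the main question as noise.
def robustness_drop(vqa_model, dataset, basic_questions_for, k=3):
    clean_correct, noisy_correct = 0, 0
    for image, question, answer in dataset:
        # Accuracy on the original (main) question.
        clean_correct += int(vqa_model(image, question) == answer)
        # Concatenate the top-k basic questions as noise and re-evaluate.
        noisy_q = " ".join([question] + basic_questions_for(question)[:k])
        noisy_correct += int(vqa_model(image, noisy_q) == answer)
    n = len(dataset)
    return clean_correct / n - noisy_correct / n  # larger drop = less robust
```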
no code implementations • 25 Sep 2019 • Salman AlSubaihi, Adel Bibi, Modar Alfadly, Abdullah Hamdi, Bernard Ghanem
It was recently shown that bounded input intervals can be inexpensively propagated from layer to layer through deep networks.
2 code implementations • 28 May 2019 • Salman Al-Subaihi, Adel Bibi, Modar Alfadly, Abdullah Hamdi, Bernard Ghanem
In this paper, we closely examine the bounds of a block of layers composed in the form of Affine-ReLU-Affine.
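For context, the sketch below propagates interval bounds through an Affine-ReLU-Affine block using plain interval arithmetic; it illustrates the kind of layer-wise bound propagation being examined, not the paper's tighter bounds. The weights and the epsilon-ball around the input are arbitrary placeholders.

```python
# Minimal sketch: naive interval bound propagation through Affine-ReLU-Affine.
import numpy as np

def affine_interval(lower, upper, W, b):
    """Propagate elementwise bounds [lower, upper] through x -> W @ x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def affine_relu_affine_bounds(lower, upper, W1, b1, W2, b2):
    l1, u1 = affine_interval(lower, upper, W1, b1)       # first affine layer
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)    # ReLU is monotone
    return affine_interval(l1, u1, W2, b2)               # second affine layer

# Example with random weights and an epsilon-ball around an input point.
rng = np.random.default_rng(0)
x, eps = rng.normal(size=4), 0.1
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
lo, hi = affine_relu_affine_bounds(x - eps, x + eps, W1, b1, W2, b2)
```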
1 code implementation • 24 Apr 2019 • Modar Alfadly, Adel Bibi, Bernard Ghanem
Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit behaviours that are not yet well understood.
no code implementations • CVPR 2018 • Adel Bibi, Modar Alfadly, Bernard Ghanem
Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks.
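The attacks in the paper are constructed from its analytic moment expressions; purely to illustrate the distinction between targeted and non-targeted attacks, here is a generic one-step gradient (FGSM-style) sketch that does not use those expressions.

```python
# Generic one-step gradient attack sketch (not the paper's analytic construction).
import torch
import torch.nn.functional as F

def one_step_attack(model, x, label, eps=0.03, targeted=False):
    """Non-targeted: increase the loss on the true label.
    Targeted: decrease the loss on a desired target label."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    direction = -x.grad.sign() if targeted else x.grad.sign()
    return (x + eps * direction).detach()
```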
no code implementations • 16 Nov 2017 • Jia-Hong Huang, Cuong Duc Dao, Modar Alfadly, Bernard Ghanem
In VQA, adversarial attacks can target the image and/or the proposed main question, yet there is a lack of proper analysis of the latter.
no code implementations • 14 Sep 2017 • Jia-Hong Huang, Cuong Duc Dao, Modar Alfadly, C. Huck Yang, Bernard Ghanem
Visual Question Answering (VQA) models should have both high robustness and accuracy.
no code implementations • 19 Mar 2017 • Jia-Hong Huang, Modar Alfadly, Bernard Ghanem
Given a natural language question about an image, the first module takes the question as input and outputs the basic questions of the given main question.
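As a rough sketch of what such a first module does, the snippet below ranks a pool of candidate questions by semantic similarity to the main question and returns the top ones as basic questions. The `embed` sentence-embedding function and the candidate pool are hypothetical; the paper's actual ranking procedure may differ.

```python
# Minimal sketch: rank candidate questions by cosine similarity to the main
# question and return the top-k as "basic questions".
import numpy as np

def basic_questions(main_question, candidate_pool, embed, top_k=3):
    q = embed(main_question)  # embedding vector for the main question
    scored = []
    for cand in candidate_pool:
        c = embed(cand)
        cos = float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c) + 1e-12))
        scored.append((cos, cand))
    scored.sort(reverse=True)  # highest similarity first
    return [cand for _, cand in scored[:top_k]]
```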