no code implementations • 18 Nov 2020 • Arezoo Rajabi, Rakesh B. Bobba
Here, we propose a method to detect adversarial and out-distribution examples against a pre-trained CNN, without retraining the CNN and without access to a wide variety of fooling examples.
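A common retraining-free baseline for this kind of detection is to threshold the network's peak softmax confidence: adversarial and out-distribution inputs often receive lower peak confidence than clean in-distribution ones. The sketch below illustrates that idea only; the paper's actual detector is more involved, and the 0.9 threshold is an arbitrary choice for illustration.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_suspicious(logits, threshold=0.9):
    """Flag inputs whose maximum softmax probability falls below `threshold`.

    Illustrative baseline: no retraining of the CNN is needed, only its
    output logits. The threshold value is a hypothetical choice.
    """
    conf = softmax(logits).max(axis=-1)
    return conf < threshold
```

A confident prediction (one dominant logit) passes, while a near-uniform output is flagged for rejection.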
no code implementations • 17 May 2020 • Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba
Using MNIST and CIFAR-10, we empirically verify the ability of our ensemble to detect a large portion of well-known black-box adversarial examples, which leads to a significant reduction in the risk rate of adversaries, at the expense of a small increase in the risk rate of clean samples.
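The trade-off described above (lower adversarial risk at the cost of slightly higher risk on clean samples) arises naturally from ensemble voting with rejection. The following is a minimal sketch of one such rule, majority agreement across members; it is illustrative and not necessarily the exact voting scheme used in the paper.

```python
import numpy as np

def ensemble_reject(member_preds):
    """Unanimous-vote classification with rejection.

    `member_preds`: array of shape (n_members, n_samples) holding each
    ensemble member's predicted class label per sample. A sample is
    accepted only when all members agree; disagreement (typical for
    black-box adversarial examples) leads to rejection, encoded as -1.
    Returns (labels, rejected_mask).
    """
    member_preds = np.asarray(member_preds)
    agree = (member_preds == member_preds[0]).all(axis=0)
    labels = np.where(agree, member_preds[0], -1)
    return labels, ~agree
```

Stricter agreement rules reject more adversarial inputs but also reject some clean samples on which the members legitimately differ, which is the risk trade-off the abstract reports.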
no code implementations • ICLR 2019 • Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné
As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.
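The second resource mentioned, interpolated in-distribution samples, can be generated cheaply by convexly mixing pairs of training inputs and assigning the mixtures to the extra (rejection) class. A minimal sketch follows; the mixing-coefficient range is an illustrative assumption, not a value taken from the paper.

```python
import numpy as np

def interpolate_in_distribution(x, rng=None, alpha_low=0.4, alpha_high=0.6):
    """Create extra-class training inputs by convex interpolation.

    Each output is lam * x[i] + (1 - lam) * x[j] for a random pairing
    of in-distribution samples, with lam drawn near 0.5 so mixtures sit
    between classes. The (0.4, 0.6) range is a hypothetical choice.
    `x` is a 2-D array of shape (n_samples, n_features).
    """
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(x))            # random pairing partner
    lam = rng.uniform(alpha_low, alpha_high, size=(len(x), 1))
    return lam * x + (1 - lam) * x[idx]
```

These mixtures, together with a natural out-distribution set, give the extra class inexpensive training data without requiring any adversarial examples.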
no code implementations • 21 Aug 2018 • Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné
As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.
no code implementations • 24 Apr 2018 • Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba
Detection and rejection of adversarial examples in security sensitive and safety-critical systems using deep CNNs is essential.