1 code implementation • 10 Feb 2020 • Muhammad Yaseen, Muneeb Aadil, Maria Sargsyan
Since 2014, when Szegedy et al. showed that carefully designed perturbations of the input can lead Deep Neural Networks (DNNs) to misclassify it, there has been ongoing research into making DNNs more robust to such malicious perturbations.
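To make the idea concrete, here is a minimal sketch of such a perturbation on a toy logistic-regression classifier, using the fast gradient sign method of Goodfellow et al. (a follow-up to Szegedy et al.'s work, not necessarily the attack this paper studies); the weights, input, and step size are hypothetical values chosen purely for illustration.

```python
import numpy as np

# Toy linear classifier: predict class 1 if w.x + b > 0
# (hypothetical weights, for illustration only)
w = np.array([2.0, -3.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps):
    """FGSM-style perturbation: step by eps in the direction of the
    sign of the loss gradient with respect to the input."""
    z = w @ x + b
    grad = (sigmoid(z) - y) * w   # d(cross-entropy)/dx for logistic loss
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.1])          # clean input, classified as 1
x_adv = fgsm(x, y=1, eps=0.2)     # small perturbation flips the prediction
print(predict(x), predict(x_adv))  # → 1 0
```

Even though each coordinate of the input moves by only 0.2, the classifier's decision flips, which is exactly the fragility that robustness research tries to remove.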