1 code implementation • 11 Jul 2022 • Yaniv Nemcovsky, Matan Jacoby, Alex M. Bronstein, Chaim Baskin
While such perturbations are usually discussed as tailored to a specific input, a universal perturbation can be constructed to alter the model's output on a set of inputs.
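The idea of a universal perturbation — one shared delta crafted to affect many inputs at once — can be sketched as signed-gradient ascent on the loss averaged over the input set. This is a minimal illustration, not the paper's algorithm; `grad_fn` is a hypothetical callable returning the loss gradient with respect to the input, and the L-infinity clipping radius `eps` is an assumed budget.

```python
import numpy as np

def universal_perturbation(grad_fn, inputs, eps=0.1, steps=10, lr=0.02):
    """Craft a single perturbation delta shared across all inputs.

    grad_fn(x) returns the gradient of the model's loss w.r.t. x.
    delta is built by signed-gradient ascent on the average loss,
    clipped to the L-infinity ball of radius eps after each step.
    """
    delta = np.zeros_like(inputs[0])
    for _ in range(steps):
        # average the loss gradient over the whole input set
        g = np.mean([grad_fn(x + delta) for x in inputs], axis=0)
        delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return delta
```

Because delta is optimized against the averaged gradient rather than any single example, the same vector degrades the model on the whole set.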
no code implementations • 4 Mar 2020 • Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, Avi Mendelson, Alex M. Bronstein
Although deep learning has achieved unmatched performance on various tasks, neural networks remain vulnerable to small adversarial perturbations of the input that cause significant performance degradation.
no code implementations • 23 Feb 2020 • Yossi Adi, Yaniv Nemcovsky, Alex Schwing, Tamir Hazan
Generalization bounds, which assess the difference between the true risk and the empirical risk, have been studied extensively.
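As an illustration of the kind of bound meant here (not the paper's result), the classical Hoeffding-plus-union-bound argument for a finite hypothesis class $\mathcal{H}$ and a loss bounded in $[0,1]$ gives, for $n$ i.i.d. samples and any $\delta \in (0,1)$:

\[
R(h) \;\le\; \widehat{R}_n(h) \;+\; \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2n}}
\qquad \text{for all } h \in \mathcal{H},\ \text{with probability at least } 1-\delta,
\]

where $R(h)$ is the true risk and $\widehat{R}_n(h)$ the empirical risk. Such bounds quantify how far the observed training error can stray from the error on unseen data.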
2 code implementations • 17 Nov 2019 • Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, Alex M. Bronstein, Avi Mendelson
In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks.
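Randomized smoothing replaces a base classifier's prediction with the majority vote of its predictions on Gaussian-noised copies of the input. A minimal sketch, assuming `model` maps a batch of inputs to integer class labels (the noise scale `sigma` and sample count are illustrative choices, not values from the paper):

```python
import numpy as np

def smoothed_predict(model, x, sigma=0.25, n_samples=100, rng=None):
    """Classify x by majority vote of model over Gaussian-noised copies.

    model maps a batch of inputs (n_samples, *x.shape) to integer labels.
    """
    rng = np.random.default_rng(rng)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    labels = model(x[None] + noise)           # predict on the noisy copies
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]          # most frequent class wins

# toy "model": class 1 iff the mean of the input is positive
toy_model = lambda batch: (batch.mean(axis=1) > 0).astype(int)
smoothed_predict(toy_model, np.array([0.5, 0.7, 0.9]), sigma=0.1)  # → 1
```

Averaging over noise makes the decision depend on a neighborhood of the input rather than a single point, which is what yields the robustness to small adversarial perturbations studied in the paper.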