Search Results for author: Yaniv Nemcovsky

Found 4 papers, 2 papers with code

Physical Passive Patch Adversarial Attacks on Visual Odometry Systems

1 code implementation · 11 Jul 2022 · Yaniv Nemcovsky, Matan Jacoby, Alex M. Bronstein, Chaim Baskin

While adversarial perturbations are usually discussed as tailored to a specific input, a universal perturbation can be constructed to alter the model's output on a whole set of inputs.

Autonomous Navigation · Drone navigation · +1
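The universal-perturbation idea above can be sketched in a few lines: ascend the loss summed over the input set and project the shared perturbation back onto an L∞ ball. This is a generic illustration, not the paper's optimization; the `loss_grad` callback and all parameter names are assumptions.

```python
import numpy as np

def universal_perturbation(inputs, loss_grad, eps=0.1, step=0.01, iters=50):
    """Build one perturbation `delta` that degrades the model on a whole
    set of inputs (hypothetical sketch): ascend the summed loss gradient,
    then project back onto the L-infinity ball of radius `eps`.

    `loss_grad(x)` is an assumed callback returning dLoss/dx at x.
    """
    delta = np.zeros_like(inputs[0])
    for _ in range(iters):
        # Shared gradient: the same delta must hurt every input in the set.
        g = sum(loss_grad(x + delta) for x in inputs)
        delta += step * np.sign(g)        # FGSM-style sign ascent step
        delta = np.clip(delta, -eps, eps)  # project onto the eps-ball
    return delta
```

The sign-step-plus-projection loop mirrors standard iterative L∞ attacks (PGD-style), applied to a batch instead of a single input.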

Colored Noise Injection for Training Adversarially Robust Neural Networks

no code implementations · 4 Mar 2020 · Evgenii Zheltonozhskii, Chaim Baskin, Yaniv Nemcovsky, Brian Chmiel, Avi Mendelson, Alex M. Bronstein

Even though deep learning has shown unmatched performance on various tasks, neural networks have been shown to be vulnerable to small adversarial perturbations of the input that lead to significant performance degradation.

On the generalization of Bayesian deep nets for multi-class classification

no code implementations · 23 Feb 2020 · Yossi Adi, Yaniv Nemcovsky, Alex Schwing, Tamir Hazan

Generalization bounds which assess the difference between the true risk and the empirical risk have been studied extensively.

General Classification · Generalization Bounds · +1
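For context on the kind of bound the paper studies, a representative member of this family is the McAllester-style PAC-Bayes bound (a standard textbook form, not the paper's own result): with probability at least $1-\delta$ over a sample of size $m$, for every posterior $Q$ over hypotheses and fixed prior $P$,

```latex
\mathbb{E}_{h\sim Q}\big[L(h)\big]
\;\le\;
\mathbb{E}_{h\sim Q}\big[\hat{L}(h)\big]
+ \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{2(m-1)}}
```

where $L$ is the true risk and $\hat{L}$ the empirical risk, matching the true-risk/empirical-risk gap described above.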

Smoothed Inference for Adversarially-Trained Models

2 code implementations · 17 Nov 2019 · Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, Alex M. Bronstein, Avi Mendelson

In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks.

Adversarial Defense
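At inference time, randomized smoothing in its basic form classifies by majority vote over Gaussian-perturbed copies of the input. A minimal sketch of that vote (generic smoothing, not this paper's specific inference scheme; `base_classifier` and all parameters are assumptions):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, rng=None):
    """Classify `x` by majority vote of `base_classifier` over
    Gaussian-perturbed copies of the input (basic randomized smoothing).

    `base_classifier(x)` is an assumed callback returning a class label.
    """
    rng = np.random.default_rng(rng)
    # Draw n_samples i.i.d. Gaussian noise vectors with std sigma.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    preds = [base_classifier(x + n) for n in noise]
    # Majority vote over the noisy predictions.
    values, counts = np.unique(preds, return_counts=True)
    return values[np.argmax(counts)]
```

Larger `sigma` trades clean accuracy for certified robustness radius, which is the tension the paper's abstract points at.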
