Search Results for author: Or Gorodissky

Found 1 paper, 1 paper with code

White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks

1 code implementation · NAACL 2019 · Yotam Gil, Yoav Chai, Or Gorodissky, Jonathan Berant

Adversarial examples are important for understanding the behavior of neural models, and can improve their robustness through adversarial training.
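To illustrate what an adversarial example is (the paper itself attacks text models; this is only a generic sketch), here is a minimal Fast Gradient Sign Method (FGSM) perturbation of a toy linear classifier. All names and values are illustrative, not from the paper.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(x, y):
    # Cross-entropy loss gradient w.r.t. the input for this linear model:
    # dL/dx = (sigmoid(w.x + b) - y) * w
    return (sigmoid(w @ x + b) - y) * w

def fgsm(x, y, eps=0.25):
    # FGSM: take a small step in the sign of the loss gradient,
    # i.e. the direction that most increases the loss per component.
    return x + eps * np.sign(grad_loss_wrt_x(x, y))

x = np.array([1.0, 1.0])  # correctly classified as y=1 (w.x + b = 1 > 0)
y = 1
x_adv = fgsm(x, y)
print(sigmoid(w @ x + b))      # original confidence in the true label
print(sigmoid(w @ x_adv + b))  # lower confidence after the perturbation
```

Adversarial training then mixes such perturbed inputs back into the training set so the model learns to resist them.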

Task: Efficient Neural Network
