Search Results for author: Bryan Parno

Found 3 papers, 1 paper with code

Degradation Attacks on Certifiably Robust Neural Networks

no code implementations · 29 Sep 2021 · Klas Leino, Chi Zhang, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Păsăreanu

Certifiably robust neural networks employ provable run-time defenses against adversarial examples by checking if the model is locally robust at the input under evaluation.
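As a hedged illustration of such a run-time defense: the sketch below certifies and predicts with a *linear* classifier, where the exact distance from an input to each pairwise decision boundary can be computed in closed form, so the model can abstain whenever it is not provably robust within the given radius. (This toy check is exact only for linear models; the names `certified_predict`, `W`, `b` are illustrative, not from the paper, and deep networks require the kind of certifiers the paper studies.)

```python
import numpy as np

def certified_predict(W, b, x, eps):
    """Run-time certified defense, sketched for a linear classifier.

    Returns the predicted class if the model is provably locally robust
    within an l2-ball of radius eps around x, and None (abstain) otherwise.
    For scores s = W @ x + b, the exact distance from x to the decision
    boundary between classes i and j is (s[i] - s[j]) / ||W[i] - W[j]||.
    """
    s = W @ x + b
    i = int(np.argmax(s))
    for j in range(len(s)):
        if j == i:
            continue
        margin = (s[i] - s[j]) / np.linalg.norm(W[i] - W[j])
        if margin < eps:
            return None  # not certifiably robust at radius eps: abstain
    return i

# Toy 2-class model; decision boundary is the line x0 = x1.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([3.0, 0.0])  # distance to boundary: 3/sqrt(2) ~ 2.12
print(certified_predict(W, b, x, eps=1.0))  # 0 (certified)
print(certified_predict(W, b, x, eps=3.0))  # None (abstains)
```

The degradation attacks in the paper target exactly this abstention behavior: an adversary need not flip the prediction, only push inputs into the abstain region.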


Self-Correcting Neural Networks For Safe Classification

1 code implementation · 23 Jul 2021 · Klas Leino, Aymeric Fromherz, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Păsăreanu

These constraints relate requirements on the order of the classes output by a classifier to conditions on its input, and are expressive enough to encode various interesting examples of classifier safety specifications from the literature.
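A minimal sketch of such an ordering constraint, under assumptions not taken from the paper: each constraint says "when a predicate on the input holds, class `hi` must be ranked above class `lo`", and a correction layer bumps the violating score just above the other so the output satisfies the spec by construction. (The function `self_correct` and the example spec are hypothetical; the paper's construction is more general and handles interacting constraints.)

```python
import numpy as np

def self_correct(scores, x, constraints):
    """Force classifier scores to satisfy input-conditioned ordering
    constraints.  Each constraint is (predicate, hi, lo): whenever
    predicate(x) holds, class hi must outrank class lo.  A violated
    constraint is repaired by raising scores[hi] just above scores[lo].
    (With multiple overlapping constraints, naive repairs can conflict;
    this sketch handles only the single-constraint case cleanly.)"""
    s = scores.copy()
    for pred, hi, lo in constraints:
        if pred(x) and s[hi] <= s[lo]:
            s[hi] = s[lo] + 1e-6
    return s

# Toy safety spec: for inputs with x[0] < 0, class 2 (a hypothetical
# "safe fallback" label) must outrank class 0.
constraints = [(lambda x: x[0] < 0, 2, 0)]
x = np.array([-1.0, 0.5])
raw = np.array([1.0, 0.2, 0.3])
print(int(np.argmax(self_correct(raw, x, constraints))))  # 2
```

For inputs where the predicate does not hold, the scores pass through unchanged, so the correction layer only intervenes where the specification applies.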

Classification

Fast Geometric Projections for Local Robustness Certification

no code implementations · ICLR 2021 · Aymeric Fromherz, Klas Leino, Matt Fredrikson, Bryan Parno, Corina Păsăreanu

Local robustness ensures that a model classifies all inputs within an $\ell_2$-ball consistently, which precludes various forms of adversarial inputs.
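To make the definition concrete, here is a hedged sketch that *falsifies* (but cannot certify) local robustness for a toy linear model: it samples points uniformly in the $\ell_2$-ball of radius $\varepsilon$ around an input and checks that every sample keeps the same label. A true certifier, such as the geometric-projection method the paper proposes, must cover the entire ball rather than a finite sample; all names here are illustrative.

```python
import numpy as np

def predict(W, b, x):
    """Linear classifier: argmax over class scores."""
    return int(np.argmax(W @ x + b))

def empirically_locally_robust(W, b, x, eps, n_samples=1000, seed=0):
    """Sample points uniformly in the l2-ball of radius eps around x and
    check that every sample receives the same label as x.  Passing this
    test is only evidence, not a certificate of local robustness."""
    rng = np.random.default_rng(seed)
    base = predict(W, b, x)
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        # Scale a random direction to land uniformly inside the ball.
        d = d / np.linalg.norm(d) * eps * rng.uniform() ** (1 / len(x))
        if predict(W, b, x + d) != base:
            return False  # found an adversarial point inside the ball
    return True

# Toy 2-class model; decision boundary is the line x0 = x1.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([3.0, 0.0])  # distance to boundary: 3/sqrt(2) ~ 2.12
print(empirically_locally_robust(W, b, x, eps=1.0))  # ball inside one region
print(empirically_locally_robust(W, b, x, eps=4.0))  # ball crosses boundary
```

With the small radius the ball stays inside one decision region, so the check passes; with the larger radius the ball straddles the boundary and sampling finds a label change.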
