Two pretrained neural networks are deemed equivalent if they yield similar outputs for the same inputs.
In this paper, we propose an approach for analyzing control systems with respect to their tolerance against environmental perturbations.
We outline a class of threat models under which adversaries can perturb system transitions, constrained to an $\varepsilon$-ball around the original transition probabilities.
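The ε-ball constraint can be made concrete with a small check: a perturbed transition distribution is admissible only if every transition probability stays within ε of the original. This is a hypothetical sketch assuming a componentwise (L-infinity) ball; the paper's exact norm may differ, and the function name and dictionary-based distribution format are illustrative.

```python
def within_epsilon_ball(original, perturbed, epsilon):
    """Return True if |perturbed[s] - original[s]| <= epsilon for every
    successor state s (componentwise / L-infinity ball, an assumption here)."""
    # Both arguments map successor states to transition probabilities
    # and must each sum to 1.
    assert abs(sum(original.values()) - 1.0) < 1e-9
    assert abs(sum(perturbed.values()) - 1.0) < 1e-9
    states = set(original) | set(perturbed)
    return all(abs(perturbed.get(s, 0.0) - original.get(s, 0.0)) <= epsilon
               for s in states)

p = {"s0": 0.7, "s1": 0.3}
q = {"s0": 0.65, "s1": 0.35}
print(within_epsilon_ball(p, q, 0.1))   # True: each component moved by 0.05
print(within_epsilon_ball(p, q, 0.01))  # False: 0.05 exceeds epsilon
```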
Distributed protocols should be robust to both benign malfunctions (e.g., packet loss or delay) and attacks (e.g., message replay) from internal or external adversaries.
Subjects: Cryptography and Security; Formal Languages and Automata Theory
In this paper, we propose several metrics to measure the robustness of classifiers to natural adversarial examples, along with methods to evaluate them.
In this paper, we introduce the problem of synthesizing a property-preserving platform mapping: a set of implementation decisions ensuring that a desired property is preserved from a high-level design into a low-level platform implementation.
We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging.
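The PTAP step described above can be sketched as building a prefix tree from input-output traces, labeling each tree node with its observed output, and then completing every undefined transition with a self-loop. This is a hypothetical illustration, not the paper's implementation: the trace format (an output for the initial state plus one output per input symbol) and all names are assumptions.

```python
def ptap(traces, alphabet, default_output="?"):
    """Build an incomplete Moore machine as a prefix tree from traces,
    then complete it with self-loops (the PTAP idea, sketched).

    traces: list of (inputs, outputs) pairs with
            len(outputs) == len(inputs) + 1
            (output of the initial state, then one output per step).
    Returns (outputs, delta): state -> output label, and
    (state, input) -> successor state.
    """
    outputs = {0: default_output}   # state 0 is the initial state
    delta = {}                      # transition function
    next_state = 1
    for inputs, outs in traces:
        outputs[0] = outs[0]
        state = 0
        for i, a in enumerate(inputs):
            if (state, a) not in delta:
                # New prefix: create a fresh tree node for it.
                delta[(state, a)] = next_state
                outputs[next_state] = outs[i + 1]
                next_state += 1
            state = delta[(state, a)]
    # Completion step: any transition not observed in the traces
    # becomes a self-loop, making the machine total.
    for s in list(outputs):
        for a in alphabet:
            delta.setdefault((s, a), s)
    return outputs, delta

# Illustrative usage on a single trace.
outs, d = ptap([(["a", "b"], ["0", "1", "0"])], {"a", "b"})
print(outs[0], outs[1], outs[2])  # outputs along the observed prefix
print(d[(0, "b")])                # unobserved transition -> self-loop to 0
```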