First, previous definitions of robustness in trajectory prediction are ambiguous.
Moreover, building on this framework, we formulate the multi-objective DNN repair problem and present an algorithm based on our incremental SMT solving procedure.
The safety properties proved in the resulting surrogate model apply to the original ADS with a probabilistic guarantee.
While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking.
In this paper, we propose a framework of filter-based ensembles of deep neural networks (DNNs) to defend against adversarial attacks.
It is shown that DeepPAC outperforms the state-of-the-art statistical method PROVERO, and that it provides more practical robustness analysis than the formal verification tool ERAN.
The core idea is to make use of the obtained constraints of the abstraction to infer new bounds for the neurons.
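A minimal sketch of how abstraction constraints can yield bounds for neurons, using plain interval arithmetic over an affine layer followed by ReLU. This is a generic stand-in for the paper's constraint-based refinement, not its actual algorithm; all function names here are illustrative.

```python
import numpy as np

def affine_bounds(W, b, l, u):
    """Propagate the input box [l, u] through the affine map Wx + b.

    Positive weights carry the lower/upper input bound to the same
    output bound; negative weights swap them. The result is a sound
    (though generally loose) enclosure of the layer's output.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    lower = W_pos @ l + W_neg @ u + b
    upper = W_pos @ u + W_neg @ l + b
    return lower, upper

def relu_bounds(l, u):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)
```

For example, with `W = [[1, -1]]`, `b = [0]`, and inputs in `[0, 1]^2`, the pre-activation bounds are `[-1, 1]` and the post-ReLU bounds tighten to `[0, 1]`. Refinement methods improve on such enclosures by intersecting them with constraints collected from the abstraction.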
With Sanov's theorem, we derive a sufficient condition for one-sample tests to achieve the optimal error exponent in the universal setting, i.e., for any distribution defining the alternative hypothesis.
Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.
We show that two classes of Maximum Mean Discrepancy (MMD) based tests attain this optimality on $\mathbb R^d$, while the quadratic-time Kernel Stein Discrepancy (KSD) based tests achieve the maximum exponential decay rate under a relaxed level constraint.
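To make the MMD statistic concrete, here is a sketch of the standard unbiased quadratic-time estimator of squared MMD with a Gaussian kernel. This illustrates the general statistic underlying such tests, not the specific test constructions analyzed above; the fixed `bandwidth` is an assumption (in practice it is often set by the median heuristic).

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased quadratic-time estimator of squared MMD.

    X: (n, d) samples from P; Y: (m, d) samples from Q.
    Uses a Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    Diagonal terms of the within-sample kernel matrices are excluded,
    which makes the estimator unbiased.
    """
    def k(A, B):
        sq = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-sq / (2.0 * bandwidth**2))

    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (n * (n - 1))
            + Kyy.sum() / (m * (m - 1))
            - 2.0 * Kxy.mean())
```

A two-sample test then compares this statistic against a threshold calibrated to the desired level, e.g. by permutation; samples from a shifted distribution yield a visibly larger statistic than samples from the same distribution.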