no code implementations • 9 Feb 2020 • Jiangchao Liu, Liqian Chen, Antoine Miné, Ji Wang
We observe that the robustness radii of correctly classified inputs are much larger than those of misclassified inputs, which include adversarial examples, especially those produced by strong adversarial attacks.
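A minimal illustrative sketch of the kind of comparison described above, not the paper's actual method: for a toy linear softmax classifier the exact L∞ robustness radius of an input has a closed form, so we can contrast the radii of correctly classified and misclassified inputs. The data, model, and all parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two overlapping Gaussian blobs in 20 dimensions.
n, d = 400, 20
X = np.vstack([rng.normal(-0.3, 1.5, (n // 2, d)),
               rng.normal(+0.3, 1.5, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Fit a linear classifier with a few steps of gradient descent on softmax loss.
W, b = np.zeros((2, d)), np.zeros(2)
for _ in range(200):
    scores = X @ W.T + b
    p = np.exp(scores - scores.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(n), y] -= 1.0          # p becomes (softmax - one_hot)
    W -= 0.1 * (p.T @ X) / n
    b -= 0.1 * p.mean(0)

def linf_robustness_radius(x, W, b):
    """Exact L-inf radius at which the predicted class of a linear model flips."""
    s = W @ x + b
    i = int(np.argmax(s))
    return min((s[i] - s[j]) / (np.abs(W[i] - W[j]).sum() + 1e-12)
               for j in range(len(s)) if j != i)

pred = np.argmax(X @ W.T + b, axis=1)
radii = np.array([linf_robustness_radius(x, W, b) for x in X])
print("median radius, correctly classified:", np.median(radii[pred == y]))
print("median radius, misclassified       :", np.median(radii[pred != y]))
```

On the toy data the misclassified inputs sit near the decision boundary, so their radii come out much smaller, mirroring the observation quoted from the abstract.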
no code implementations • 26 Feb 2019 • Jianlin Li, Pengfei Yang, Jiangchao Liu, Liqian Chen, Xiaowei Huang, Lijun Zhang
Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.
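As a small, hedged illustration of one simple such verification approach (interval bound propagation, not necessarily the technique developed in this paper): the sketch below tries to prove a local robustness property of a tiny ReLU network, namely that every input within an L∞ ball of radius eps around x0 receives the same class as x0. The network, weights, and eps are hypothetical.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Componentwise bounds of the box [lo, hi] under x -> W @ x + b."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    yc, yr = W @ c + b, np.abs(W) @ r
    return yc - yr, yc + yr

def forward(x, hidden, out):
    """Concrete forward pass: ReLU hidden layers, affine output layer."""
    for W, b in hidden:
        x = np.maximum(W @ x + b, 0.0)
    W, b = out
    return W @ x + b

def verify_local_robustness(x0, eps, hidden, out):
    """Try to prove that all x with ||x - x0||_inf <= eps share x0's class.

    Returns True if interval bound propagation proves the property,
    False if the bounds are too loose to decide (not a counterexample)."""
    target = int(np.argmax(forward(x0, hidden, out)))
    lo, hi = x0 - eps, x0 + eps
    for W, b in hidden:
        lo, hi = affine_bounds(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = affine_bounds(lo, hi, *out)
    # Proved iff the target logit's lower bound beats every other logit's upper bound.
    return bool(lo[target] > np.delete(hi, target).max())

# Hypothetical tiny network and query, just to exercise the code.
rng = np.random.default_rng(1)
hidden = [(rng.normal(size=(8, 4)), rng.normal(size=8))]
out = (rng.normal(size=(3, 8)), rng.normal(size=3))
x0 = rng.normal(size=4)
print("proved for eps=0.01:", verify_local_robustness(x0, 0.01, hidden, out))
print("proved for eps=0.50:", verify_local_robustness(x0, 0.50, hidden, out))
```

Interval propagation like this is sound but coarse: a False result only means the abstraction was too imprecise, which is exactly the gap that more precise verification approaches aim to close.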