Search Results for author: Pengfei Yang

Found 11 papers, 4 papers with code

Incremental Satisfiability Modulo Theory for Verification of Deep Neural Networks

no code implementations • 10 Feb 2023 • Pengfei Yang, Zhiming Chi, Zongxin Liu, Mengyu Zhao, Cheng-Chao Huang, Shaowei Cai, Lijun Zhang

Moreover, based on the framework, we propose the multi-objective DNN repair problem and give an algorithm based on our incremental SMT solving algorithm.
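The abstract refers to an incremental SMT solving algorithm. As a loose, toy illustration of the incremental idea only — reusing work from a previous, closely related query instead of solving from scratch — one might sketch a brute-force Boolean solver with a warm start. The solver, the constraint encoding, and the `warm_start` parameter are illustrative inventions, not the authors' algorithm or a real SMT interface.

```python
import itertools

def solve(constraints, variables, warm_start=None):
    """Brute-force Boolean 'solver' that first retries a cached model
    (warm start) before enumerating all assignments, mimicking the
    incremental reuse of work across similar queries."""
    candidates = itertools.chain(
        [warm_start] if warm_start else [],
        itertools.product([False, True], repeat=len(variables)),
    )
    for values in candidates:
        model = dict(zip(variables, values))
        if all(c(model) for c in constraints):
            return model
    return None

vars_ = ["a", "b"]
c1 = [lambda m: m["a"] or m["b"]]
m1 = solve(c1, vars_)
# add a constraint and re-solve, warm-starting from the previous model
c2 = c1 + [lambda m: not m["a"]]
m2 = solve(c2, vars_, warm_start=tuple(m1.values()))
print(m2)  # → {'a': False, 'b': True}
```

Here the warm start happens to still satisfy the strengthened query, so the second call succeeds without re-enumeration; real incremental SMT solvers reuse far richer state (learned clauses, theory lemmas) across queries.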


Safety Analysis of Autonomous Driving Systems Based on Model Learning

no code implementations • 23 Nov 2022 • Renjue Li, Tianhang Qin, Pengfei Yang, Cheng-Chao Huang, Youcheng Sun, Lijun Zhang

The safety properties proved in the resulting surrogate model apply to the original ADS with a probabilistic guarantee.

Autonomous Driving

Weight Expansion: A New Perspective on Dropout and Generalization

no code implementations • 23 Jan 2022 • Gaojie Jin, Xinping Yi, Pengfei Yang, Lijun Zhang, Sven Schewe, Xiaowei Huang

While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking.
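For readers unfamiliar with the technique under analysis, standard (inverted) dropout can be sketched as follows; this is textbook dropout, not the paper's weight-expansion perspective.

```python
import numpy as np

def inverted_dropout(x, p=0.5, rng=None):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors by 1/(1-p), so the expected activation is unchanged."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(x.shape) >= p   # keep each unit with prob 1-p
    return x * mask / (1.0 - p)

x = np.ones(100_000)
y = inverted_dropout(x, p=0.5)
print(abs(y.mean() - 1.0) < 0.05)  # empirical mean stays close to 1
```

The rescaling is what lets the same network be used at test time without any dropout-specific correction.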

Ensemble Defense with Data Diversity: Weak Correlation Implies Strong Robustness

no code implementations • 5 Jun 2021 • Renjue Li, Hanwei Zhang, Pengfei Yang, Cheng-Chao Huang, Aimin Zhou, Bai Xue, Lijun Zhang

In this paper, we propose a framework of filter-based ensemble of deep neural networks (DNNs) to defend against adversarial attacks.
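A minimal 1-D sketch of the filter-based-ensemble idea, assuming a toy threshold classifier and arbitrary illustrative filters (identity, mean smoothing, clipping); the paper itself works with DNNs and image inputs, so every name below is a stand-in.

```python
import numpy as np

def mean_filter(x, k=3):
    # simple 1-D moving-average filter (same length, edge-padded)
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def classify(x):
    # placeholder classifier: positive class iff the signal mean is > 0
    return int(x.mean() > 0)

def ensemble_predict(x, filters):
    # run the shared classifier on each filtered view, then majority-vote
    votes = [classify(f(x)) for f in filters]
    return int(sum(votes) > len(votes) / 2)

filters = [lambda x: x, mean_filter, lambda x: np.clip(x, -1, 1)]
x = np.array([0.2, 0.3, -0.1, 0.4, 0.25])
print(ensemble_predict(x, filters))  # → 1
```

The intuition matching the title: if the filtered views fail in weakly correlated ways under perturbation, an attack must fool most of them at once, which is harder than fooling any single member.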

Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning

1 code implementation • 25 Jan 2021 • Renjue Li, Pengfei Yang, Cheng-Chao Huang, Youcheng Sun, Bai Xue, Lijun Zhang

It is shown that DeepPAC outperforms the state-of-the-art statistical method PROVERO, and it achieves more practical robustness analysis than the formal verification tool ERAN.
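As background on sampling-based (statistical) robustness analysis of the kind this paper is compared against, a minimal sketch follows. This is not DeepPAC's PAC-model-learning algorithm; `sampled_robustness`, the toy linear classifier, and the perturbation radius are all illustrative.

```python
import numpy as np

def sampled_robustness(predict, x, eps, n=1000, rng=None):
    """Estimate the probability that a uniform perturbation inside an
    L-infinity ball of radius eps preserves the predicted label."""
    rng = np.random.default_rng(0) if rng is None else rng
    label = predict(x)
    hits = 0
    for _ in range(n):
        delta = rng.uniform(-eps, eps, size=x.shape)
        hits += predict(x + delta) == label
    return hits / n

# toy linear classifier: sign of w.x
w = np.array([1.0, -2.0])
predict = lambda x: int(w @ x > 0)
x0 = np.array([3.0, 1.0])   # w.x0 = 1 > 0; robust for this small radius
print(sampled_robustness(predict, x0, eps=0.1))  # → 1.0
```

A pure sampling estimate like this gives statistical confidence only; PAC-style methods attach an explicit (epsilon, delta) guarantee to the learned model instead.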

Adversarial Attack, DNN Testing

Improving Neural Network Verification through Spurious Region Guided Refinement

1 code implementation • 15 Oct 2020 • Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang, Jun Sun, Bai Xue, Lijun Zhang

The core idea is to make use of the obtained constraints of the abstraction to infer new bounds for the neurons.
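Bound inference for neurons usually starts from an interval (box) abstraction, which refinement procedures like this one then tighten. A minimal sketch of interval propagation through an affine layer followed by ReLU, under the usual sign-split rule; this is the baseline abstraction, not the paper's spurious-region-guided refinement itself.

```python
import numpy as np

def interval_affine(lb, ub, W, b):
    """Propagate the box [lb, ub] through x -> Wx + b.
    Positive weights keep the bound orientation; negative weights swap it."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lb + Wn @ ub + b, Wp @ ub + Wn @ lb + b

def interval_relu(lb, ub):
    # ReLU is monotone, so it maps bounds to bounds directly
    return np.maximum(lb, 0), np.maximum(ub, 0)

W = np.array([[1.0, -1.0], [2.0, 1.0]])
b = np.zeros(2)
lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lb1, ub1 = interval_affine(lb, ub, W, b)   # pre-activation bounds
lb2, ub2 = interval_relu(lb1, ub1)         # post-activation bounds
print(lb2, ub2)  # → [0. 0.] [2. 3.]
```

Tighter per-neuron bounds shrink the over-approximated output set, which is exactly what ruling out a spurious region buys the verifier.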

Asymptotically Optimal One- and Two-Sample Testing with Kernels

no code implementations • 27 Aug 2019 • Shengyu Zhu, Biao Chen, Zhitang Chen, Pengfei Yang

With Sanov's theorem, we derive a sufficient condition for one-sample tests to achieve the optimal error exponent in the universal setting, i.e., for any distribution defining the alternative hypothesis.
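For context, the "optimal error exponent" referenced here is standard large-deviations material. A minimal statement (background only, not this paper's contribution) is the Chernoff–Stein lemma for a simple null $P$ versus a fixed alternative $Q$:

```latex
% Chernoff--Stein lemma (standard background): under a fixed bound on the
% type-I error, the type-II error \beta_n of the best length-n test satisfies
\lim_{n \to \infty} -\frac{1}{n} \log \beta_n = D(Q \,\|\, P),
% where D(Q \| P) denotes the Kullback--Leibler divergence.
```

The universal setting is harder: the test must attain this exponent without knowing $Q$, which is where Sanov's theorem enters.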

Change Detection Test +2

Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification

no code implementations • 26 Feb 2019 • Jianlin Li, Pengfei Yang, Jiangchao Liu, Liqian Chen, Xiaowei Huang, Lijun Zhang

Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.

Universal Hypothesis Testing with Kernels: Asymptotically Optimal Tests for Goodness of Fit

no code implementations • 21 Feb 2018 • Shengyu Zhu, Biao Chen, Pengfei Yang, Zhitang Chen

We show that two classes of Maximum Mean Discrepancy (MMD) based tests attain this optimality on $\mathbb R^d$, while the quadratic-time Kernel Stein Discrepancy (KSD) based tests achieve the maximum exponential decay rate under a relaxed level constraint.
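The plain biased estimator of squared MMD with a Gaussian kernel can be sketched as follows; the kernel choice and bandwidth are illustrative, and this is the textbook estimator rather than the paper's optimality construction.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # pairwise Gaussian kernel matrix between sample sets x and y
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(d**2, axis=-1) / (2 * sigma**2))

def mmd2_biased(x, y, sigma=1.0):
    """Biased estimator of squared Maximum Mean Discrepancy:
    mean k(x,x') + mean k(y,y') - 2 mean k(x,y)."""
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
same = mmd2_biased(rng.normal(0, 1, (200, 1)), rng.normal(0, 1, (200, 1)))
diff = mmd2_biased(rng.normal(0, 1, (200, 1)), rng.normal(3, 1, (200, 1)))
print(same < diff)  # samples from different distributions score higher
```

A two-sample test then rejects the null of equal distributions when this statistic exceeds a threshold calibrated for the desired level.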

Test, Two-sample testing
