1 code implementation • 24 Nov 2022 • Xidan Song, Youcheng Sun, Mustafa A. Mustafa, Lucas Cordeiro
We present AIREPAIR, a platform for repairing neural networks.
no code implementations • 23 Nov 2022 • Renjue Li, Tianhang Qin, Pengfei Yang, Cheng-Chao Huang, Youcheng Sun, Lijun Zhang
The safety properties proved in the resulting surrogate model apply to the original ADS with a probabilistic guarantee.
1 code implementation • 5 Aug 2022 • Muhammad Usman, Youcheng Sun, Divya Gopinath, Rishi Dange, Luca Manolache, Corina S. Pasareanu
Deep neural network (DNN) models, including those used in safety-critical domains, need to be thoroughly tested to ensure that they can reliably perform well in different scenarios.
no code implementations • 25 May 2022 • Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling Ji, Peng Cheng, Jiming Chen
One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request to delete its private data from the global model.
no code implementations • 8 May 2022 • Youcheng Sun, Muhammad Usman, Divya Gopinath, Corina S. Păsăreanu
Neural networks are successfully used in a variety of applications, many of them having safety and security concerns.
1 code implementation • 31 Jan 2022 • Muhammad Usman, Youcheng Sun, Divya Gopinath, Corina S. Pasareanu
For correction, we propose an input correction technique that uses a differential analysis to identify the trigger in the detected poisoned images, which is then reset to a neutral color.
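The correction idea above can be sketched in a few lines: diff the detected poisoned image against a clean reference, treat the strongly differing region as the trigger, and overwrite it with a neutral colour. This is an illustrative reading only; the function name, threshold, and choice of mid-grey as the neutral colour are assumptions, not the paper's actual implementation.

```python
import numpy as np

def reset_trigger(poisoned: np.ndarray, clean_ref: np.ndarray,
                  threshold: float = 0.3,
                  neutral: float = 0.5) -> np.ndarray:
    """Locate the trigger as the region where the poisoned image differs
    strongly from a clean reference, then reset it to a neutral colour
    (mid-grey here). Illustrative sketch; not the paper's exact method."""
    diff = np.abs(poisoned - clean_ref)       # per-pixel, per-channel difference
    mask = diff.max(axis=-1) > threshold      # candidate trigger pixels
    corrected = poisoned.copy()
    corrected[mask] = neutral                 # overwrite trigger with neutral colour
    return corrected
```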
1 code implementation • 23 Mar 2021 • Muhammad Usman, Divya Gopinath, Youcheng Sun, Yannic Noller, Corina Pasareanu
We present novel strategies to enable precise yet efficient repair such as inferring correctness specifications to act as oracles for intermediate layer repair, and generation of experts for each class.
no code implementations • ICCV 2021 • Hana Chockler, Daniel Kroening, Youcheng Sun
Existing algorithms for explaining the output of image classifiers perform poorly on inputs where the object of interest is partially occluded.
no code implementations • 27 Feb 2021 • Muhammad Usman, Yannic Noller, Corina Pasareanu, Youcheng Sun, Divya Gopinath
This paper presents NEUROSPF, a tool for the symbolic analysis of neural networks.
1 code implementation • 11 Feb 2021 • Jingyi Wang, Jialuo Chen, Youcheng Sun, Xingjun Ma, Dongxia Wang, Jun Sun, Peng Cheng
A key part of RobOT is a quantitative measurement of 1) the value of each test case in improving model robustness (often via retraining), and 2) the convergence quality of the model robustness improvement.
1 code implementation • 25 Jan 2021 • Renjue Li, Pengfei Yang, Cheng-Chao Huang, Youcheng Sun, Bai Xue, Lijun Zhang
It is shown that DeepPAC outperforms the state-of-the-art statistical method PROVERO, and it achieves more practical robustness analysis than the formal verification tool ERAN.
2 code implementations • NeurIPS 2021 • Hadrien Pouget, Hana Chockler, Youcheng Sun, Daniel Kroening
Policies trained via Reinforcement Learning (RL) are often needlessly complex, making them difficult to analyse and interpret.
no code implementations • 6 Feb 2020 • Youcheng Sun, Yifan Zhou, Simon Maskell, James Sharp, Xiaowei Huang
However, it is unclear whether and how adversarial examples against the learning components affect the overall system-level reliability.
1 code implementation • 5 Nov 2019 • Wei Huang, Youcheng Sun, Xingyu Zhao, James Sharp, Wenjie Ruan, Jie Meng, Xiaowei Huang
The test metrics and the test case generation algorithm are implemented in a tool, TestRNN, which is then evaluated on a set of LSTM benchmarks.
1 code implementation • 6 Aug 2019 • Youcheng Sun, Hana Chockler, Xiaowei Huang, Daniel Kroening
The black-box nature of deep neural networks (DNNs) makes it impossible to understand why a particular output is produced, creating demand for "Explainable AI".
1 code implementation • 20 Jun 2019 • Wei Huang, Youcheng Sun, Xiaowei Huang, James Sharp
Recurrent neural networks (RNNs) have been widely applied to various sequential tasks such as text processing, video recognition, and molecular property prediction.
no code implementations • 18 Dec 2018 • Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks.
2 code implementations • 30 Apr 2018 • Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening
Concolic testing combines program execution and symbolic analysis to explore the execution paths of a software program.
2 code implementations • 16 Apr 2018 • Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska
In this paper we focus on the $L_0$ norm and aim to compute, for a trained DNN and an input, the maximal radius of a safe norm ball around the input within which there are no adversarial examples.
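To make the $L_0$ setting concrete: an $L_0$ ball of radius $r$ around an input contains every image that differs from it in at most $r$ pixels, and the input is safe at radius $r$ if no such perturbation changes the classification. The toy sketch below exhaustively certifies radius 1 over a small candidate value set; the function name and the brute-force strategy are assumptions for illustration, not the paper's anytime algorithm.

```python
import numpy as np

def safe_in_l0_ball_r1(classify, x: np.ndarray, values=(0.0, 1.0)) -> bool:
    """Check that no single-pixel change (L0 distance 1), drawn from a
    small set of candidate values, flips the classification of x.
    Toy stand-in for the paper's maximal-safe-radius computation."""
    label = classify(x)
    for i in range(x.size):
        for v in values:
            x2 = x.copy()
            x2.flat[i] = v            # perturb exactly one pixel
            if classify(x2) != label:
                return False          # adversarial example at L0 distance 1
    return True
```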
no code implementations • 10 Mar 2018 • Youcheng Sun, Xiaowei Huang, Daniel Kroening, James Sharp, Matthew Hill, Rob Ashmore
In this paper, inspired by the MC/DC coverage criterion, we propose a family of four novel test criteria that are tailored to structural features of DNNs and their semantics.
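As a rough illustration of how an MC/DC-style condition transfers to DNNs, consider the sign-sign case: a pair of tests covers a neuron pair (i in layer k, j in layer k+1) if both neurons flip activation sign between the tests while the other layer-k neurons keep their signs, isolating i's influence on j. The sketch below is a simplified reading under assumed names and conventions, not the paper's precise definition.

```python
import numpy as np

def ss_covered(layer_k_1, layer_k_2, nj_1, nj_2, i) -> bool:
    """Simplified sign-sign (SS) condition for neuron pair (i, j): between
    two tests, neuron i in layer k and neuron j in layer k+1 both flip
    sign, while every other layer-k neuron keeps its sign."""
    s1, s2 = np.sign(layer_k_1), np.sign(layer_k_2)
    others_stable = np.all(np.delete(s1, i) == np.delete(s2, i))
    return bool((s1[i] != s2[i])
                and (np.sign(nj_1) != np.sign(nj_2))
                and others_stable)
```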