no code implementations • 24 Jun 2022 • Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Zhe Hou, Yan Xiao, Yun Lin, Jin Song Dong
For those reasons, there is a high demand for trustworthy and rigorous methods to verify the robustness of neural network models.
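The entry above calls for rigorous robustness verification but does not state the method; a minimal sketch of one standard certification technique, interval bound propagation (not necessarily the one used in this paper), propagates an input box through a toy two-layer ReLU network, with all weights and the epsilon-ball chosen purely for illustration:

```python
def interval_affine(lo, hi, W, b):
    # Propagate an input box [lo, hi] through an affine layer y = W x + b
    # using interval arithmetic: a positive weight pairs like bounds,
    # a negative weight swaps them.
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l += w * xl; h += w * xh
            else:
                l += w * xh; h += w * xl
        new_lo.append(l); new_hi.append(h)
    return new_lo, new_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it can be applied to each bound directly.
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

# Toy 2-layer network analyzed on an eps-ball around x = (1.0, 0.5).
eps = 0.1
lo, hi = [1.0 - eps, 0.5 - eps], [1.0 + eps, 0.5 + eps]
lo, hi = interval_relu(*interval_affine(lo, hi, [[1.0, 2.0], [-1.0, 1.0]], [0.0, 0.0]))
lo, hi = interval_affine(lo, hi, [[1.0, -1.0]], [0.0])
# The network output is guaranteed to lie in [lo[0], hi[0]] for every
# input in the ball, so a property like "output > 0" can be certified.
```

If the certified output interval stays on one side of a decision threshold, the property holds for the entire input region rather than just for sampled points.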
1 code implementation • 2 Apr 2022 • Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Jin Song Dong
This may be unrealistic in practice, as the data controllers are often reluctant to provide their model consumers with the original data.
no code implementations • 21 Mar 2022 • Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin, Jin Song Dong, Feifei Ma, Jian Zhang
Our attack technique targets the inherent vulnerabilities of NLP models, allowing us to generate samples even without interacting with the victim NLP model, as long as it is based on pre-trained language models (PLMs).
no code implementations • 31 Dec 2021 • Xianglin Yang, Yun Lin, Ruofan Liu, Zhenfeng He, Chao Wang, Jin Song Dong, Hong Mei
Moreover, our case study shows that our visual solution faithfully reflects the characteristics of various training scenarios, demonstrating the potential of DVI as a debugging tool for analyzing deep learning training processes.
no code implementations • 29 Dec 2021 • Guoliang Dong, Jingyi Wang, Jun Sun, Sudipta Chattopadhyay, Xinyu Wang, Ting Dai, Jie Shi, Jin Song Dong
Furthermore, such attacks are impossible to eliminate, i.e., adversarial perturbations remain possible even after applying mitigation methods such as adversarial training.
1 code implementation • 6 Oct 2021 • Yan Xiao, Yun Lin, Ivan Beschastnikh, Changsheng Sun, David S. Rosenblum, Jin Song Dong
However, inputs may deviate from the training dataset distribution in real deployments.
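The entry above concerns inputs that deviate from the training distribution at deployment time; a minimal sketch of one common baseline check (maximum softmax probability thresholding, not necessarily this paper's method, with hypothetical logits and threshold) is:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def flag_deviating_input(logits, threshold=0.8):
    # Baseline distribution-shift check: an input whose maximum softmax
    # probability falls below the threshold is flagged for review.
    return max(softmax(logits)) < threshold

# A confidently classified input vs. an ambiguous one (illustrative logits).
in_dist = [6.0, 1.0, 0.5]   # peaked distribution -> not flagged
shifted = [1.1, 1.0, 0.9]   # nearly flat distribution -> flagged
```

Flagged inputs can then be routed to a human or a fallback model instead of being silently misclassified.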
no code implementations • 17 Jul 2021 • Peixin Zhang, Jingyi Wang, Jun Sun, Xinyu Wang, Guoliang Dong, Xingen Wang, Ting Dai, Jin Song Dong
In this work, we bridge the gap by proposing a scalable and effective approach for systematically searching for discriminatory samples, and we extend existing fairness testing approaches to a more challenging domain, i.e., text classification.
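The core notion behind searching for discriminatory samples can be sketched as an individual-fairness check: an input is discriminatory if changing only its protected attribute flips the model's prediction. The model and data below are hypothetical stand-ins, not the paper's benchmark:

```python
def find_discriminatory_samples(model, samples, protected_idx, values):
    # Return (original, counterpart) pairs where flipping only the
    # protected attribute changes the model's prediction.
    found = []
    for x in samples:
        base = model(x)
        for v in values:
            if v == x[protected_idx]:
                continue
            y = list(x)
            y[protected_idx] = v
            if model(y) != base:
                found.append((x, y))
                break
    return found

# Hypothetical toy model that (unfairly) weighs attribute 0.
def biased_model(x):
    return 1 if x[1] + 0.5 * x[0] > 1.0 else 0

samples = [[0, 0.8], [1, 0.8], [0, 2.0]]
pairs = find_discriminatory_samples(biased_model, samples,
                                    protected_idx=0, values=[0, 1])
```

For text classification the "flip" becomes substituting protected tokens (e.g. gendered words) rather than toggling a tabular feature, which is what makes that domain harder to search systematically.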
no code implementations • 14 Nov 2019 • Yizhen Dong, Peixin Zhang, Jingyi Wang, Shuang Liu, Jun Sun, Jianye Hao, Xinyu Wang, Li Wang, Jin Song Dong, Dai Ting
In this work, we conduct an empirical study to evaluate the relationship between coverage, robustness, and attack/defense metrics for DNNs.
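One of the coverage metrics such studies typically measure is neuron coverage (introduced by DeepXplore; whether this exact variant is used here is an assumption), which can be sketched as follows, with an illustrative 4-neuron layer:

```python
def neuron_coverage(activations_per_input, threshold=0.0):
    # Fraction of neurons driven above the threshold by at least one
    # input in the test suite.
    n = len(activations_per_input[0])
    covered = set()
    for acts in activations_per_input:
        for i, a in enumerate(acts):
            if a > threshold:
                covered.add(i)
    return len(covered) / n

# Two test inputs over a hypothetical 4-neuron layer.
suite = [
    [0.9, 0.0, 0.2, 0.0],  # activates neurons 0 and 2
    [0.0, 0.7, 0.0, 0.0],  # activates neuron 1
]
cov = neuron_coverage(suite)  # 3 of 4 neurons covered
```

The study's question is then whether pushing such a metric higher actually correlates with robustness against attacks, which is not guaranteed by the definition alone.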
no code implementations • 3 Oct 2019 • Hadrien Bride, Zhe Hou, Jie Dong, Jin Song Dong, Ali Mirjalili
This paper introduces a new classification tool named Silas, which is built to provide a more transparent and dependable data analytics service.
no code implementations • 3 Oct 2019 • Hadrien Bride, Jin Song Dong, Ryan Green, Zhe Hou, Brendan Mahony, Martin Oxenham
We follow the "verification as planning" paradigm and propose to use model checking techniques to solve planning and goal reasoning problems for autonomous systems.
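The "verification as planning" idea above can be sketched in its simplest form: pose the goal as a reachability query over the system's state graph, and read the witness trace back as a plan. The graph below is a hypothetical toy transition system, and breadth-first search stands in for a full model checker:

```python
from collections import deque

def plan(initial, goal, successors):
    # Breadth-first reachability search; the witness path to the goal
    # state is returned as the plan (None if the goal is unreachable).
    frontier = deque([[initial]])
    seen = {initial}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical tiny state graph of an autonomous system.
graph = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": [], "s3": []}
route = plan("s0", "s3", lambda s: graph[s])  # ["s0", "s1", "s3"]
```

A real model checker additionally handles temporal-logic goals and symbolic state spaces, but the witness-trace-as-plan reading is the same.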
1 code implementation • 22 Sep 2019 • Guoliang Dong, Jingyi Wang, Jun Sun, Yang Zhang, Xinyu Wang, Ting Dai, Jin Song Dong, Xingen Wang
In this work, we propose an approach to extract probabilistic automata for interpreting an important class of neural networks, i.e., recurrent neural networks.
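The extraction step can be sketched as follows: once the RNN's hidden states have been abstracted into a finite set of clusters (the cluster IDs below are illustrative, and the abstraction itself is assumed done), transition probabilities are estimated from the observed traces by counting:

```python
from collections import defaultdict

def extract_automaton(traces):
    # Estimate a probabilistic automaton from abstract-state traces:
    # P(s -> t) = count(s -> t) / total outgoing count from s.
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1
    return {
        s: {t: c / sum(nexts.values()) for t, c in nexts.items()}
        for s, nexts in counts.items()
    }

# Traces of clustered RNN hidden states over two inputs (illustrative).
traces = [["q0", "q1", "q2"], ["q0", "q1", "q1", "q2"]]
pa = extract_automaton(traces)
```

The resulting automaton is a compact surrogate whose paths can be inspected and model-checked in place of the opaque recurrent network.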