1 code implementation • 20 Nov 2023 • Jiahao Yu, Yuhang Wu, Dong Shu, Mingyu Jin, Xinyu Xing
In the rapidly evolving landscape of artificial intelligence, ChatGPT has been widely used in various applications.
1 code implementation • 19 Sep 2023 • Jiahao Yu, Xingwei Lin, Zheng Yu, Xinyu Xing
Remarkably, GPTFuzz achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates.
1 code implementation • NeurIPS 2021 • Wenbo Guo, Xian Wu, Usmann Khan, Xinyu Xing
With the rapid development of deep reinforcement learning (DRL) techniques, there is an increasing need to understand and interpret DRL policies.
no code implementations • 2 May 2021 • Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, Dawn Song
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems.
no code implementations • 1 Jan 2021 • Yang Young Lu, Wenbo Guo, Xinyu Xing, William Noble
Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier.
no code implementations • ACL 2020 • Xinyu Xing, Xiaosheng Fan, Xiaojun Wan
In this paper, we study the challenging problem of automatic generation of citation texts in scholarly papers.
1 code implementation • 3 Feb 2020 • Yang Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble
Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier.
no code implementations • 25 Sep 2019 • Yang Young Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble
In this work, we propose a data-driven technique that uses the distribution-preserving decoys to infer robust saliency scores in conjunction with a pre-trained convolutional neural network classifier and any off-the-shelf saliency method.
1 code implementation • 2 Aug 2019 • Wenbo Guo, Lun Wang, Xinyu Xing, Min Du, Dawn Song
As such, given a deep neural network model and clean input samples, it is very challenging to inspect and determine the existence of a trojan backdoor.
1 code implementation • ACL 2019 • Zi Chai, Xinyu Xing, Xiaojun Wan, Bo Huang
For openQG task, we construct OQGenD, the first dataset as far as we know, and propose a model based on conditional generative adversarial networks and our question evaluation model.
1 code implementation • 25 May 2019 • Yao-Hui Chen, Dongliang Mu, Jun Xu, Zhichuang Sun, Wenbo Shen, Xinyu Xing, Long Lu, Bing Mao
This poor performance is caused by the slow extraction of code coverage information from highly compressed PT traces.
Software Engineering Cryptography and Security
no code implementations • NeurIPS 2018 • Wenbo Guo, Sui Huang, Yunzhe Tao, Xinyu Xing, Lin Lin
The empirical results indicate that our proposed approach not only outperforms the state-of-the-art techniques in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of the target ML models.
no code implementations • 7 Nov 2018 • Wenbo Guo, Sui Huang, Yunzhe Tao, Xinyu Xing, Lin Lin
The empirical results indicate that our proposed approach not only outperforms the state-of-the-art techniques in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of the target ML models.
no code implementations • 16 Jan 2018 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Then we empirically evaluate different recurrent networks for their performance of DFA extraction on all Tomita grammars.
1 code implementation • Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security 2018 • Wenbo Guo, Dongliang Mu5, Jun Xu4, Purui Su6, Gang Wang3, Xinyu Xing
The local interpretable modelis specially designed to (1) handle feature dependency to betterwork with security applications (e. g., binary code analysis); and(2) handle nonlinear local boundaries to boost explanation delity. We evaluate our system using two popular deep learning applica-tions in security (a malware classier, and a function start detectorfor binary reverse-engineering).
no code implementations • 29 Sep 2017 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Rule extraction from black-box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis.
no code implementations • 23 May 2017 • Wenbo Guo, Kaixuan Zhang, Lin Lin, Sui Huang, Xinyu Xing
Our results indicate that the proposed approach not only outperforms the state-of-the-art technique in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of a learning model.
no code implementations • 5 Dec 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Despite the superior performance of DNNs in these applications, it has been recently shown that these models are susceptible to a particular type of attack that exploits a fundamental flaw in their design.
no code implementations • 2 Nov 2016 • Lannan Luo, Qiang Zeng, Chen Cao, Kai Chen, Jian Liu, Limin Liu, Neng Gao, Min Yang, Xinyu Xing, Peng Liu
We present novel ideas and techniques to resolve the challenges, and have built the first system for symbolic execution of Android Framework.
Cryptography and Security Software Engineering
no code implementations • 6 Oct 2016 • Qinglong Wang, Wenbo Guo, Alexander G. Ororbia II, Xinyu Xing, Lin Lin, C. Lee Giles, Xue Liu, Peng Liu, Gang Xiong
Deep neural networks have proven to be quite effective in a wide variety of machine learning tasks, ranging from improved speech recognition systems to advancing the development of autonomous vehicles.
no code implementations • 5 Oct 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, C. Lee Giles, Xue Liu
However, after a thorough analysis of the fundamental flaw in DNNs, we discover that the effectiveness of current defenses is limited and, more importantly, cannot provide theoretical guarantees as to their robustness against adversarial sampled-based attacks.