Search Results for author: Xinyu Xing

Found 22 papers, 8 papers with code

Assessing Prompt Injection Risks in 200+ Custom GPTs

1 code implementation · 20 Nov 2023 · Jiahao Yu, Yuhang Wu, Dong Shu, Mingyu Jin, Xinyu Xing

In the rapidly evolving landscape of artificial intelligence, ChatGPT has been widely used in various applications.

GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts

1 code implementation · 19 Sep 2023 · Jiahao Yu, Xingwei Lin, Zheng Yu, Xinyu Xing

Remarkably, GPTFuzz achieves over 90% attack success rates against ChatGPT and Llama-2 models, even with suboptimal initial seed templates.
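The fuzzing loop behind this result can be pictured as a seed pool that is repeatedly sampled, mutated, and re-scored. Below is a minimal, self-contained sketch of that loop; `mutate` and `judge` are toy stand-ins (the real system mutates jailbreak templates with an LLM and judges the target model's responses):

```python
import random

random.seed(0)

# Hypothetical stand-ins: GPTFuzz queries a real LLM and a real response
# judge; here both are toy callables so the loop is self-contained.
def mutate(template):
    # A real mutation operator rephrases or crosses over templates;
    # this toy mutator just appends a marker.
    return template + " [mut]"

def judge(template):
    # Toy "attack success" score in [0, 1]; a real judge would rate
    # the responses the template elicits from the target model.
    return random.random() + 0.01 * template.count("[mut]")

def fuzz(seeds, iterations=20):
    pool = list(seeds)
    for _ in range(iterations):
        parent = max(random.sample(pool, 2), key=judge)  # tournament select
        child = mutate(parent)
        if judge(child) > 0.5:  # keep promising mutants in the pool
            pool.append(child)
    return pool

pool = fuzz(["seed-A", "seed-B"])
print(len(pool) >= 2)  # True: the initial seeds are always retained
```

The select-mutate-score cycle is what lets a handful of suboptimal seed templates evolve into high-success variants.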

EDGE: Explaining Deep Reinforcement Learning Policies

1 code implementation · NeurIPS 2021 · Wenbo Guo, Xian Wu, Usmann Khan, Xinyu Xing

With the rapid development of deep reinforcement learning (DRL) techniques, there is an increasing need to understand and interpret DRL policies.

MuJoCo Games · Reinforcement Learning +2

BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning

no code implementations · 2 May 2021 · Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, Dawn Song

Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems.

Atari Games · Backdoor Attack +2

Decoy-enhanced Saliency Maps

no code implementations · 1 Jan 2021 · Yang Young Lu, Wenbo Guo, Xinyu Xing, William Noble

Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier.

Automatic Generation of Citation Texts in Scholarly Papers: A Pilot Study

no code implementations · ACL 2020 · Xinyu Xing, Xiaosheng Fan, Xiaojun Wan

In this paper, we study the challenging problem of automatic generation of citation texts in scholarly papers.

Text Generation

DANCE: Enhancing saliency maps using decoys

1 code implementation · 3 Feb 2020 · Yang Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble

Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier.

Adversarial Attack

Robust saliency maps with distribution-preserving decoys

no code implementations · 25 Sep 2019 · Yang Young Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble

In this work, we propose a data-driven technique that uses the distribution-preserving decoys to infer robust saliency scores in conjunction with a pre-trained convolutional neural network classifier and any off-the-shelf saliency method.

Adversarial Attack
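A minimal sketch of the aggregation idea — scoring saliency not on the raw input but averaged over many perturbed copies — assuming a toy logistic model and plain Gaussian noise in place of the paper's pre-trained CNN and distribution-preserving decoys:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "classifier": logistic regression with fixed weights.
# The paper pairs its decoys with a pre-trained CNN; a linear model keeps
# this sketch self-contained.
W = rng.normal(size=8)

def predict(x):
    return 1.0 / (1.0 + np.exp(-x @ W))

def gradient_saliency(x):
    # d sigmoid(w.x) / dx = sigmoid * (1 - sigmoid) * w
    p = predict(x)
    return np.abs(p * (1.0 - p) * W)

def robust_saliency(x, n_decoys=50, scale=0.1):
    # Hypothetical decoy generator: small Gaussian perturbations of x.
    # (The paper constructs distribution-preserving decoys; this simple
    # noise-averaging stands in for that step.)
    decoys = x + scale * rng.normal(size=(n_decoys, x.size))
    return np.mean([gradient_saliency(d) for d in decoys], axis=0)

x = rng.normal(size=8)
scores = robust_saliency(x)
print(scores.shape)  # (8,)
```

Averaging over decoys suppresses saliency spikes caused by single fragile inputs, which is what makes the resulting maps more robust.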

TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems

1 code implementation · 2 Aug 2019 · Wenbo Guo, Lun Wang, Xinyu Xing, Min Du, Dawn Song

As such, given a deep neural network model and clean input samples, it is very challenging to inspect and determine the existence of a trojan backdoor.

Anomaly Detection

Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums

1 code implementation · ACL 2019 · Zi Chai, Xinyu Xing, Xiaojun Wan, Bo Huang

For openQG task, we construct OQGenD, the first dataset as far as we know, and propose a model based on conditional generative adversarial networks and our question evaluation model.

Text Generation

PTrix: Efficient Hardware-Assisted Fuzzing for COTS Binary

1 code implementation · 25 May 2019 · Yao-Hui Chen, Dongliang Mu, Jun Xu, Zhichuang Sun, Wenbo Shen, Xinyu Xing, Long Lu, Bing Mao

This poor performance is caused by the slow extraction of code coverage information from highly compressed PT traces.

Software Engineering · Cryptography and Security

Explaining Deep Learning Models -- A Bayesian Non-parametric Approach

no code implementations · NeurIPS 2018 · Wenbo Guo, Sui Huang, Yunzhe Tao, Xinyu Xing, Lin Lin

The empirical results indicate that our proposed approach not only outperforms the state-of-the-art techniques in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of the target ML models.

Explaining Deep Learning Models - A Bayesian Non-parametric Approach

no code implementations · 7 Nov 2018 · Wenbo Guo, Sui Huang, Yunzhe Tao, Xinyu Xing, Lin Lin

The empirical results indicate that our proposed approach not only outperforms the state-of-the-art techniques in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of the target ML models.

A Comparative Study of Rule Extraction for Recurrent Neural Networks

no code implementations · 16 Jan 2018 · Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Then we empirically evaluate different recurrent networks for their performance of DFA extraction on all Tomita grammars.
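The Tomita grammars referenced here are seven standard regular languages over {0, 1} used as benchmark targets for DFA extraction. A small sketch of three of them as membership tests (the labeled strings would train the recurrent network whose automaton is later extracted):

```python
import re

# Three of the seven Tomita grammars (standard definitions from the DFA
# extraction literature) as membership tests over binary strings.
TOMITA = {
    1: lambda s: re.fullmatch(r"1*", s) is not None,     # only 1s
    2: lambda s: re.fullmatch(r"(10)*", s) is not None,  # repetitions of "10"
    4: lambda s: "000" not in s,                         # no run of three 0s
}

def label_dataset(strings, grammar):
    # Training data for an RNN whose learned automaton is later extracted.
    return [(s, TOMITA[grammar](s)) for s in strings]

data = label_dataset(["", "111", "10", "1010", "1001", "0001"], 4)
print(data)
# [('', True), ('111', True), ('10', True), ('1010', True),
#  ('1001', True), ('0001', False)]
```

Because each grammar has a small known minimal DFA, an extracted automaton can be checked exactly against the ground truth, which is why these languages are the standard evaluation suite.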

LEMNA: Explaining Deep Learning Based Security Applications

1 code implementation · Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security 2018 · Wenbo Guo, Dongliang Mu, Jun Xu, Purui Su, Gang Wang, Xinyu Xing

The local interpretable model is specially designed to (1) handle feature dependency to better work with security applications (e.g., binary code analysis); and (2) handle nonlinear local boundaries to boost explanation fidelity. We evaluate our system using two popular deep learning applications in security (a malware classifier, and a function start detector for binary reverse-engineering).
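The core move here — fitting a local interpretable surrogate around one input of a black-box model — can be sketched as follows. This toy version uses plain least squares on Gaussian perturbations; the paper's actual estimator is a fused-lasso-regularized mixture regression, and `black_box` is a hypothetical stand-in for a deep security classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Black-box model to explain: a toy nonlinear scorer over 10 features.
# (The real targets are deep security models such as malware classifiers;
# any callable returning a score works for this sketch.)
def black_box(x):
    return np.tanh(x[..., 0] * 1.5 - x[..., 3] + 0.5 * x[..., 7] ** 2)

def local_linear_explanation(x, n_samples=500, scale=0.1):
    # Perturb around the instance and fit a least-squares linear surrogate.
    # The surrogate's coefficients approximate per-feature influence on the
    # black box near x.
    X = x + scale * rng.normal(size=(n_samples, x.size))
    y = black_box(X)
    A = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # per-feature importance weights

x = rng.normal(size=10)
weights = local_linear_explanation(x)
top = np.argsort(-np.abs(weights))[:3]  # most influential features
print(top)
```

Handling feature dependency, as the abstract notes, is exactly what this independent-perturbation baseline gets wrong for sequential inputs like binary code, which motivates the fused-lasso design.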

An Empirical Evaluation of Rule Extraction from Recurrent Neural Networks

no code implementations · 29 Sep 2017 · Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Rule extraction from black-box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis.

Medical Diagnosis

Towards Interrogating Discriminative Machine Learning Models

no code implementations · 23 May 2017 · Wenbo Guo, Kaixuan Zhang, Lin Lin, Sui Huang, Xinyu Xing

Our results indicate that the proposed approach not only outperforms the state-of-the-art technique in explaining individual decisions but also provides users with an ability to discover the vulnerabilities of a learning model.

BIG-bench Machine Learning

Learning Adversary-Resistant Deep Neural Networks

no code implementations · 5 Dec 2016 · Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Despite the superior performance of DNNs in these applications, it has been recently shown that these models are susceptible to a particular type of attack that exploits a fundamental flaw in their design.

Autonomous Vehicles

Context-aware System Service Call-oriented Symbolic Execution of Android Framework with Application to Exploit Generation

no code implementations · 2 Nov 2016 · Lannan Luo, Qiang Zeng, Chen Cao, Kai Chen, Jian Liu, Limin Liu, Neng Gao, Min Yang, Xinyu Xing, Peng Liu

We present novel ideas and techniques to resolve the challenges, and have built the first system for symbolic execution of Android Framework.

Cryptography and Security · Software Engineering

Using Non-invertible Data Transformations to Build Adversarial-Robust Neural Networks

no code implementations · 6 Oct 2016 · Qinglong Wang, Wenbo Guo, Alexander G. Ororbia II, Xinyu Xing, Lin Lin, C. Lee Giles, Xue Liu, Peng Liu, Gang Xiong

Deep neural networks have proven to be quite effective in a wide variety of machine learning tasks, ranging from improved speech recognition systems to advancing the development of autonomous vehicles.

Autonomous Vehicles · Dimensionality Reduction +2

Adversary Resistant Deep Neural Networks with an Application to Malware Detection

no code implementations · 5 Oct 2016 · Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, C. Lee Giles, Xue Liu

However, after a thorough analysis of the fundamental flaw in DNNs, we discover that the effectiveness of current defenses is limited and, more importantly, cannot provide theoretical guarantees as to their robustness against adversarial sampled-based attacks.

Information Retrieval · Malware Detection +3
