Search Results for author: Wenjie Ruan

Found 17 papers, 14 papers with code

Sparse Adversarial Video Attacks with Spatial Transformations

1 code implementation • 10 Nov 2021 • Ronghui Mu, Wenjie Ruan, Leandro Soriano Marcolino, Qiang Ni

In recent years, a significant amount of research effort has concentrated on adversarial attacks on images, while adversarial attacks on videos have seldom been explored.

Adversarial Attack • Bayesian Optimisation +2

Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications

no code implementations • 24 Aug 2021 • Wenjie Ruan, Xinping Yi, Xiaowei Huang

This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques to assess the vulnerability of various types of deep learning models to adversarial examples.

Adversarial Robustness • Learning Theory

Adversarial Driving: Attacking End-to-End Autonomous Driving Systems

2 code implementations • 16 Mar 2021 • Han Wu, Wenjie Ruan

As research on deep neural networks advances, deep convolutional networks have become feasible for automated driving tasks.

Autonomous Driving

Gradient-Guided Dynamic Efficient Adversarial Training

1 code implementation • 4 Mar 2021 • Fu Wang, Yanghao Zhang, Yanbin Zheng, Wenjie Ruan

Adversarial training is arguably an effective but time-consuming way to train robust deep neural networks that can withstand strong adversarial attacks.
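
To make the two-level optimisation concrete, below is a minimal PGD-based adversarial training loop in PyTorch. It sketches the standard (and slow) procedure that the paper sets out to accelerate, not the paper's gradient-guided dynamic schedule itself; the model, the data loader, and the eps/alpha/steps values are illustrative assumptions.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Inner maximisation: projected gradient ascent inside an L-inf ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer):
    # Outer minimisation: update the model on the adversarial examples.
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()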

Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks

1 code implementation • 4 Jan 2021 • Yanghao Zhang, Fu Wang, Wenjie Ruan

Although there are a great number of adversarial attacks on deep-learning-based classifiers, how to attack object detection systems has rarely been studied.

Object Detection

Generalizing Universal Adversarial Attacks Beyond Additive Perturbations

2 code implementations • 15 Oct 2020 • Yanghao Zhang, Wenjie Ruan, Fu Wang, Xiaowei Huang

Extensive experiments are conducted on the CIFAR-10 and ImageNet datasets with six deep neural network models: GoogLeNet, VGG16/19, ResNet101/152, and DenseNet121.

Adversarial Attack

Towards the Quantification of Safety Risks in Deep Neural Networks

1 code implementation • 13 Sep 2020 • Peipei Xu, Wenjie Ruan, Xiaowei Huang

In this paper, we define safety risks by requiring that the network's decision align with human perception.

AdaCare: Explainable Clinical Health Status Representation Learning via Scale-Adaptive Feature Extraction and Recalibration

1 code implementation • 27 Nov 2019 • Liantao Ma, Junyi Gao, Yasha Wang, Chaohe Zhang, Jiangtao Wang, Wenjie Ruan, Wen Tang, Xin Gao, Xinyu Ma

It also models the correlations between clinical features to enhance those that strongly indicate the patient's health status, and can thus maintain state-of-the-art prediction accuracy while providing qualitative interpretability (a generic recalibration sketch follows this entry).

Representation Learning
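
The feature-recalibration idea can be illustrated with a generic squeeze-and-excitation-style module in PyTorch. This is a sketch of the general technique only, not AdaCare's exact architecture; the layer sizes and the reduction ratio are illustrative assumptions.

import torch.nn as nn

class FeatureRecalibration(nn.Module):
    # Learns a per-feature importance gate and re-weights the input with it.
    def __init__(self, n_features, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_features, n_features // reduction),
            nn.ReLU(),
            nn.Linear(n_features // reduction, n_features),
            nn.Sigmoid(),  # per-feature importance weights in (0, 1)
        )

    def forward(self, x):  # x: (batch, n_features)
        return x * self.gate(x)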

ConCare: Personalized Clinical Feature Embedding via Capturing the Healthcare Context

1 code implementation • 27 Nov 2019 • Liantao Ma, Chaohe Zhang, Yasha Wang, Wenjie Ruan, Jiantao Wang, Wen Tang, Xinyu Ma, Xin Gao, Junyi Gao

Predicting a patient's clinical outcome from historical electronic medical records (EMR) is a fundamental research problem in medical informatics.

Coverage Guided Testing for Recurrent Neural Networks

1 code implementation • 5 Nov 2019 • Wei Huang, Youcheng Sun, Xingyu Zhao, James Sharp, Wenjie Ruan, Jie Meng, Xiaowei Huang

The test metrics and the test-case generation algorithm are implemented in a tool, TestRNN, which is then evaluated on a set of LSTM benchmarks.

Defect Detection • Drug Discovery +3

A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees

1 code implementation • 10 Jul 2018 • Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska

In this paper, we study two variants of pointwise robustness: the maximum safe radius problem, which, for a given input sample, computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations; the former is formalised after this entry.

Adversarial Attack • Adversarial Defense +2
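
In symbols, for a classifier $f$ and an input $x$, the maximum safe radius can be written as follows; the notation is assumed for illustration rather than taken verbatim from the paper:

$$\mathrm{MSR}(x) \;=\; \min_{x'\,:\,f(x') \neq f(x)} \|x - x'\|$$

Any perturbation whose norm is strictly smaller than $\mathrm{MSR}(x)$ therefore provably cannot change the classification.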

Concolic Testing for Deep Neural Networks

2 code implementations • 30 Apr 2018 • Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening

Concolic testing combines program execution and symbolic analysis to explore the execution paths of a software program.
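
As a toy illustration of the classical concolic loop (for an ordinary program; the paper adapts the idea to DNNs), the sketch below uses the z3 SMT solver: run the program concretely, record the symbolic constraint of the branch taken, then negate it and solve for an input that drives the other path. The program under test and the fixed iteration count are illustrative assumptions.

from z3 import Int, Solver, Not, sat

x = Int("x")  # symbolic counterpart of the program input

def program(v):
    # Executes the branch concretely and returns its symbolic constraint.
    if v * 2 > 10:
        return x * 2 > 10
    return Not(x * 2 > 10)

concrete = 0  # initial concrete input
for _ in range(2):  # enough to cover both paths of the single branch
    path_constraint = program(concrete)
    print(f"input={concrete} follows path {path_constraint}")
    s = Solver()
    s.add(Not(path_constraint))  # negate to target the unexplored path
    if s.check() == sat:
        concrete = s.model()[x].as_long()  # solver proposes the next input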

Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm

2 code implementations • 16 Apr 2018 • Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska

In this paper we focus on the $L_0$ norm and aim to compute, for a trained DNN and an input, the maximal radius of a safe norm ball around the input within which there are no adversarial examples.
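
Concretely, the $L_0$ distance counts the input dimensions (e.g. pixels) that differ, and the computed quantity is the largest provably safe radius; the notation is assumed for illustration:

$$\|x - x'\|_0 = \bigl|\{\, i : x_i \neq x'_i \,\}\bigr|, \qquad d_{\max}(x) = \max\bigl\{\, d \in \mathbb{N} \;:\; \forall x',\ \|x - x'\|_0 \le d \implies f(x') = f(x) \,\bigr\}$$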
