Search Results for author: Shangyu Chen

Found 7 papers, 2 papers with code

Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning

no code implementations • NeurIPS 2020 • Jianda Chen, Shangyu Chen, Sinno Jialin Pan

In this paper, we propose a deep reinforcement learning (DRL) based framework to efficiently perform runtime channel pruning on convolutional neural networks (CNNs).
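The DRL policy itself is beyond a short sketch, but the runtime mechanism it controls — zeroing out selected channels of a convolutional feature map per input — can be illustrated in a few lines. This is a hypothetical illustration, not code from the paper; `apply_channel_gates` and its signature are assumptions.

```python
import numpy as np

def apply_channel_gates(feature_maps, gates):
    """Runtime channel pruning: zero out channels selected for removal.

    feature_maps: array of shape (C, H, W) from one conv layer.
    gates: binary vector of length C chosen per input (in the paper's
    setting by a learned DRL agent; here it is simply given).
    """
    return feature_maps * gates[:, None, None]

# Example: keep channels 0 and 2, prune channel 1 for this input.
fm = np.ones((3, 4, 4))
pruned = apply_channel_gates(fm, np.array([1.0, 0.0, 1.0]))
```

Because pruned channels are multiplied by zero rather than removed, the same network weights serve every input while the per-input gate pattern varies.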

Reinforcement Learning (RL)

Defending Adversarial Attacks via Semantic Feature Manipulation

no code implementations • 3 Feb 2020 • Shuo Wang, Tianle Chen, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen

In this paper, we propose a one-off and attack-agnostic Feature Manipulation (FM)-Defense to detect and purify adversarial examples in an interpretable and efficient manner.

General Classification

OIAD: One-for-all Image Anomaly Detection with Disentanglement Learning

no code implementations • 18 Jan 2020 • Shuo Wang, Tianle Chen, Shangyu Chen, Carsten Rudolph, Surya Nepal, Marthie Grobler

Our key insight is that the impact of small perturbation on the latent representation can be bounded for normal samples while anomaly images are usually outside such bounded intervals, referred to as structure consistency.
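The decision rule this insight suggests can be sketched with a toy encoder standing in for the disentangled model: measure how far the latent code moves under small input perturbations, and flag inputs whose shift exceeds a bound fitted on normal data. All names here (`latent_shift`, `is_anomaly`) are hypothetical, and the real method operates on a learned disentangled representation rather than this stand-in.

```python
import numpy as np

def latent_shift(encode, x, eps=1e-2, n_probes=8, seed=0):
    """Average latent-space displacement under small random input perturbations."""
    rng = np.random.default_rng(seed)
    z = encode(x)
    shifts = [np.linalg.norm(encode(x + eps * rng.standard_normal(x.shape)) - z)
              for _ in range(n_probes)]
    return float(np.mean(shifts))

def is_anomaly(encode, x, bound):
    """Flag x when its latent shift exceeds the bound estimated on normal samples."""
    return latent_shift(encode, x) > bound
```

For normal samples the shift stays inside the bound; anomalous inputs, whose representations are less stable, fall outside it.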

Anomaly Detection • Disentanglement

Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models

no code implementations • 10 Jan 2020 • Shuo Wang, Surya Nepal, Carsten Rudolph, Marthie Grobler, Shangyu Chen, Tianle Chen

In this paper, we demonstrate a backdoor threat to transfer learning tasks on both image and time-series data leveraging the knowledge of publicly accessible Teacher models, aimed at defeating three commonly-adopted defenses: pruning-based, retraining-based and input pre-processing-based defenses.

Electrocardiography (ECG) • Electroencephalogram (EEG) +3

MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization

1 code implementation • NeurIPS 2019 • Shangyu Chen, Wenya Wang, Sinno Jialin Pan

However, these methods only heuristically make training-based quantization applicable, without further analysis of how the approximated gradients can assist training of a quantized network.
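For context, the heuristic being criticized is typified by the straight-through estimator (STE): the forward pass applies a non-differentiable quantizer, and the backward pass simply passes gradients through where the weight is in range. This is a minimal generic sketch of that baseline, not MetaQuant itself, which instead learns the backward mapping with a meta network.

```python
import numpy as np

def quantize_ste_forward(w):
    """Binary quantization (sign) -- the non-differentiable forward step."""
    return np.sign(w)

def quantize_ste_backward(grad_q, w, clip=1.0):
    """Straight-through estimator: copy the gradient of the quantized
    weight back to the full-precision weight, but only where |w| <= clip."""
    return grad_q * (np.abs(w) <= clip)

w = np.array([0.3, -1.5, 0.02, 0.9])
q = quantize_ste_forward(w)
g = quantize_ste_backward(np.ones(4), w)
```

The clipping mask is a hand-designed surrogate for the true (zero almost everywhere) gradient of `sign`, which is exactly the kind of unanalyzed approximation the abstract refers to.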

Quantization

Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon

2 code implementations • NeurIPS 2017 • Xin Dong, Shangyu Chen, Sinno Jialin Pan

How to develop slim and accurate deep neural networks has become crucial for real-world applications, especially for those employed in embedded systems.
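The classic Optimal Brain Surgeon step that this paper applies layer by layer can be sketched directly: pick the weight with the smallest saliency w_q^2 / (2 [H^-1]_qq), zero it, and compensate the remaining weights to minimize the increase in layer-wise error. The helper below is an illustrative sketch assuming the inverse Hessian is already available; `obs_prune_one_weight` is not from the paper's released code.

```python
import numpy as np

def obs_prune_one_weight(w, H_inv):
    """One Optimal Brain Surgeon step on a weight vector w.

    H_inv: inverse Hessian of the layer-wise error w.r.t. w.
    Returns the compensated weights and the index of the pruned weight.
    """
    diag = np.diag(H_inv)
    saliency = w ** 2 / (2.0 * diag)          # error increase if weight q is pruned
    q = int(np.argmin(saliency))              # cheapest weight to remove
    # Compensating update that minimizes the resulting error increase.
    w_new = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
    w_new[q] = 0.0
    return w_new, q
```

With an identity inverse Hessian the update reduces to magnitude pruning; correlated weights (off-diagonal terms) are what make the compensation non-trivial.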
