Search Results for author: Deliang Fan

Found 34 papers, 9 papers with code

MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table

1 code implementation • 25 Apr 2023 YongJae Lee, Li Yang, Deliang Fan

Neural radiance field (NeRF) has shown remarkable performance in generating photo-realistic novel views.


Efficient Self-supervised Continual Learning with Progressive Task-correlated Layer Freezing

no code implementations • 13 Mar 2023 Li Yang, Sen Lin, Fan Zhang, Junshan Zhang, Deliang Fan

Inspired by the success of Self-supervised learning (SSL) in learning visual representations from unlabeled data, a few recent works have studied SSL in the context of continual learning (CL), where multiple tasks are learned sequentially, giving rise to a new paradigm, namely self-supervised continual learning (SSCL).

Continual Learning • Self-Supervised Learning

Get More at Once: Alternating Sparse Training with Gradient Correction

1 code implementation • NeurIPS 2022 Li Yang, Jian Meng, Jae-sun Seo, Deliang Fan

In this work, for the first time, we propose a novel alternating sparse training (AST) scheme to train multiple sparse sub-nets for dynamic inference without extra training cost compared to the case of training a single sparse model from scratch.
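
The AST snippet above describes training several sparse sub-nets inside one shared weight tensor by alternating which sparsity mask is active. A minimal NumPy sketch of that alternation, using simple magnitude-based masks; the paper's gradient-correction term is omitted, and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))          # one shared weight tensor

def topk_mask(W, density):
    """Binary mask keeping the largest-magnitude weights at `density`."""
    k = int(W.size * density)
    thresh = np.sort(np.abs(W), axis=None)[-k]
    return (np.abs(W) >= thresh).astype(W.dtype)

# Several sparsity levels -> several sub-nets for dynamic inference.
densities = [0.25, 0.5, 1.0]

for step in range(6):
    d = densities[step % len(densities)]   # alternate sub-nets each step
    mask = topk_mask(W, d)
    grad = rng.standard_normal(W.shape)    # placeholder gradient
    W -= 0.01 * grad * mask                # only the active sub-net updates
```

Because every sub-net shares the same tensor, no extra weights are stored; each density level simply masks the tensor differently at inference time.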

Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer

no code implementations • 1 Nov 2022 Sen Lin, Li Yang, Deliang Fan, Junshan Zhang

By learning a sequence of tasks continually, an agent in continual learning (CL) can improve its performance on both a new task and 'old' tasks by leveraging forward knowledge transfer and backward knowledge transfer, respectively.

Continual Learning • Transfer Learning

ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning

1 code implementation • CVPR 2022 Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti

While such a scheme helps reduce the computational load at the client end, it opens itself to reconstruction of raw data from intermediate activation by the server.

Federated Learning

TRGP: Trust Region Gradient Projection for Continual Learning

1 code implementation • ICLR 2022 Sen Lin, Li Yang, Deliang Fan, Junshan Zhang

To tackle this challenge, we propose Trust Region Gradient Projection (TRGP) for continual learning to facilitate the forward knowledge transfer based on an efficient characterization of task correlation.

Continual Learning • Transfer Learning
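
The TRGP snippet mentions facilitating forward transfer through gradient projection. A rough sketch of the underlying idea, assuming an orthonormal basis for an old task's input subspace; the fixed `scale` here is a stand-in for the trust-region component the paper learns per layer and task:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal basis (columns) of an old task's subspace: d=16, rank 4.
Q, _ = np.linalg.qr(rng.standard_normal((16, 4)))

g = rng.standard_normal(16)            # new-task gradient

# Orthogonal projection removes the component lying in the old subspace,
# so the update does not interfere with the previous task (GPM-style).
g_orth = g - Q @ (Q.T @ g)

# TRGP additionally re-admits a scaled component inside the subspace of a
# strongly correlated old task, enabling forward knowledge transfer.
scale = 0.5                            # illustrative; learned in the paper
g_trgp = g_orth + scale * (Q @ (Q.T @ g))

print(np.allclose(Q.T @ g_orth, 0))    # True: no interfering component
```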

Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning

no code implementations • CVPR 2022 Jian Meng, Li Yang, Jinwoo Shin, Deliang Fan, Jae-sun Seo

Contrastive learning (and its variants) has recently become a promising direction in the self-supervised learning domain, achieving performance similar to supervised learning with minimal fine-tuning.

Contrastive Learning • Self-Supervised Learning

Rep-Net: Efficient On-Device Learning via Feature Reprogramming

no code implementations • CVPR 2022 Li Yang, Adnan Siraj Rakin, Deliang Fan

To develop memory-efficient on-device transfer learning, in this work we are the first to approach transfer learning from the new perspective of reprogramming the intermediate features of a pre-trained model (i.e., the backbone).

Transfer Learning

DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories

no code implementations • 8 Nov 2021 Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan

Secondly, we propose a novel substitute model training algorithm with a Mean Clustering weight penalty, which leverages the partially leaked bit information effectively and generates a substitute prototype of the target victim model.

Model extraction

GROWN: GRow Only When Necessary for Continual Learning

no code implementations • 3 Oct 2021 Li Yang, Sen Lin, Junshan Zhang, Deliang Fan

To address this issue, continual learning has been developed to learn new tasks sequentially and perform knowledge transfer from the old tasks to the new ones without forgetting.

Continual Learning • Transfer Learning

RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery

1 code implementation • 20 Jan 2021 Jingtao Li, Adnan Siraj Rakin, Zhezhi He, Deliang Fan, Chaitali Chakrabarti

In this work, we propose RADAR, a Run-time adversarial weight Attack Detection and Accuracy Recovery scheme to protect DNN weights against PBFA.

$DA^3$: Dynamic Additive Attention Adaption for Memory-Efficient On-Device Multi-Domain Learning

no code implementations • 2 Dec 2020 Li Yang, Adnan Siraj Rakin, Deliang Fan

We observe that large memory used for activation storage is the bottleneck that largely limits the training time and cost on edge devices.

Deep Attention • Domain Adaptation

MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning

no code implementations • 25 Nov 2020 Sen Lin, Li Yang, Zhezhi He, Deliang Fan, Junshan Zhang

In this work, we advocate a holistic approach that jointly trains the backbone network and the channel gating, which enables dynamic selection of a subset of filters for more efficient local computation given the data input.

Meta-Learning • Quantization

Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA

no code implementations • 5 Nov 2020 Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu, Deliang Fan

Specifically, she can aggressively overload the shared power distribution system of FPGA with malicious power-plundering circuits, achieving adversarial weight duplication (AWD) hardware attack that duplicates certain DNN weight packages during data transmission between off-chip memory and on-chip buffer, to hijack the DNN function of the victim tenant.

Adversarial Attack • Cloud Computing +3

KSM: Fast Multiple Task Adaption via Kernel-wise Soft Mask Learning

no code implementations • CVPR 2021 Li Yang, Zhezhi He, Junshan Zhang, Deliang Fan

Thus motivated, we propose a new training method called kernel-wise Soft Mask (KSM), which learns a kernel-wise hybrid binary and real-valued soft mask for each task, while using the same backbone model.

Continual Learning
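
The KSM snippet describes a kernel-wise hybrid binary/real-valued soft mask over a shared backbone. A toy sketch of what such a mask might look like; the exact parameterization below is an assumption for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen backbone conv weights: (out_channels, in_channels, k, k).
W = rng.standard_normal((8, 4, 3, 3))

# One trainable score per kernel (out_ch x in_ch), learned per task.
scores = rng.standard_normal((8, 4))

def kernel_wise_soft_mask(scores, threshold=0.0):
    """Hybrid mask: a hard binary part plus a small real-valued part.

    The binary part selects kernels (straight-through during training);
    the real-valued residual softly rescales the selected kernels.
    """
    binary = (scores > threshold).astype(np.float64)   # 0/1 per kernel
    soft = 1.0 / (1.0 + np.exp(-scores))               # sigmoid in (0, 1)
    return binary + 0.1 * (soft - 0.5)                 # hybrid mask

mask = kernel_wise_soft_mask(scores)                   # shape (8, 4)
# Broadcast the per-kernel mask over each 3x3 kernel's spatial dims.
W_task = W * mask[:, :, None, None]
print(W_task.shape)  # (2D mask applied to the 4D weight tensor)
```

Storing only the per-task masks (and scores) is far cheaper than storing a full copy of the backbone per task.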

A Progressive Sub-Network Searching Framework for Dynamic Inference

no code implementations • 11 Sep 2020 Li Yang, Zhezhi He, Yu Cao, Deliang Fan

Many techniques, such as model compression, have been developed to make Deep Neural Network (DNN) inference more efficient.

Model Compression

T-BFA: Targeted Bit-Flip Adversarial Weight Attack

2 code implementations • 24 Jul 2020 Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Deliang Fan

Prior work on BFA focuses on un-targeted attacks that can hack all inputs into a random output class by flipping a very small number of weight bits stored in computer memory.

Adversarial Attack • Image Classification
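
Bit-flip attacks like T-BFA perturb a model by toggling individual bits of weights as stored in memory. A small pure-Python illustration of why a single flipped bit matters, assuming 8-bit two's-complement quantized weights:

```python
def flip_bit(q_weight: int, bit: int) -> int:
    """Flip one bit of an 8-bit two's-complement quantized weight."""
    u = q_weight & 0xFF                  # raw byte view of the signed value
    u ^= 1 << bit                        # XOR toggles the chosen bit
    return u - 256 if u >= 128 else u    # back to signed range [-128, 127]

print(flip_bit(3, 6))   # 67   (high magnitude bit: 3 -> 3 + 64)
print(flip_bit(3, 7))   # -125 (sign bit: a large semantic change)
```

Flipping a high-order or sign bit changes a weight's value drastically, which is why a handful of carefully searched flips can redirect a network's predictions.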

DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips

no code implementations • 30 Mar 2020 Fan Yao, Adnan Siraj Rakin, Deliang Fan

Security of machine learning is increasingly becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains.

Representable Matrices: Enabling High Accuracy Analog Computation for Inference of DNNs using Memristors

no code implementations • 27 Nov 2019 Baogang Zhang, Necati Uysal, Deliang Fan, Rickard Ewetz

In this paper, a technique that aims to produce the correct output for every input vector is proposed, which involves specifying the memristor conductance values and a scaling factor realized by the peripheral circuitry.

TBT: Targeted Neural Network Attack with Bit Trojan

3 code implementations • CVPR 2020 Adnan Siraj Rakin, Zhezhi He, Deliang Fan

However, when the attacker activates the trigger by embedding it with any input, the network is forced to classify all inputs to a certain target class.

Non-Structured DNN Weight Pruning -- Is It Beneficial in Any Platform?

no code implementations • 3 Jul 2019 Xiaolong Ma, Sheng Lin, Shaokai Ye, Zhezhi He, Linfeng Zhang, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma, Yanzhi Wang

Based on the proposed comparison framework, with the same accuracy and quantization, the results show that non-structured pruning is not competitive in terms of either storage or computation efficiency.

Model Compression • Quantization

Defending Against Adversarial Attacks Using Random Forests

no code implementations • 16 Jun 2019 Yifan Ding, Liqiang Wang, Huan Zhang, Jin-Feng Yi, Deliang Fan, Boqing Gong

As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world.

Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness

no code implementations • 30 May 2019 Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan

In this work, we show that shrinking the model size through proper weight pruning can even be helpful to improve the DNN robustness under adversarial attack.

Adversarial Attack

Processing-In-Memory Acceleration of Convolutional Neural Networks for Energy-Efficiency, and Power-Intermittency Resilience

no code implementations • 16 Apr 2019 Arman Roohi, Shaahin Angizi, Deliang Fan, Ronald F. DeMara

Herein, a bit-wise Convolutional Neural Network (CNN) in-memory accelerator is implemented using Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) computational sub-arrays.

Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search

1 code implementation • ICCV 2019 Adnan Siraj Rakin, Zhezhi He, Deliang Fan

Several important security issues of Deep Neural Network (DNN) have been raised recently associated with different applications and components.

Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack

1 code implementation • CVPR 2019 Adnan Siraj Rakin, Zhezhi He, Deliang Fan

Training the network with Gaussian noise is an effective technique to perform model regularization, thus improving model robustness against input variation.

Adversarial Attack • Adversarial Defense +1
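
Parametric Noise Injection makes the noise magnitude itself trainable rather than fixed. A minimal NumPy sketch of such a noisy forward pass; the value of `alpha` and the adversarial training loop that learns it are omitted here, so treat the specifics as assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def pni_forward(x, W, alpha, rng):
    """Inject Gaussian noise into weights, scaled by a trainable alpha.

    The noise std follows the weights' own std, keeping alpha dimensionless.
    """
    noise = rng.standard_normal(W.shape) * W.std()
    return x @ (W + alpha * noise)

x = rng.standard_normal((2, 16))
W = rng.standard_normal((16, 10))
alpha = 0.25   # in the paper, learned jointly with W during training
y = pni_forward(x, W, alpha, rng)
print(y.shape)  # batch of 2 outputs over 10 classes
```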

Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation

no code implementations • CVPR 2019 Zhezhi He, Deliang Fan

In the past years, deep convolutional neural networks have achieved great success in many artificial intelligence applications.

A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels

no code implementations • 8 Feb 2018 Yifan Ding, Liqiang Wang, Deliang Fan, Boqing Gong

In the first stage, we identify a small portion of images from the noisy training set of which the labels are correct with a high probability.

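
The two-stage snippet above hinges on identifying a likely-clean subset of a noisy training set. The paper's exact selection rule is not shown in this excerpt; the small-loss criterion below is one standard heuristic for this step, given purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-sample training losses after a few warm-up epochs (toy values);
# mislabeled samples tend to incur higher loss early in training.
losses = rng.exponential(1.0, size=100)

def select_likely_clean(losses, keep_ratio=0.2):
    """Pick the lowest-loss samples as the likely-clean subset."""
    k = int(len(losses) * keep_ratio)
    return np.argsort(losses)[:k]

clean_idx = select_likely_clean(losses)
print(len(clean_idx))  # 20% of 100 samples
```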

Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples

no code implementations • 5 Feb 2018 Adnan Siraj Rakin, Zhezhi He, Boqing Gong, Deliang Fan

Blind pre-processing improves the accuracy of MNIST under white-box attack from 94.3% to 98.7%.

Adversarial Attack

Developing All-Skyrmion Spiking Neural Network

no code implementations • 8 May 2017 Zhezhi He, Deliang Fan

In this work, we have proposed a revolutionary neuromorphic computing methodology to implement All-Skyrmion Spiking Neural Network (AS-SNN).

Handwritten Digit Recognition
