no code implementations • 29 May 2024 • Zhaoliang Zhang, Tianchen Song, YongJae Lee, Li Yang, Cheng Peng, Rama Chellappa, Deliang Fan
Recently, 3D Gaussian Splatting (3DGS) has become one of the mainstream methodologies for novel view synthesis (NVS) due to its high quality and fast rendering speed.
1 code implementation • 28 May 2024 • YongJae Lee, Zhaoliang Zhang, Deliang Fan
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis.
1 code implementation • 25 Apr 2023 • YongJae Lee, Li Yang, Deliang Fan
Neural radiance field (NeRF) has shown remarkable performance in generating photo-realistic novel views.
no code implementations • 13 Mar 2023 • Jingtao Li, Adnan Siraj Rakin, Xing Chen, Li Yang, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
We show that under practical cases, the proposed ME attacks work exceptionally well for SFL.
no code implementations • 13 Mar 2023 • Li Yang, Sen Lin, Fan Zhang, Junshan Zhang, Deliang Fan
Inspired by the success of Self-supervised learning (SSL) in learning visual representations from unlabeled data, a few recent works have studied SSL in the context of continual learning (CL), where multiple tasks are learned sequentially, giving rise to a new paradigm, namely self-supervised continual learning (SSCL).
1 code implementation • NeurIPS 2022 • Li Yang, Jian Meng, Jae-sun Seo, Deliang Fan
In this work, for the first time, we propose a novel alternating sparse training (AST) scheme to train multiple sparse sub-nets for dynamic inference without extra training cost compared to the case of training a single sparse model from scratch.
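The alternating idea can be illustrated with a toy NumPy sketch (not the paper's actual AST algorithm): multiple sub-nets at different sparsity levels share a single weight tensor, and training steps alternate between their magnitude-based masks, so the sparser sub-net stays nested inside the denser one. The learning rate, mask schedule, and "gradient" here are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=100)

def top_k_mask(w, sparsity):
    """Keep the largest-magnitude (1 - sparsity) fraction of weights."""
    k = round(len(w) * (1 - sparsity))
    idx = np.argsort(np.abs(w))[-k:]
    mask = np.zeros_like(w, dtype=bool)
    mask[idx] = True
    return mask

# Two sub-nets at different sparsity levels share one set of weights;
# training alternates between their masks (one dummy "update" per step).
sparsities = [0.9, 0.5]
for step in range(4):
    mask = top_k_mask(weights, sparsities[step % len(sparsities)])
    grad = rng.normal(size=weights.shape)   # stand-in gradient
    weights[mask] -= 0.01 * grad[mask]      # update only the active weights

dense_mask = top_k_mask(weights, 0.5)
sparse_mask = top_k_mask(weights, 0.9)
# By construction, the sparser sub-net is contained in the denser one.
print(bool(np.all(dense_mask[sparse_mask])))
```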
no code implementations • 1 Nov 2022 • Sen Lin, Li Yang, Deliang Fan, Junshan Zhang
By learning a sequence of tasks continually, an agent in continual learning (CL) can improve the learning performance of both a new task and 'old' tasks by leveraging the forward knowledge transfer and the backward knowledge transfer, respectively.
1 code implementation • CVPR 2022 • Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
While such a scheme helps reduce the computational load at the client end, it opens itself to reconstruction of raw data from intermediate activation by the server.
1 code implementation • ICLR 2022 • Sen Lin, Li Yang, Deliang Fan, Junshan Zhang
To tackle this challenge, we propose Trust Region Gradient Projection (TRGP) for continual learning to facilitate the forward knowledge transfer based on an efficient characterization of task correlation.
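The projection part of this idea can be sketched in a few lines of NumPy. This shows plain gradient projection onto the complement of an old-task subspace (TRGP additionally scales updates inside selected trust regions, which is not reproduced here); the subspace basis is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthonormal basis for a (hypothetical) old-task input subspace.
A = rng.normal(size=(10, 3))
B, _ = np.linalg.qr(A)           # B: 10 x 3, orthonormal columns

g = rng.normal(size=10)          # new-task gradient

# Remove the component of g lying in the old-task subspace, so the
# update leaves old-task behaviour (approximately) unchanged.
g_proj = g - B @ (B.T @ g)

# The projected gradient is orthogonal to every old-task basis vector.
print(bool(np.allclose(B.T @ g_proj, 0.0)))
```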
no code implementations • CVPR 2022 • Jian Meng, Li Yang, Jinwoo Shin, Deliang Fan, Jae-sun Seo
Contrastive learning (or its variants) has recently become a promising direction in the self-supervised learning domain, achieving similar performance as supervised learning with minimum fine-tuning.
no code implementations • CVPR 2022 • Li Yang, Adnan Siraj Rakin, Deliang Fan
To develop memory-efficient on-device transfer learning, in this work, we are the first to approach transfer learning from the new perspective of reprogramming the intermediate features of a pre-trained model (i.e., the backbone).
no code implementations • 8 Nov 2021 • Adnan Siraj Rakin, Md Hafizul Islam Chowdhuryy, Fan Yao, Deliang Fan
Secondly, we propose a novel substitute model training algorithm with Mean Clustering weight penalty, which leverages the partial leaked bit information effectively and generates a substitute prototype of the target victim model.
no code implementations • 3 Oct 2021 • Li Yang, Sen Lin, Junshan Zhang, Deliang Fan
To address this issue, continual learning has been developed to learn new tasks sequentially and perform knowledge transfer from the old tasks to the new ones without forgetting.
no code implementations • 22 Mar 2021 • Adnan Siraj Rakin, Li Yang, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Yu Cao, Jae-sun Seo, Deliang Fan
Apart from recovering the inference accuracy, our RA-BNN after growing also shows significantly higher resistance to BFA.
1 code implementation • 20 Jan 2021 • Jingtao Li, Adnan Siraj Rakin, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
In this work, we propose RADAR, a Run-time adversarial weight Attack Detection and Accuracy Recovery scheme to protect DNN weights against PBFA.
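A minimal sketch of checksum-based detection and recovery, assuming a simple per-group sum as the signature (RADAR's actual signature and grouping differ) and a trusted golden copy for recovery:

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.integers(-128, 128, size=64, dtype=np.int8)

GROUP = 16
def signatures(w):
    """Per-group checksum (here a plain sum; a stand-in for RADAR's signature)."""
    return [int(w[i:i + GROUP].astype(np.int64).sum())
            for i in range(0, len(w), GROUP)]

clean_sig = signatures(weights)
golden = weights.copy()                  # trusted reference copy

# Simulate a bit-flip attack: flip the MSB of one stored weight byte.
attacked = weights.copy()
attacked.view(np.uint8)[5] ^= 0x80

# Detection: compare group checksums; recovery: restore the flagged group.
bad = [g for g, (a, c) in enumerate(zip(signatures(attacked), clean_sig)) if a != c]
for g in bad:
    attacked[g * GROUP:(g + 1) * GROUP] = golden[g * GROUP:(g + 1) * GROUP]

print(bad, bool(np.array_equal(attacked, golden)))
```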
no code implementations • 2 Dec 2020 • Li Yang, Adnan Siraj Rakin, Deliang Fan
We observe that large memory used for activation storage is the bottleneck that largely limits the training time and cost on edge devices.
no code implementations • 25 Nov 2020 • Sen Lin, Li Yang, Zhezhi He, Deliang Fan, Junshan Zhang
In this work, we advocate a holistic approach to jointly train the backbone network and the channel gating which enables dynamical selection of a subset of filters for more efficient local computation given the data input.
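Channel gating can be sketched as follows, where a tiny gating head scores the layer's filters from cheap input statistics and only the top-scoring subset is kept for computation. The gating head's form and the top-k selection rule are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(3)

# A conv layer with 8 filters; the gate picks a per-input subset of them.
filters = rng.normal(size=(8, 3, 3, 3))   # (out_ch, in_ch, k, k)
gate_w = rng.normal(size=(3, 8))          # tiny gating head (hypothetical)

def gated_channels(x, keep=4):
    """Score filters from global input statistics, keep the top `keep`."""
    stats = x.mean(axis=(1, 2))           # per-channel mean, shape (3,)
    scores = stats @ gate_w               # one score per filter
    return np.sort(np.argsort(scores)[-keep:])

x = rng.normal(size=(3, 32, 32))
active = gated_channels(x)                # indices of filters to compute
print(len(active), bool(active.max() < 8))
```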
1 code implementation • 5 Nov 2020 • Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu, Deliang Fan
Specifically, the attacker can aggressively overload the FPGA's shared power distribution system with malicious power-plundering circuits, mounting an adversarial weight duplication (AWD) hardware attack that duplicates certain DNN weight packages during data transmission between off-chip memory and the on-chip buffer, thereby hijacking the DNN function of the victim tenant.
no code implementations • CVPR 2021 • Li Yang, Zhezhi He, Junshan Zhang, Deliang Fan
Thus motivated, we propose a new training method called kernel-wise Soft Mask (KSM), which learns a kernel-wise hybrid binary and real-value soft mask for each task, while using the same backbone model.
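The hybrid mask can be sketched as a per-kernel binary gate multiplied by a real-valued scale, applied on top of frozen backbone weights. The threshold and sigmoid scaling below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(4)

# Frozen backbone conv weights: 8 kernels of shape (3, 3, 3).
backbone = rng.normal(size=(8, 3, 3, 3))

# Per-task learnable score, one per kernel (kernel-wise, not element-wise).
scores = rng.normal(size=8)

def soft_mask(scores, thresh=0.0):
    """Hybrid mask: binary gate (score > thresh) times a real-valued scale."""
    binary = (scores > thresh).astype(float)
    real = 1.0 / (1.0 + np.exp(-scores))   # sigmoid scaling
    return binary * real

m = soft_mask(scores)
task_weights = backbone * m[:, None, None, None]   # mask each kernel

# Kernels with non-positive scores are switched off entirely for this task.
print(bool(np.allclose(task_weights[m == 0], 0.0)))
```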
no code implementations • 11 Sep 2020 • Li Yang, Zhezhi He, Yu Cao, Deliang Fan
Many techniques have been developed, such as model compression, to make Deep Neural Network (DNN) inference more efficient.
2 code implementations • 24 Jul 2020 • Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Deliang Fan
Prior work on BFA focuses on untargeted attacks that can force all inputs into a random output class by flipping a very small number of weight bits stored in computer memory.
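The core primitive is tiny: flipping a single stored bit of a quantized weight can change its value drastically. A minimal illustration on an 8-bit weight (the attack's bit-search procedure is not shown):

```python
import numpy as np

w = np.array([93], dtype=np.int8)   # quantized weight, binary 0101_1101
w.view(np.uint8)[0] ^= 0x80         # flip the most-significant (sign) bit
print(int(w[0]))                    # the weight is now -35
```

A flip of the sign bit moves the weight by 128 quantization steps, which is why a handful of well-chosen flips suffices to destroy or hijack a network.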
no code implementations • 30 Mar 2020 • Fan Yao, Adnan Siraj Rakin, Deliang Fan
Security of machine learning is increasingly becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains.
no code implementations • 27 Nov 2019 • Baogang Zhang, Necati Uysal, Deliang Fan, Rickard Ewetz
In this paper, a technique that aims to produce the correct output for every input vector is proposed; it involves jointly specifying the memristor conductance values and a scaling factor realized by the peripheral circuitry.
3 code implementations • CVPR 2020 • Adnan Siraj Rakin, Zhezhi He, Deliang Fan
However, when the attacker activates the trigger by embedding it with any input, the network is forced to classify all inputs to a certain target class.
no code implementations • 3 Jul 2019 • Xiaolong Ma, Sheng Lin, Shaokai Ye, Zhezhi He, Linfeng Zhang, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma, Yanzhi Wang
Based on the proposed comparison framework, with the same accuracy and quantization, the results show that non-structured pruning is not competitive in terms of both storage and computation efficiency.
no code implementations • 16 Jun 2019 • Yifan Ding, Liqiang Wang, Huan Zhang, Jin-Feng Yi, Deliang Fan, Boqing Gong
As deep neural networks (DNNs) have become increasingly important and popular, the robustness of DNNs is the key to the safety of both the Internet and the physical world.
no code implementations • 30 May 2019 • Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan
In this work, we show that shrinking the model size through proper weight pruning can even be helpful to improve the DNN robustness under adversarial attack.
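The pruning step itself can be sketched with standard magnitude pruning (one common scheme; the paper's exact pruning procedure and the adversarial-training side are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
w = rng.normal(size=1000)   # stand-in weight tensor

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

pruned = magnitude_prune(w, 0.9)   # 90% of weights removed
print(int((pruned != 0).sum()))
```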
no code implementations • 16 Apr 2019 • Arman Roohi, Shaahin Angizi, Deliang Fan, Ronald F. DeMara
Herein, a bit-wise Convolutional Neural Network (CNN) in-memory accelerator is implemented using Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) computational sub-arrays.
1 code implementation • ICCV 2019 • Adnan Siraj Rakin, Zhezhi He, Deliang Fan
Several important security issues of Deep Neural Network (DNN) have been raised recently associated with different applications and components.
1 code implementation • CVPR 2019 • Adnan Siraj Rakin, Zhezhi He, Deliang Fan
Training the network with Gaussian noise is an effective technique to perform model regularization, thus improving model robustness against input variation.
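A minimal sketch of the injection mechanism, assuming a fixed noise scale applied to pre-activations during training only (the paper's method learns the noise intensity, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(6)

def noisy_forward(x, w, sigma=0.1, train=True):
    """Inject Gaussian noise into pre-activations during training only."""
    h = x @ w
    if train:
        h = h + rng.normal(scale=sigma, size=h.shape)
    return np.maximum(h, 0.0)   # ReLU

x = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 8))
train_out = noisy_forward(x, w, train=True)
eval_out = noisy_forward(x, w, train=False)
print(train_out.shape == eval_out.shape,
      bool(np.allclose(train_out, eval_out)))
```

At evaluation time the noise is disabled, so the same input yields a deterministic output.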
no code implementations • CVPR 2019 • Zhezhi He, Deliang Fan
In the past years, deep convolutional neural networks have achieved great success in many artificial intelligence applications.
no code implementations • 20 Jul 2018 • Zhezhi He, Boqing Gong, Deliang Fan
Deep convolutional neural networks have achieved great success in many artificial intelligence applications.
no code implementations • 18 Jul 2018 • Adnan Siraj Rakin, Jin-Feng Yi, Boqing Gong, Deliang Fan
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks.
no code implementations • 8 Feb 2018 • Yifan Ding, Liqiang Wang, Deliang Fan, Boqing Gong
In the first stage, we identify a small portion of images from the noisy training set of which the labels are correct with a high probability.
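One common way to realize such a first stage is small-loss selection: examples whose labels are correct tend to incur smaller loss early in training. The sketch below uses synthetic losses as a stand-in; the paper's actual selection criterion may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-example losses after brief training on noisy labels:
# correctly labelled examples tend to have smaller loss early on.
losses = np.concatenate([rng.uniform(0.0, 0.5, 80),    # clean group
                         rng.uniform(1.0, 3.0, 20)])   # mislabelled group
clean_flag = np.arange(100) < 80

# Stage one: keep the fraction of examples with the smallest loss.
keep = np.argsort(losses)[:60]
precision = clean_flag[keep].mean()   # fraction of kept labels that are clean
print(precision == 1.0)
```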
no code implementations • 5 Feb 2018 • Adnan Siraj Rakin, Zhezhi He, Boqing Gong, Deliang Fan
Blind pre-processing improves the white-box attack accuracy on MNIST from 94.3% to 98.7%.
no code implementations • 8 May 2017 • Zhezhi He, Deliang Fan
In this work, we have proposed a revolutionary neuromorphic computing methodology to implement All-Skyrmion Spiking Neural Network (AS-SNN).