no code implementations • 16 Mar 2023 • Mengxin Zheng, Jiaqi Xue, Xun Chen, Lei Jiang, Qian Lou
By using a pre-trained SSL image encoder and training a downstream classifier on top of it, impressive performance can be achieved on various tasks with very little labeled data.
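A minimal sketch of that setup (a frozen backbone standing in for the pre-trained SSL encoder; the ResNet-50 stand-in, feature size, and class count are illustrative assumptions, not details from the paper):

```python
import torch
import torch.nn as nn
import torchvision

# Stand-in for a pre-trained SSL encoder (assumption: in practice, load
# SimCLR/MoCo-style SSL weights rather than initializing randomly).
encoder = torchvision.models.resnet50(weights=None)
encoder.fc = nn.Identity()                  # expose 2048-d features
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False                 # the SSL encoder stays frozen

classifier = nn.Linear(2048, 10)            # small downstream head
opt = torch.optim.SGD(classifier.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One downstream update using only a little labeled data."""
    with torch.no_grad():
        feats = encoder(images)             # frozen SSL representations
    loss = loss_fn(classifier(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```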
1 code implementation • 3 Mar 2023 • Yepeng Liu, Bo Feng, Qian Lou
This paper proposes a solution called TrojText, which aims to determine whether invisible textual Trojan attacks can be performed more efficiently and cost-effectively without training data.
no code implementations • 20 Nov 2022 • Jiaqi Xue, Qian Lou
Emerging self-supervised learning (SSL) has become a popular image representation encoding method to obviate the reliance on labeled data and learn rich representations from large-scale, ubiquitous unlabeled data.
no code implementations • 2 Nov 2022 • Ting Hua, Yen-Chang Hsu, Felicity Wang, Qian Lou, Yilin Shen, Hongxia Jin
However, standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.
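To make the contrast concrete, here is a small NumPy sketch: plain truncated SVD minimizes an unweighted reconstruction error, whereas an importance-aware variant (the row-scaling trick below is an illustrative assumption, not the paper's exact algorithm) lets more important parameters be reconstructed more faithfully.

```python
import numpy as np

def truncated_svd(W, rank):
    # Standard low-rank factorization: every entry of W is weighted equally.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank, :]          # W ~= A @ B

def importance_weighted_svd(W, row_importance, rank):
    # Illustrative importance-aware variant: scale rows by importance before
    # factorizing, then undo the scaling, so high-importance rows incur
    # smaller reconstruction error.
    d = np.sqrt(row_importance)[:, None]
    A, B = truncated_svd(d * W, rank)
    return A / d, B

W = np.random.randn(256, 256)
imp = np.random.rand(256) + 1e-3            # e.g. Fisher-information estimates
A, B = importance_weighted_svd(W, imp, rank=32)
print(np.linalg.norm(W - A @ B))            # reconstruction error
```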
no code implementations • 20 Sep 2022 • Jiaqi Xue, Lei Xu, Lin Chen, Weidong Shi, Kaidi Xu, Qian Lou
(ii) How to design a robust PNet given the encrypted input without decryption?
no code implementations • 27 Aug 2022 • Mengxin Zheng, Qian Lou, Lei Jiang
The success of ViTs motivates adversaries to perform backdoor attacks on ViTs.
no code implementations • ICLR 2022 • Yen-Chang Hsu, Ting Hua, SungEn Chang, Qian Lou, Yilin Shen, Hongxia Jin
In other words, the optimization objective of SVD is not aligned with the trained model's task accuracy.
no code implementations • CVPR 2022 • Qian Lou, Yen-Chang Hsu, Burak Uzkent, Ting Hua, Yilin Shen, Hongxia Jin
The key primitive is a proposed Dictionary-Lookup Transformation (DLT) that replaces the Linear Transformation (LT) in multi-modal detectors, where each weight of the LT is approximately factorized into a smaller dictionary, index, and coefficient.
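A minimal NumPy sketch of the dictionary-lookup idea (the shapes and reconstruction rule below are illustrative assumptions rather than the exact DLT formulation): each output row of the weight matrix is rebuilt as a small weighted sum of shared dictionary atoms selected by indices, so only the dictionary, indices, and coefficients need to be stored.

```python
import numpy as np

def dlt_matvec(x, dictionary, indices, coeffs):
    """Approximate y = W @ x, where each row of W is a combination of a few
    shared dictionary atoms (illustrative DLT-style reconstruction).

    dictionary: (num_atoms, in_dim)  shared atoms
    indices:    (out_dim, t)         which atoms each output row uses
    coeffs:     (out_dim, t)         per-row mixing coefficients
    """
    atom_outputs = dictionary @ x                        # (num_atoms,)
    return (coeffs * atom_outputs[indices]).sum(axis=1)  # (out_dim,)

rng = np.random.default_rng(0)
in_dim, out_dim, num_atoms, t = 512, 512, 64, 4
D = rng.standard_normal((num_atoms, in_dim))
idx = rng.integers(0, num_atoms, size=(out_dim, t))
c = rng.standard_normal((out_dim, t))
y = dlt_matvec(rng.standard_normal(in_dim), D, idx, c)

# Storage: out_dim * in_dim floats for W versus
# num_atoms * in_dim + 2 * out_dim * t values here.
```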
no code implementations • 30 Dec 2021 • Changsheng Zhao, Ting Hua, Yilin Shen, Qian Lou, Hongxia Jin
Knowledge distillation, weight pruning, and quantization are known to be the main directions in model compression.
no code implementations • ICLR 2022 • Qian Lou, Ting Hua, Yen-Chang Hsu, Yilin Shen, Hongxia Jin
DictFormer significantly reduces the redundancy in the transformer's parameters by replacing the prior transformer's parameters with a compact shared dictionary, a few unshared coefficients, and indices.
no code implementations • 31 May 2021 • Qian Lou, Lei Jiang
Recently, Homomorphic Encryption (HE) has been used to implement Privacy-Preserving Neural Networks (PPNNs) that perform inference directly on encrypted data without decryption.
no code implementations • 6 Apr 2021 • Zhenhua Chen, Xiwen Li, Qian Lou, David Crandall
How to improve the efficiency of routing procedures in CapsNets has been studied extensively.
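For context, here is the baseline iterative routing those efficiency studies target (the standard dynamic-routing procedure of Sabour et al., sketched in NumPy; it is not this paper's proposed method): every routing iteration recomputes coupling coefficients and agreements over all capsule pairs, which is the main source of overhead.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, eps=1e-9):
    norm2 = (s ** 2).sum(axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: predictions from lower capsules, shape (num_in, num_out, dim_out)
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                   # routing logits
    for _ in range(num_iters):
        c = softmax(b, axis=1)                        # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)        # weighted sum over inputs
        v = squash(s)                                 # output capsules
        b = b + (u_hat * v[None]).sum(axis=-1)        # agreement update
    return v

v = dynamic_routing(np.random.randn(1152, 10, 16))
```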
no code implementations • ICLR 2021 • Qian Lou, Yilin Shen, Hongxia Jin, Lei Jiang
A cryptographic neural network inference service is an efficient way to allow two parties to execute neural network inference without revealing either party’s data or model.
no code implementations • NeurIPS 2020 • Qian Lou, Wen-jie Lu, Cheng Hong, Lei Jiang
We observed that HENNs have to pay significant computing overhead on rotations, and each rotation is $\sim 10\times$ more expensive than a homomorphic multiplication between a ciphertext and a plaintext.
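A back-of-the-envelope cost model makes the point (the operation counts below are illustrative assumptions; only the roughly 10x rotation-to-multiplication ratio comes from the excerpt): because rotations dominate, reducing the rotation count yields most of the speedup.

```python
# Illustrative HE layer cost model: a rotation costs ~10x a homomorphic
# ciphertext-plaintext multiplication (ratio from the excerpt; counts invented).
MULT_COST = 1.0
ROT_COST = 10.0 * MULT_COST

def layer_cost(num_mults, num_rots):
    return num_mults * MULT_COST + num_rots * ROT_COST

rotation_heavy = layer_cost(num_mults=4096, num_rots=2048)   # naive packing
rotation_light = layer_cost(num_mults=4096, num_rots=256)    # fewer rotations
print(rotation_heavy / rotation_light)                       # ~3.7x faster
```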
no code implementations • EMNLP 2021 • Bo Feng, Qian Lou, Lei Jiang, Geoffrey C. Fox
Although prior secure networks combine homomorphic encryption (HE) and garbled circuits (GC) to preserve users' privacy, naively adopting the HE and GC hybrid technique to implement RNNs results in long inference latency due to slow activation functions.
no code implementations • 4 Aug 2020 • Qian Lou, Sarath Janga, Lei Jiang
From an architecture perspective, we propose a low-power SOT-MRAM-based ADC array to process analog-to-digital conversion operations and improve the power efficiency of prior DNN PIMs.
no code implementations • NeurIPS 2020 • Qian Lou, Song Bian, Lei Jiang
Prior HPPNNs over-pessimistically select huge HE parameters to maintain large noise budgets, since they use the same set of HE parameters for an entire network and ignore the error tolerance capability of a network.
no code implementations • NeurIPS 2020 • Qian Lou, Bo Feng, Geoffrey C. Fox, Lei Jiang
Big data is one of the cornerstones of enabling and training deep neural networks (DNNs).
1 code implementation • NeurIPS 2019 • Qian Lou, Lei Jiang
Since the LTFHE ReLU activations, max poolings, shifts, and accumulations have a small multiplicative depth overhead, SHE can implement much deeper network architectures with more convolutional and activation layers.
no code implementations • ICLR 2020 • Qian Lou, Feng Guo, Lantao Liu, Minje Kim, Lei Jiang
Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy.
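A small NumPy sketch of the per-kernel scheme the excerpt describes (bit-width, tensor shapes, and the symmetric scheme are illustrative choices): giving each kernel its own scale keeps quantization error low even when kernels have very different variances.

```python
import numpy as np

def quantize_symmetric(w, num_bits=8):
    # Symmetric uniform quantization with a single scale for the whole tensor.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(w).max() / qmax + 1e-12
    return np.clip(np.round(w / scale), -qmax, qmax) * scale  # dequantized

def quantize_per_kernel(weights, num_bits=8):
    # weights: (out_channels, in_channels, k, k); one scale per kernel, so
    # kernels with different variances each keep their own dynamic range.
    return np.stack([quantize_symmetric(k, num_bits) for k in weights])

W = np.random.randn(64, 32, 3, 3) * np.random.rand(64)[:, None, None, None]
per_layer = quantize_symmetric(W, num_bits=4)
per_kernel = quantize_per_kernel(W, num_bits=4)
print(np.abs(W - per_layer).mean(), np.abs(W - per_kernel).mean())
```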