Search Results for author: Qian Lou

Found 22 papers, 3 papers with code

AutoQ: Automated Kernel-Wise Neural Network Quantization

no code implementations ICLR 2020 Qian Lou, Feng Guo, Lantao Liu, Minje Kim, Lei Jiang

Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy.

AutoML Quantization
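The kernel-wise idea above can be sketched as uniform fake-quantization applied per weight kernel, each with its own bitwidth. The helper below and the bitwidth assignment are illustrative assumptions, not AutoQ's actual learned controller:

```python
import numpy as np

def quantize_kernel(kernel, bits):
    """Uniformly fake-quantize one weight kernel to the given bitwidth."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(kernel).max() / qmax
    q = np.clip(np.round(kernel / scale), -qmax - 1, qmax)
    return q * scale  # dequantized ("fake-quantized") weights

# Toy layer: 4 output kernels of shape 3x3x3, with a hypothetical
# per-kernel bitwidth assignment of the kind AutoQ searches for.
rng = np.random.default_rng(0)
layer = rng.normal(size=(4, 3, 3, 3))
bitwidths = [8, 4, 6, 2]  # illustrative choices only

quantized = np.stack(
    [quantize_kernel(k, b) for k, b in zip(layer, bitwidths)]
)
errors = [float(np.abs(layer[i] - quantized[i]).mean()) for i in range(4)]
```

Because each kernel has a different variance, assigning more bits to harder kernels (here, kernel 0) yields a lower per-kernel reconstruction error than a uniform low-bit choice would.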

SHE: A Fast and Accurate Deep Neural Network for Encrypted Data

1 code implementation NeurIPS 2019 Qian Lou, Lei Jiang

Since LTFHE ReLU activations, max poolings, shifts, and accumulations have small multiplicative-depth overhead, SHE can implement much deeper network architectures with more convolutional and activation layers.

Quantization

AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference

no code implementations NeurIPS 2020 Qian Lou, Song Bian, Lei Jiang

Prior HPPNNs over-pessimistically select huge HE parameters to maintain large noise budgets, since they use the same set of HE parameters for an entire network and ignore the error tolerance capability of a network.

Privacy Preserving

Helix: Algorithm/Architecture Co-design for Accelerating Nanopore Genome Base-calling

no code implementations 4 Aug 2020 Qian Lou, Sarath Janga, Lei Jiang

From the architecture perspective, we propose a low-power SOT-MRAM-based ADC array to process analog-to-digital conversion operations and improve the power efficiency of prior DNN PIMs.

CryptoGRU: Low Latency Privacy-Preserving Text Analysis With GRU

no code implementations EMNLP 2021 Bo Feng, Qian Lou, Lei Jiang, Geoffrey C. Fox

Although prior secure networks combine homomorphic encryption (HE) and garbled circuit (GC) to preserve users' privacy, naively adopting the HE and GC hybrid technique to implement RNNs suffers from long inference latency due to slow activation functions.

Privacy Preserving

Falcon: Fast Spectral Inference on Encrypted Data

no code implementations NeurIPS 2020 Qian Lou, Wen-jie Lu, Cheng Hong, Lei Jiang

We observed that HENNs have to pay significant computing overhead on rotations, each of which is $\sim 10\times$ more expensive than a homomorphic multiplication between a ciphertext and a plaintext.

SAFENet: A Secure, Accurate and Fast Neural Network Inference

no code implementations ICLR 2021 Qian Lou, Yilin Shen, Hongxia Jin, Lei Jiang

A cryptographic neural network inference service is an efficient way to allow two parties to execute neural network inference without revealing either party’s data or model.

How to Accelerate Capsule Convolutions in Capsule Networks

no code implementations 6 Apr 2021 Zhenhua Chen, Xiwen Li, Qian Lou, David Crandall

How to improve the efficiency of routing procedures in CapsNets has been studied extensively.

HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture

no code implementations 31 May 2021 Qian Lou, Lei Jiang

Recently, Homomorphic Encryption (HE) has been used to implement Privacy-Preserving Neural Networks (PPNNs) that perform inference directly on encrypted data without decryption.

Privacy Preserving

DictFormer: Tiny Transformer with Shared Dictionary

no code implementations ICLR 2022 Qian Lou, Ting Hua, Yen-Chang Hsu, Yilin Shen, Hongxia Jin

DictFormer significantly reduces the redundancy in the transformer's parameters by replacing them with a compact shared dictionary, a few unshared coefficients, and indices.

Abstractive Text Summarization Language Modelling +2
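The dictionary factorization can be sketched as below: a shared dictionary of atoms plus per-layer indices and coefficients reconstruct an approximate dense weight matrix with far fewer parameters. The sizes and reconstruction loop are illustrative assumptions, not DictFormer's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

d_model, d_ff = 8, 16
dict_size, n_coef = 4, 2  # dictionary atoms; coefficients per output column

# Shared dictionary (reused across layers) plus unshared per-layer
# indices and coefficients.
dictionary = rng.normal(size=(d_model, dict_size))
indices = rng.integers(0, dict_size, size=(n_coef, d_ff))
coeffs = rng.normal(size=(n_coef, d_ff))

# Reconstruct an approximate d_model x d_ff weight matrix: each column
# is a sparse linear combination of dictionary atoms.
W = np.zeros((d_model, d_ff))
for j in range(d_ff):
    for t in range(n_coef):
        W[:, j] += coeffs[t, j] * dictionary[:, indices[t, j]]

dense_params = d_model * d_ff
dict_params = d_model * dict_size + 2 * n_coef * d_ff  # atoms + (index, coeff) pairs
```

In this toy setting the factorized form stores 96 numbers instead of 128, and the savings grow with the number of layers sharing one dictionary.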

Automatic Mixed-Precision Quantization Search of BERT

no code implementations30 Dec 2021 Changsheng Zhao, Ting Hua, Yilin Shen, Qian Lou, Hongxia Jin

Knowledge distillation, weight pruning, and quantization are known to be the main directions in model compression.

Knowledge Distillation Model Compression +2

Lite-MDETR: A Lightweight Multi-Modal Detector

no code implementations CVPR 2022 Qian Lou, Yen-Chang Hsu, Burak Uzkent, Ting Hua, Yilin Shen, Hongxia Jin

The key primitive is the Dictionary-Lookup-Transformation (DLT), proposed to replace the Linear Transformation (LT) in multi-modal detectors: each LT weight is approximately factorized into a smaller dictionary, indices, and coefficients.

object-detection Object Detection +3

Numerical Optimizations for Weighted Low-rank Estimation on Language Model

no code implementations 2 Nov 2022 Ting Hua, Yen-Chang Hsu, Felicity Wang, Qian Lou, Yilin Shen, Hongxia Jin

However, standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.

Language Modelling
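One common way to drop the equal-importance assumption is to scale rows by the square root of a per-row importance weight before the SVD and unscale afterwards, so reconstruction error on important rows is penalized more. This row-weighted sketch illustrates the idea only; the importance values are hypothetical and this is not the paper's exact numerical optimization:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(6, 5))          # a weight matrix to compress
imp = rng.uniform(0.1, 1.0, size=6)  # hypothetical per-row importance scores
rank = 2

# Standard truncated SVD: every entry treated with equal importance.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_std = U[:, :rank] * s[:rank] @ Vt[:rank]

# Importance-weighted variant: scale rows, decompose, unscale.
d = np.sqrt(imp)[:, None]
Uw, sw, Vwt = np.linalg.svd(d * W, full_matrices=False)
W_wt = (Uw[:, :rank] * sw[:rank] @ Vwt[:rank]) / d

def weighted_err(A):
    """Importance-weighted squared reconstruction error."""
    return float((imp[:, None] * (W - A) ** 2).sum())
```

Since the row scaling is invertible, the weighted variant is the exact rank-`rank` minimizer of the weighted error, so it can never do worse than plain SVD under that metric.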

ESTAS: Effective and Stable Trojan Attacks in Self-supervised Encoders with One Target Unlabelled Sample

no code implementations 20 Nov 2022 Jiaqi Xue, Qian Lou

Emerging self-supervised learning (SSL) has become a popular image representation encoding method to obviate the reliance on labeled data and learn rich representations from large-scale, ubiquitous unlabelled data.

Self-Supervised Learning

TrojText: Test-time Invisible Textual Trojan Insertion

1 code implementation 3 Mar 2023 Qian Lou, Yepeng Liu, Bo Feng

This paper proposes a solution called TrojText, which aims to determine whether invisible textual Trojan attacks can be performed more efficiently and cost-effectively without training data.

SST-2

SSL-Cleanse: Trojan Detection and Mitigation in Self-Supervised Learning

no code implementations 16 Mar 2023 Mengxin Zheng, Jiaqi Xue, ZiHao Wang, Xun Chen, Qian Lou, Lei Jiang, XiaoFeng Wang

We evaluated SSL-Cleanse on various datasets using 1200 encoders, achieving an average detection success rate of 82.2% on ImageNet-100.

Self-Supervised Learning

TrojFair: Trojan Fairness Attacks

no code implementations 16 Dec 2023 Mengxin Zheng, Jiaqi Xue, Yi Sheng, Lei Yang, Qian Lou, Lei Jiang

TrojFair is a stealthy fairness attack that is resilient to existing model-fairness auditing detectors, since the model behaves fairly on clean inputs.

Fairness

TrojFSP: Trojan Insertion in Few-shot Prompt Tuning

no code implementations 16 Dec 2023 Mengxin Zheng, Jiaqi Xue, Xun Chen, Yanshan Wang, Qian Lou, Lei Jiang

However, the security issues of prompt tuning on a few data samples, e.g., Trojan attacks, are not well studied.

Data Poisoning Language Modelling
