Search Results for author: Qian Lou

Found 20 papers, 2 papers with code

SSL-Cleanse: Trojan Detection and Mitigation in Self-Supervised Learning

no code implementations • 16 Mar 2023 • Mengxin Zheng, Jiaqi Xue, Xun Chen, Lei Jiang, Qian Lou

By using a pre-trained SSL image encoder and training a downstream classifier on top of it, impressive performance can be achieved on various tasks with very little labeled data.

Self-Supervised Learning
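The downstream recipe the snippet describes — a frozen, pre-trained SSL encoder with a small classifier trained on top — can be sketched in a few lines. This is a hedged illustration only: the "encoder" below is a fixed random projection standing in for a real pre-trained network, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained SSL encoder: a fixed random
# projection to a 64-d representation (illustrative only).
W_enc = rng.normal(size=(256, 64))
encode = lambda x: np.tanh(x @ W_enc)

# Tiny labeled set: the downstream classifier sees only these examples.
X = rng.normal(size=(100, 256))
y = (X[:, 0] > 0).astype(int)

# Linear probe: logistic regression on frozen features; the encoder
# weights W_enc are never updated.
feats = encode(X)
w = np.zeros(64)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    w -= 0.1 * feats.T @ (p - y) / len(y)

acc = ((feats @ w > 0).astype(int) == y).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: all label-efficiency comes from the (here fake) encoder, while the trainable part is a single linear layer — which is also why a Trojaned encoder compromises every downstream classifier built on it.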

TrojText: Test-time Invisible Textual Trojan Insertion

1 code implementation • 3 Mar 2023 • Yepeng Liu, Bo Feng, Qian Lou

This paper proposes a solution called TrojText, which aims to determine whether invisible textual Trojan attacks can be performed more efficiently and cost-effectively without training data.


ESTAS: Effective and Stable Trojan Attacks in Self-supervised Encoders with One Target Unlabelled Sample

no code implementations • 20 Nov 2022 • Jiaqi Xue, Qian Lou

Emerging self-supervised learning (SSL) has become a popular image representation encoding method to obviate the reliance on labeled data and learn rich representations from large-scale, ubiquitous unlabelled data.

Self-Supervised Learning

Numerical Optimizations for Weighted Low-rank Estimation on Language Model

no code implementations • 2 Nov 2022 • Ting Hua, Yen-Chang Hsu, Felicity Wang, Qian Lou, Yilin Shen, Hongxia Jin

However, standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.

Language Modelling
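The snippet's complaint — that plain SVD weighs every parameter equally — suggests the standard fix of scaling rows by an importance score before truncating. A minimal sketch, assuming per-row importance weights (e.g. Fisher-information estimates; the weights and shapes here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 6))           # weight matrix to compress
imp = rng.uniform(0.1, 2.0, size=8)   # hypothetical per-row importance
r = 3                                 # target rank

def plain_lowrank(W, r):
    # Rank-r truncated SVD: optimal for the *unweighted* error.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def weighted_lowrank(W, row_w, r):
    # Scale rows by sqrt(importance), truncate, then unscale, so
    # high-importance rows are reconstructed more faithfully.
    D = np.sqrt(row_w)[:, None]
    U, s, Vt = np.linalg.svd(D * W, full_matrices=False)
    return ((U[:, :r] * s[:r]) @ Vt[:r]) / D

def werr(A):  # importance-weighted squared reconstruction error
    return float(np.sum(imp[:, None] * (W - A) ** 2))

print(werr(plain_lowrank(W, r)), werr(weighted_lowrank(W, imp, r)))
```

By the Eckart–Young theorem applied to the scaled matrix, the weighted factorization never does worse than plain SVD under the importance-weighted error, which is the metric the paper argues actually matters.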

TrojViT: Trojan Insertion in Vision Transformers

no code implementations • 27 Aug 2022 • Mengxin Zheng, Qian Lou, Lei Jiang

The success of ViTs motivates adversaries to perform backdoor attacks on ViTs.

Backdoor Attack

Lite-MDETR: A Lightweight Multi-Modal Detector

no code implementations CVPR 2022 Qian Lou, Yen-Chang Hsu, Burak Uzkent, Ting Hua, Yilin Shen, Hongxia Jin

The key primitive is Dictionary Lookup Transformation (DLT), proposed to replace Linear Transformation (LT) in multi-modal detectors: each weight matrix in an LT is approximately factorized into a smaller dictionary plus index and coefficient tensors.

Object Detection +3
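The dictionary/index/coefficient factorization can be sketched structurally: each output column of a weight matrix is rebuilt as a small weighted sum of shared dictionary atoms. This is a hedged toy — the paper learns the dictionary and index assignments, whereas here the atoms and indices are random and only the coefficients are fitted by least squares; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, D, t = 64, 64, 16, 4   # dictionary size D, t atoms per column
W = rng.normal(size=(d_in, d_out))  # dense weight to approximate

dictionary = rng.normal(size=(d_in, D))          # shared atoms
idx = rng.integers(0, D, size=(d_out, t))        # which atoms each column uses
coef = np.zeros((d_out, t))
for j in range(d_out):
    A = dictionary[:, idx[j]]                    # (d_in, t) selected atoms
    coef[j] = np.linalg.lstsq(A, W[:, j], rcond=None)[0]

# Reconstructed weight: each column = its t atoms weighted by coefficients.
W_hat = np.stack([dictionary[:, idx[j]] @ coef[j] for j in range(d_out)],
                 axis=1)

full_params = d_in * d_out                # dense LT storage
dlt_params = d_in * D + d_out * t * 2     # dictionary + (index, coefficient)
print(full_params, dlt_params)
```

Even with these arbitrary sizes the stored-parameter count drops well below the dense matrix, which is the mechanism behind the "lightweight" claim.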

Automatic Mixed-Precision Quantization Search of BERT

no code implementations • 30 Dec 2021 • Changsheng Zhao, Ting Hua, Yilin Shen, Qian Lou, Hongxia Jin

Knowledge distillation, weight pruning, and quantization are known to be the main directions in model compression.

Knowledge Distillation Model Compression +2

DictFormer: Tiny Transformer with Shared Dictionary

no code implementations ICLR 2022 Qian Lou, Ting Hua, Yen-Chang Hsu, Yilin Shen, Hongxia Jin

DictFormer significantly reduces the redundancy in the transformer's parameters by replacing them with a compact, shared dictionary, a few unshared coefficients, and indices.

Abstractive Text Summarization Language Modelling +2
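The savings from sharing one dictionary across layers reduce to simple parameter arithmetic. A back-of-the-envelope sketch with invented sizes (N layers of width d, a shared dictionary of D atoms, t coefficient/index pairs per position — none of these numbers come from the paper):

```python
# Per-layer dense projection vs. one shared dictionary plus
# per-layer unshared coefficients and indices (toy sizes).
N, d, D, t = 12, 512, 64, 8

plain = N * d * d                 # one dense d x d projection per layer
shared = d * D + N * d * t * 2    # shared dictionary + (coef, index) per layer

print(plain, shared, plain // shared)
```

The dictionary cost is paid once while the cheap coefficient/index tables scale with depth, so the compression ratio grows with the number of layers that share it.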

HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture

no code implementations • 31 May 2021 • Qian Lou, Lei Jiang

Recently Homomorphic Encryption (HE) is used to implement Privacy-Preserving Neural Networks (PPNNs) that perform inferences directly on encrypted data without decryption.

Privacy Preserving

How to Accelerate Capsule Convolutions in Capsule Networks

no code implementations • 6 Apr 2021 • Zhenhua Chen, Xiwen Li, Qian Lou, David Crandall

How to improve the efficiency of routing procedures in CapsNets has been studied a lot.

SAFENet: A Secure, Accurate and Fast Neural Network Inference

no code implementations ICLR 2021 Qian Lou, Yilin Shen, Hongxia Jin, Lei Jiang

A cryptographic neural network inference service is an efficient way to allow two parties to execute neural network inference without revealing either party’s data or model.

Falcon: Fast Spectral Inference on Encrypted Data

no code implementations NeurIPS 2020 Qian Lou, Wen-jie Lu, Cheng Hong, Lei Jiang

We observed that HENNs have to pay significant computing overhead on rotations, and each rotation is $\sim 10\times$ more expensive than a homomorphic multiplication between ciphertext and plaintext.

CryptoGRU: Low Latency Privacy-Preserving Text Analysis With GRU

no code implementations EMNLP 2021 Bo Feng, Qian Lou, Lei Jiang, Geoffrey C. Fox

Although prior secure networks combine homomorphic encryption (HE) and garbled circuit (GC) to preserve users' privacy, naively adopting the HE and GC hybrid technique to implement RNNs suffers from long inference latency due to slow activation functions.

Privacy Preserving

Helix: Algorithm/Architecture Co-design for Accelerating Nanopore Genome Base-calling

no code implementations • 4 Aug 2020 • Qian Lou, Sarath Janga, Lei Jiang

From an architecture perspective, we propose a low-power SOT-MRAM-based ADC array to process analog-to-digital conversion operations and improve the power efficiency of prior DNN PIMs.

AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference

no code implementations NeurIPS 2020 Qian Lou, Song Bian, Lei Jiang

Prior HPPNNs over-pessimistically select huge HE parameters to maintain large noise budgets, since they use the same set of HE parameters for an entire network and ignore the error tolerance capability of a network.

Privacy Preserving

SHE: A Fast and Accurate Deep Neural Network for Encrypted Data

1 code implementation NeurIPS 2019 Qian Lou, Lei Jiang

Since the LTFHE ReLU activations, max poolings, shifts and accumulations have small multiplicative depth overhead, SHE can implement much deeper network architectures with more convolutional and activation layers.


AutoQ: Automated Kernel-Wise Neural Network Quantization

no code implementations ICLR 2020 Qian Lou, Feng Guo, Lantao Liu, Minje Kim, Lei Jiang

Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy.

AutoML Quantization
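The snippet's premise — kernels in one layer have different variances, so a per-kernel bitwidth pays off — can be illustrated with a toy policy. Hedged sketch only: AutoQ learns its bitwidth assignment with a reinforcement-learning agent, whereas the rule below (more bits for higher-variance kernels, 4 vs. 8) is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
# 8 conv kernels (3x3x3) whose per-kernel scales, hence variances, differ.
scales = rng.uniform(0.2, 2.0, size=(8, 1, 1, 1))
kernels = rng.normal(size=(8, 3, 3, 3)) * scales

def quantize(k, bits):
    # Symmetric uniform quantization of one kernel to `bits` bits.
    step = np.abs(k).max() / (2 ** (bits - 1) - 1)
    return np.round(k / step) * step

# Toy stand-in for AutoQ's learned agent: high-variance kernels get 8 bits,
# the rest 4 (thresholds and bitwidths chosen arbitrarily here).
bits = np.where(kernels.std(axis=(1, 2, 3)) > 1.0, 8, 4)
q = np.stack([quantize(k, b) for k, b in zip(kernels, bits)])

mse = float(np.mean((q - kernels) ** 2))
print(bits, mse)
```

A single layer-wide bitwidth would either waste bits on the low-variance kernels or clip the high-variance ones; quantizing each kernel independently sidesteps that trade-off, which is the observation the paper automates.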
