Search Results for author: Huanrui Yang

Found 16 papers, 9 papers with code

CSQ: Growing Mixed-Precision Quantization Scheme with Bi-level Continuous Sparsification

no code implementations • 6 Dec 2022 Lirui Xiao, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, Shanghang Zhang

CSQ stabilizes bit-level mixed-precision training with a bi-level gradual continuous sparsification, applied both to the bit values of the quantized weights and to the bit selection that determines each layer's quantization precision.
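The core mechanism behind continuous sparsification can be illustrated with a sigmoid gate whose temperature is annealed over training. The sketch below is a generic toy of that relaxation, not the CSQ implementation; the `soft_gate` name and the logit values are illustrative assumptions.

```python
import numpy as np

def soft_gate(s, beta):
    """Continuous relaxation of a binary gate; hardens as beta grows."""
    return 1.0 / (1.0 + np.exp(-beta * s))

s = np.array([-2.0, -0.1, 0.1, 2.0])   # learnable gate logits (toy values)
for beta in [1.0, 10.0, 100.0]:
    print(beta, np.round(soft_gate(s, beta), 3))
# As beta grows, the gates approach hard 0/1 decisions, e.g. keeping or
# dropping an individual weight bit or a layer's top bit.
```

Gradients flow through the soft gates early in training, while the annealed temperature gradually commits each gate to an on/off decision.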


NoisyQuant: Noisy Bias-Enhanced Post-Training Activation Quantization for Vision Transformers

no code implementations • 29 Nov 2022 Yijiang Liu, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, Shanghang Zhang

Building on the theoretical insight, NoisyQuant achieves the first success in actively altering the heavy-tailed activation distribution with an additive noisy bias to fit a given quantizer.
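The underlying idea of reshaping an input distribution with a fixed additive bias can be sketched with classic subtractive dithering: sample a noise vector once, add it before the quantizer, and subtract it after. This is a generic toy under assumed values (the offset-0.4 activations and `uniform_quant` helper are illustrative), not the paper's learned Noisy Bias.

```python
import numpy as np

def uniform_quant(x, scale):
    """Symmetric uniform quantizer: round to the nearest step."""
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
scale = 0.1
# Toy activations that cluster at an awkward offset inside each
# quantization bin, so plain rounding error is consistently large.
acts = scale * (rng.integers(-100, 100, size=10_000) + 0.4)

# Fixed noisy bias spanning one quantization step, sampled once,
# added before quantization and subtracted afterwards.
noise = rng.uniform(-scale / 2, scale / 2, size=acts.shape)

plain = uniform_quant(acts, scale)
noisy = uniform_quant(acts + noise, scale) - noise

err_plain = np.mean((plain - acts) ** 2)   # ~0.16 * scale^2
err_noisy = np.mean((noisy - acts) ** 2)   # ~scale^2 / 12, about half as large
print(err_plain, err_noisy)
```

With the bias added and then removed, the quantization error becomes uniform over one step regardless of where the inputs cluster, which is why the same fixed quantizer serves the altered distribution better.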


HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance

1 code implementation • 23 Nov 2021 Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen

We therefore propose HERO, a Hessian-enhanced robust optimization method, to minimize the Hessian eigenvalues through a gradient-based training process, simultaneously improving the generalization and quantization performance.
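One standard ingredient of Hessian-aware training is estimating the largest Hessian eigenvalue without forming the Hessian, via power iteration on Hessian-vector products. The sketch below shows that estimator on a toy quadratic loss; it is not HERO's training procedure, and the `top_hessian_eigenvalue` helper and finite-difference scheme are assumptions for illustration.

```python
import numpy as np

def top_hessian_eigenvalue(grad_fn, w, n_iter=50, eps=1e-4, seed=0):
    """Power iteration using finite-difference Hessian-vector products."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(w.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        # Hv ~ (grad(w + eps*v) - grad(w - eps*v)) / (2*eps)
        hv = (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)
        lam = float(v @ hv)          # Rayleigh quotient, since ||v|| = 1
        v = hv / np.linalg.norm(hv)
    return lam

# Toy quadratic loss L(w) = 0.5 * w^T A w, whose Hessian is A.
A = np.diag([5.0, 2.0, 1.0])
grad = lambda w: A @ w
lam = top_hessian_eigenvalue(grad, np.zeros(3))
print(lam)  # converges to 5.0, the largest eigenvalue of A
```

Pushing such an eigenvalue estimate down during training flattens the loss landscape, which is the connection to both generalization and robustness to quantization perturbations.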


NViT: Vision Transformer Compression and Parameter Redistribution

no code implementations • 10 Oct 2021 Huanrui Yang, Hongxu Yin, Pavlo Molchanov, Hai Li, Jan Kautz

On ImageNet-1K, we prune the DeiT-Base (Touvron et al., 2021) model to a 2.6x FLOPs reduction, 5.1x parameter reduction, and 1.9x run-time speedup with only a 0.07% loss in accuracy.

Soteria: Provable Defense Against Privacy Leakage in Federated Learning From Representation Perspective

1 code implementation • CVPR 2021 Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

The key idea of our defense is learning to perturb data representation such that the quality of the reconstructed data is severely degraded, while FL performance is maintained.

Tasks: Federated Learning, Inference Attack

Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap?

no code implementations • 17 Mar 2021 Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen

During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.

BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization

1 code implementation • ICLR 2021 Huanrui Yang, Lin Duan, Yiran Chen, Hai Li

Mixed-precision quantization can potentially achieve the optimal tradeoff between performance and compression rate of deep neural networks, and has thus been widely investigated.

Tasks: Neural Architecture Search, Quantization
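Bit-level sparsity can be made concrete by decomposing quantized integer weights into bit planes and measuring how many bits in each plane are zero. The sketch below is a minimal illustration of that decomposition under assumed toy data (the `bit_planes` helper and the geometric weight distribution are illustrative), not BSQ's training algorithm.

```python
import numpy as np

def bit_planes(q, n_bits=4):
    """Decompose non-negative integer weights into binary bit planes."""
    return np.stack([(q >> b) & 1 for b in range(n_bits)])

rng = np.random.default_rng(0)
# Toy quantized weight magnitudes in [0, 15] (4-bit), mostly small values,
# as is typical for trained-weight histograms.
q = np.minimum(rng.geometric(0.5, size=1000) - 1, 15)

planes = bit_planes(q)                 # shape (4, 1000), one row per bit
sparsity = 1.0 - planes.mean(axis=1)   # fraction of zero bits per plane
print(sparsity)  # higher bit planes are much sparser
```

A bit plane that is almost entirely zero is a natural candidate for removal, which is exactly what lowering a layer's precision by one bit amounts to.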

Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective

3 code implementations • 8 Dec 2020 Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

In this work, we show that data representation leakage from gradients is the essential cause of privacy leakage in FL.

Tasks: Federated Learning

TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations

no code implementations • 23 May 2020 Ang Li, Yixiao Duan, Huanrui Yang, Yiran Chen, Jianlei Yang

The goal of this framework is to learn a feature extractor that hides private information from the intermediate representations, while maximally retaining the original information embedded in the raw data so that the data collector can accomplish unknown learning tasks.

Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification

1 code implementation • 20 Apr 2020 Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen

In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
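The post-hoc baseline that SVD training improves upon is easy to sketch: factor a trained weight matrix, keep only the dominant singular values, and reconstruct a low-rank approximation. The snippet below shows that baseline (truncated SVD with a relative threshold); the threshold value and the synthetic near-low-rank matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# A weight matrix that is approximately rank-8, plus small noise.
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 32))
W += 0.01 * rng.standard_normal((64, 32))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = int(np.sum(s > 0.05 * s[0]))           # keep singular values above a threshold
W_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # low-rank reconstruction

rel_err = np.linalg.norm(W - W_low) / np.linalg.norm(W)
print(rank, rel_err)
```

Storing the two thin factors instead of `W` costs `rank * (64 + 32)` parameters rather than `64 * 32`; SVD training's contribution is to drive the singular value spectrum toward this sparse shape during training instead of truncating afterwards.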

Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment

1 code implementation • 18 Sep 2019 Jingyang Zhang, Huanrui Yang, Fan Chen, Yitu Wang, Hai Li

However, the power hungry analog-to-digital converters (ADCs) prevent the practical deployment of ReRAM-based DNN accelerators on end devices with limited chip area and power budget.

DeepObfuscator: Obfuscating Intermediate Representations with Privacy-Preserving Adversarial Learning on Smartphones

no code implementations • 9 Sep 2019 Ang Li, Jiayi Guo, Huanrui Yang, Flora D. Salim, Yiran Chen

Our experiments on the CelebA and LFW datasets show that the quality of images reconstructed from the obfuscated features drops dramatically, from 0.9458 to 0.3175 in terms of multi-scale structural similarity.

Tasks: General Classification, Image Classification, +3

DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures

1 code implementation • ICLR 2020 Huanrui Yang, Wei Wen, Hai Li

Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant.
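The Hoyer measure named in the abstract, the ratio between the L1 and L2 norms, is small for sparse vectors, large for dense ones, and invariant to rescaling. A minimal sketch of the measure itself (the `hoyer` helper name is illustrative; the regularizers in the paper build on squared and group variants of it):

```python
import numpy as np

def hoyer(w):
    """Hoyer measure: the ratio between the L1 and L2 norms."""
    w = np.ravel(w)
    return np.sum(np.abs(w)) / np.linalg.norm(w)

dense = np.ones(100)     # fully dense vector
sparse = np.zeros(100)
sparse[:5] = 1.0         # 5% non-zero

print(hoyer(dense))      # sqrt(100) = 10.0, the maximum for this size
print(hoyer(sparse))     # sqrt(5) ~= 2.236, smaller for sparser vectors

# Scale invariance: rescaling the weights leaves the measure unchanged.
print(np.isclose(hoyer(3.7 * sparse), hoyer(sparse)))  # True
```

Because the ratio is differentiable almost everywhere, it can be added directly to a training loss as a sparsity penalty, unlike the L0 count it approximates.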

DPatch: An Adversarial Patch Attack on Object Detectors

1 code implementation • 5 Jun 2018 Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Li, Yiran Chen

Successful realization of DPatch also illustrates the intrinsic vulnerability of the modern detector architectures to such patch-based adversarial attacks.

MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

no code implementations • 27 May 2017 Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen

Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones for resisting attacks.
