no code implementations • 18 Jan 2023 • Jingchi Zhang, Huanrui Yang, Hai Li
We propose a new perspective on exploring the intrinsic diversity within a model architecture to build efficient DNN ensembles.
no code implementations • 6 Dec 2022 • Lirui Xiao, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, Shanghang Zhang
CSQ stabilizes the bit-level mixed-precision training process with a bi-level gradual continuous sparsification on both the bit values of the quantized weights and the bit selection in determining the quantization precision of each layer.
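As a minimal sketch of the continuous-sparsification idea (not the paper's full bi-level scheme), a sigmoid gate with an annealed temperature can relax the discrete bit selection into a differentiable one; the schedule and shapes below are illustrative assumptions:

```python
import torch

# Soft gate that anneals toward a hard 0/1 selection as beta grows,
# stabilizing the otherwise discrete bit/precision selection during training.
def soft_gate(s: torch.Tensor, beta: float) -> torch.Tensor:
    return torch.sigmoid(beta * s)  # approaches a step function as beta -> inf

s = torch.zeros(8, requires_grad=True)  # learnable logits, e.g. one per weight bit
for epoch in range(100):
    beta = 1.05 ** epoch                # exponential annealing schedule (assumed)
    g = soft_gate(s, beta)              # would scale bit values / select precision
    # ... use g in the forward pass; gradients flow through the soft gate ...

hard_g = (s.detach() > 0).float()       # hard selection taken at convergence
```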
no code implementations • 29 Nov 2022 • Yijiang Liu, Huanrui Yang, Zhen Dong, Kurt Keutzer, Li Du, Shanghang Zhang
Building on the theoretical insight, NoisyQuant achieves the first success in actively altering the heavy-tailed activation distribution with an additive noisy bias to fit a given quantizer.
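A minimal PyTorch sketch of the core mechanism, under assumed uniform-quantizer settings (the paper derives the noisy-bias distribution rather than fixing it by hand):

```python
import torch

def noisy_quant(x, scale, zero_point, noise):
    # Add a fixed, input-independent noisy bias before uniform quantization
    # to flatten the heavy-tailed activation distribution...
    xq = torch.clamp(torch.round((x + noise) / scale) + zero_point, 0, 255)
    # ...then subtract the same bias after dequantization, keeping the signal unbiased.
    return (xq - zero_point) * scale - noise

x = torch.randn(1, 197, 768) * 3.0      # toy heavy-tailed ViT activations
noise = (torch.rand(768) - 0.5) * 0.1   # fixed uniform noisy bias (assumed range)
scale, zp = x.abs().max() / 127, 128.0
x_hat = noisy_quant(x, scale, zp, noise)
```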
1 code implementation • 23 Nov 2021 • Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen
We therefore propose HERO, a Hessian-enhanced robust optimization method, to minimize the Hessian eigenvalues through a gradient-based training process, simultaneously improving the generalization and quantization performance.
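One way to make such a Hessian term trainable is a Hutchinson-style trace estimate via double backprop; this is a generic curvature-penalty sketch, not HERO's exact objective:

```python
import torch

def hessian_trace_penalty(loss, params, n_samples=1):
    # Hutchinson estimator E[v^T H v] of the Hessian trace (a proxy for the
    # magnitude of its eigenvalues), differentiable via create_graph=True.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    est = 0.0
    for _ in range(n_samples):
        vs = [torch.randint_like(g, 2) * 2.0 - 1.0 for g in grads]  # Rademacher v
        gv = sum((g * v).sum() for g, v in zip(grads, vs))
        hvs = torch.autograd.grad(gv, params, create_graph=True)    # H v
        est = est + sum((h * v).sum() for h, v in zip(hvs, vs))     # v^T H v
    return est / n_samples

# total_loss = task_loss + lam * hessian_trace_penalty(task_loss, list(model.parameters()))
```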
no code implementations • 10 Oct 2021 • Huanrui Yang, Hongxu Yin, Pavlo Molchanov, Hai Li, Jan Kautz
On ImageNet-1K, we prune the DEIT-Base (Touvron et al., 2021) model to a 2.6x FLOPs reduction, 5.1x parameter reduction, and 1.9x run-time speedup with only 0.07% loss in accuracy.
1 code implementation • CVPR 2021 • Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
The key idea of our defense is learning to perturb data representation such that the quality of the reconstructed data is severely degraded, while FL performance is maintained.
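As an illustrative sketch of the idea, the client can prune the shared gradient of the defended representation layer before upload; the magnitude-based selection below is a simplification of the paper's criterion:

```python
import torch

def defend_gradient(fc_grad: torch.Tensor, prune_ratio: float = 0.8):
    # Zero most entries of the representation layer's gradient before sharing,
    # degrading server-side input reconstruction while keeping FL usable.
    flat = fc_grad.abs().flatten()
    k = int(prune_ratio * flat.numel())
    thresh = flat.kthvalue(k).values
    return fc_grad * (fc_grad.abs() > thresh)

g = torch.randn(512, 784)        # toy gradient of a fully connected layer
g_shared = defend_gradient(g)    # what the client actually uploads
```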
no code implementations • 17 Mar 2021 • Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen
During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.
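A heavily simplified sketch of a feature-space targeted attack in this spirit; `f_white` and `proxy_feat` are assumed handles for the whitebox feature extractor and a precomputed proxy-class feature, and the paper's actual objective matches feature distributions rather than a single target vector:

```python
import torch

def feature_space_attack(x, f_white, proxy_feat, steps=50, eps=8/255, lr=1/255):
    # Push the input's whitebox features toward the proxy class's features,
    # then transfer the perturbed input to the blackbox target model.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = (f_white(x + delta) - proxy_feat).pow(2).sum()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # signed gradient descent step
            delta.clamp_(-eps, eps)           # stay inside the L-inf ball
            delta.grad.zero_()
    return (x + delta).detach()
```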
1 code implementation • ICLR 2021 • Huanrui Yang, Lin Duan, Yiran Chen, Hai Li
Mixed-precision quantization can potentially achieve the optimal tradeoff between performance and compression rate of deep neural networks, and thus has been widely investigated.
4 code implementations • 8 Dec 2020 • Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen
In this work, we make the key observation that data representation leakage from gradients is the essential cause of privacy leakage in FL.
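The observation is easy to verify for a fully connected layer z = Wx + b: since dL/dW = (dL/dz) x^T and dL/db = dL/dz, dividing a row of the weight gradient by the matching bias gradient recovers the input exactly, as this self-contained check shows:

```python
import torch

x = torch.randn(784)
W = torch.randn(10, 784, requires_grad=True)
b = torch.zeros(10, requires_grad=True)
loss = (W @ x + b).pow(2).sum()
loss.backward()
i = b.grad.abs().argmax()                    # pick a row with nonzero bias gradient
x_rec = W.grad[i] / b.grad[i]                # dL/dW row / dL/db entry = x
print(torch.allclose(x_rec, x, atol=1e-5))   # True: the input leaks from gradients
```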
3 code implementations • NeurIPS 2020 • Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, Hai Li
The process is hard, often requires models with large capacity, and suffers a significant loss in clean-data accuracy.
no code implementations • 23 May 2020 • Ang Li, Yixiao Duan, Huanrui Yang, Yiran Chen, Jianlei Yang
The goal of this framework is to learn a feature extractor that hides the private information from the intermediate representations, while maximally retaining the original information embedded in the raw data so that the data collector can accomplish unknown learning tasks.
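A minimal sketch of such an adversarial feature-extractor setup (toy shapes; the paper additionally uses an information-retention term, omitted here):

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 128), nn.ReLU())  # feature extractor (toy)
adv = nn.Linear(128, 2)                              # private-attribute classifier
opt_e = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, private_label):
    # Adversary learns to infer the private attribute from the features.
    a_loss = ce(adv(enc(x).detach()), private_label)
    opt_a.zero_grad(); a_loss.backward(); opt_a.step()
    # Encoder hides the attribute by maximizing the adversary's loss; an
    # information-retention term would preserve utility for unknown tasks.
    e_loss = -ce(adv(enc(x)), private_label)
    opt_e.zero_grad(); e_loss.backward(); opt_e.step()
```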
1 code implementation • 20 Apr 2020 • Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen
In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
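The core idea can be sketched as a layer that trains the factors of W = U diag(s) V^T directly, with an orthogonality regularizer keeping s meaningful as singular values; the rank and initialization below are illustrative:

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    # Trains U, s, V directly, so no SVD is needed at every step; sparsity
    # pressure on s (e.g., an L1 penalty) drives the layer toward low rank.
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5)
        self.s = nn.Parameter(torch.ones(rank))
        self.V = nn.Parameter(torch.randn(d_in, rank) / rank ** 0.5)

    def forward(self, x):
        return (x @ self.V) * self.s @ self.U.t()   # x V diag(s) U^T

    def orth_reg(self):
        # Keep U and V near-orthonormal so s tracks the true singular values.
        I = torch.eye(self.s.numel())
        return ((self.U.t() @ self.U - I).pow(2).sum()
                + (self.V.t() @ self.V - I).pow(2).sum())
```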
1 code implementation • 18 Sep 2019 • Jingyang Zhang, Huanrui Yang, Fan Chen, Yitu Wang, Hai Li
However, the power-hungry analog-to-digital converters (ADCs) prevent the practical deployment of ReRAM-based DNN accelerators on end devices with limited chip area and power budget.
no code implementations • 9 Sep 2019 • Ang Li, Jiayi Guo, Huanrui Yang, Flora D. Salim, Yiran Chen
Our experiments on the CelebA and LFW datasets show that the quality of the images reconstructed from the obfuscated features drops dramatically, from 0.9458 to 0.3175 in terms of multi-scale structural similarity.
1 code implementation • ICLR 2020 • Huanrui Yang, Wei Wen, Hai Li
Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant.
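The Hoyer-Square variant is essentially a one-liner; this sketch shows the regularizer itself (the training recipe around it follows the paper):

```python
import torch

def hoyer_square(w: torch.Tensor, eps: float = 1e-8):
    # DeepHoyer's Hoyer-Square regularizer: (||w||_1)^2 / (||w||_2)^2.
    # Differentiable almost everywhere and scale-invariant: HS(c*w) = HS(w).
    return w.abs().sum().pow(2) / (w.pow(2).sum() + eps)

w = torch.randn(256, 256, requires_grad=True)
reg = hoyer_square(w)    # add lam * reg to the task loss during training
reg.backward()
```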
1 code implementation • 5 Jun 2018 • Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Hai Li, Yiran Chen
The successful realization of DPatch also illustrates the intrinsic vulnerability of modern detector architectures to such patch-based adversarial attacks.
no code implementations • 27 May 2017 • Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu, Qing Wu, Hai Li, Yiran Chen
Our experiments show that different adversarial strengths, i.e., perturbation levels of adversarial examples, have different working zones to resist the attack.
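One simple way to probe such working zones is to sweep the perturbation level of a one-step attack; FGSM here is an illustrative probe, not the paper's full protocol:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step FGSM perturbation at strength eps; sweeping eps maps out where
    # a defense trained at a given adversarial strength still resists attack.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# for eps in (1/255, 2/255, 4/255, 8/255):
#     x_adv = fgsm(model, x, y, eps)
```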