no code implementations • 11 Dec 2022 • Zheng Wang, Juncheng B Li, Shuhui Qu, Florian Metze, Emma Strubell
In this work, we incorporate exponentially decaying quantization-error-aware noise together with a learnable scale on the task-loss gradient to approximate the effect of a quantization operator.
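As a rough illustration of the idea, the sketch below fake-quantizes a linear layer's weights with noise that tracks the quantization error and decays exponentially over training steps, plus a learnable scale on the output path. The module name, decay schedule, and hyperparameters are hypothetical, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyQuantLinear(nn.Module):
    """Hypothetical linear layer that replaces hard rounding with
    exponentially decaying, quantization-error-aware noise."""

    def __init__(self, in_features, out_features, n_bits=8, decay=1e-4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.grad_scale = nn.Parameter(torch.ones(()))  # learnable scale on the task-loss path
        self.n_bits = n_bits
        self.decay = decay
        self.register_buffer("step", torch.zeros((), dtype=torch.long))

    def forward(self, x):
        w = self.weight
        # Uniform symmetric quantization grid for the current weight range.
        scale = w.abs().max() / (2 ** (self.n_bits - 1) - 1)
        quant_error = torch.round(w / scale) * scale - w
        # Noise tracks the quantization error and decays exponentially with the training step.
        decay_factor = torch.exp(-self.decay * self.step.float())
        w_noisy = w + (quant_error * decay_factor * torch.rand_like(w)).detach()
        if self.training:
            self.step += 1
        # The learnable scale lets the effective task-loss gradient magnitude adapt.
        return F.linear(x, w_noisy) * self.grad_scale
```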
no code implementations • 13 Oct 2022 • Zheng Wang, Juncheng B Li, Shuhui Qu, Florian Metze, Emma Strubell
Quantization is an effective technique to reduce memory footprint, inference latency, and power consumption of deep learning models.
no code implementations • 1 Jan 2021 • Juncheng B Li, Shuhui Qu, Xinjian Li, Emma Strubell, Florian Metze
Quantization of neural network parameters and activations has emerged as a successful approach to reducing the model size and inference time on hardware that supports native low-precision arithmetic.
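For context, a minimal sketch of uniform symmetric quantization of a weight tensor; the function names and 8-bit setting are illustrative assumptions, not this paper's specific scheme.

```python
import torch

def quantize_symmetric(t: torch.Tensor, n_bits: int = 8):
    """Map a float tensor to low-precision integers plus a scale factor."""
    qmax = 2 ** (n_bits - 1) - 1              # e.g. 127 for int8
    scale = t.abs().max() / qmax
    q = torch.clamp(torch.round(t / scale), -qmax - 1, qmax).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float tensor from its integer representation."""
    return q.float() * scale

w = torch.randn(256, 256)
q, s = quantize_symmetric(w)
print((w - dequantize(q, s)).abs().max())     # reconstruction error is at most scale / 2
```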
1 code implementation • 15 Nov 2020 • Juncheng B Li, Kaixin Ma, Shuhui Qu, Po-Yao Huang, Florian Metze
This work studies several key questions in multimodal learning through the lens of adversarial noise: 1) How does the choice of early, middle, or late fusion affect the trade-off between robustness and accuracy? 2) How do different frequency- and time-domain features contribute to robustness?
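To make the fusion comparison concrete, here is a minimal sketch of early- and late-fusion classifiers over two modalities; the feature dimensions and architectures are hypothetical, not the paper's actual models.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate modality features before a shared classifier (hypothetical sizes)."""
    def __init__(self, audio_dim=128, video_dim=512, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(audio_dim + video_dim, 256),
                                 nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, audio, video):
        return self.net(torch.cat([audio, video], dim=-1))

class LateFusion(nn.Module):
    """Encode each modality separately and average the per-modality logits."""
    def __init__(self, audio_dim=128, video_dim=512, n_classes=10):
        super().__init__()
        self.audio_head = nn.Sequential(nn.Linear(audio_dim, 256), nn.ReLU(),
                                        nn.Linear(256, n_classes))
        self.video_head = nn.Sequential(nn.Linear(video_dim, 256), nn.ReLU(),
                                        nn.Linear(256, n_classes))

    def forward(self, audio, video):
        return 0.5 * (self.audio_head(audio) + self.video_head(video))
```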
no code implementations • 25 Sep 2019 • Ze Cheng, Juncheng B Li, Chenxu Wang, Jixuan Gu, Hao Xu, Xinjian Li, Florian Metze
In the problem of unsupervised learning of disentangled representations, one promising approach is to penalize the total correlation of the sampled latent variables.
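A hedged sketch of a minibatch total-correlation penalty of the kind such methods add to the VAE objective; the estimator details and function names below are assumptions for illustration, not the paper's exact derivation.

```python
import math
import torch

def gaussian_log_density(z, mu, logvar):
    """Element-wise log N(z; mu, exp(logvar))."""
    c = math.log(2 * math.pi)
    return -0.5 * (c + logvar + (z - mu) ** 2 / logvar.exp())

def total_correlation(z, mu, logvar):
    """Minibatch estimate of TC(z) = KL(q(z) || prod_k q(z_k)).

    z, mu, logvar: [batch, latent_dim] posterior samples and parameters.
    Simplified within-batch approximation; dataset-size weighting is omitted.
    """
    batch_size = z.shape[0]
    # log q(z_i | x_j) for every (i, j) pair and every latent dimension: [B, B, D]
    log_qz_pair = gaussian_log_density(z.unsqueeze(1), mu.unsqueeze(0), logvar.unsqueeze(0))
    # log q(z_i): sum over dimensions (joint density), then average over the batch mixture
    log_qz = torch.logsumexp(log_qz_pair.sum(dim=2), dim=1) - math.log(batch_size)
    # sum_k log q(z_ik): average over the batch mixture per dimension, then sum over dimensions
    log_qz_marg = (torch.logsumexp(log_qz_pair, dim=1) - math.log(batch_size)).sum(dim=1)
    return (log_qz - log_qz_marg).mean()

# A regularized objective would add `beta * total_correlation(z, mu, logvar)` to the VAE loss.
```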