Search Results for author: Juncheng B Li

Found 5 papers, 1 paper with code

Error-aware Quantization through Noise Tempering

no code implementations11 Dec 2022 Zheng Wang, Juncheng B Li, Shuhui Qu, Florian Metze, Emma Strubell

In this work, we incorporate exponentially decaying quantization-error-aware noise together with a learnable scale on the task-loss gradient to approximate the effect of a quantization operator.

Model Compression, Quantization
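
Based only on the abstract, here is a minimal sketch of the idea: fake-quantize weights with a straight-through estimator and inject noise shaped by the per-weight quantization error, with an exponentially decaying magnitude. The function names, decay schedule, and noise shape are assumptions, not the authors' implementation, and the learnable gradient scale is omitted.

```python
# Hedged sketch (not the authors' code): quantization-aware training where
# the quantization error itself is injected as exponentially decaying noise.
import torch

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Uniform symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Straight-through: forward pass uses w_q, backward passes gradients to w.
    return w + (w_q - w).detach()

def noise_tempered_weight(w: torch.Tensor, step: int, total_steps: int,
                          num_bits: int = 8) -> torch.Tensor:
    """Blend in quantization-error-shaped noise whose magnitude decays
    exponentially over training (the decay rate here is an assumption)."""
    w_q = fake_quantize(w, num_bits)
    q_err = (w_q - w).detach()                 # per-weight quantization error
    decay = torch.exp(torch.tensor(-5.0 * step / total_steps))
    noise = torch.randn_like(w) * q_err.abs() * decay
    return w_q + noise
```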

SQuAT: Sharpness- and Quantization-Aware Training for BERT

no code implementations13 Oct 2022 Zheng Wang, Juncheng B Li, Shuhui Qu, Florian Metze, Emma Strubell

Quantization is an effective technique to reduce memory footprint, inference latency, and power consumption of deep learning models.

Quantization

End-to-end Quantized Training via Log-Barrier Extensions

no code implementations1 Jan 2021 Juncheng B Li, Shuhui Qu, Xinjian Li, Emma Strubell, Florian Metze

Quantization of neural network parameters and activations has emerged as a successful approach to reducing the model size and inference time on hardware that supports native low-precision arithmetic.

Quantization
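
The title's "log-barrier extensions" plausibly refers to the log-barrier extension used in the constrained-optimization literature to make hard constraints differentiable everywhere, including at infeasible points. The sketch below shows that generic function applied to a range constraint; the temperature value and its exact role in this paper's training loop are assumptions.

```python
# Hedged sketch: a log-barrier extension enforcing the constraint z <= 0
# smoothly, so gradients exist even when the constraint is violated.
import torch

def log_barrier_extension(z: torch.Tensor, t: float = 5.0) -> torch.Tensor:
    """Behaves like the classic -(1/t)*log(-z) barrier when strictly
    feasible, and continues linearly once the constraint is violated."""
    feasible = z <= -1.0 / t**2
    barrier = -torch.log(-z.clamp(max=-1e-12)) / t     # interior branch
    linear = t * z - torch.log(torch.tensor(1.0 / t**2)) / t + 1.0 / t
    return torch.where(feasible, barrier, linear)

# Example use: softly keep weights inside a representable range [-1, 1]
# before quantization (an illustrative constraint, not the paper's).
w = torch.randn(10, requires_grad=True)
penalty = log_barrier_extension(w.abs() - 1.0).mean()
```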

Audio-Visual Event Recognition through the lens of Adversary

1 code implementation15 Nov 2020 Juncheng B Li, Kaixin Ma, Shuhui Qu, Po-Yao Huang, Florian Metze

This work studies several key questions in multimodal learning through the lens of adversarial noise: 1) how the choice of early/middle/late fusion trades off robustness against accuracy, and 2) how different frequency/time-domain features contribute to robustness.
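
A minimal, assumed-setup sketch of the kind of probe such a study uses: an FGSM-style perturbation applied to a single modality to see how the fusion model's prediction degrades. The `model(audio, video)` interface and the `fgsm_on_audio` helper are hypothetical, not the paper's code.

```python
# Hedged sketch: perturb only the audio stream of an audio-visual model
# with the fast gradient sign method, leaving the video stream clean.
import torch
import torch.nn.functional as F

def fgsm_on_audio(model, audio, video, label, eps: float = 0.01):
    """Return adversarially perturbed audio; model(audio, video) is an
    assumed interface returning class logits."""
    audio = audio.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(audio, video), label)
    loss.backward()
    return (audio + eps * audio.grad.sign()).detach()
```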

RTC-VAE: Harnessing the Peculiarity of Total Correlation in Learning Disentangled Representations

no code implementations25 Sep 2019 Ze Cheng, Juncheng B Li, Chenxu Wang, Jixuan Gu, Hao Xu, Xinjian Li, Florian Metze

In the problem of unsupervised learning of disentangled representations, one promising method is to penalize the total correlation of the sampled latent variables.

Disentanglement
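
For reference, the total correlation being penalized is the standard quantity: the KL divergence between the aggregate posterior over the latent code and the product of its marginals, which is zero exactly when the latent dimensions are independent.

```latex
% Total correlation of the latent code z under the aggregate posterior q(z).
\mathrm{TC}(z) \;=\; D_{\mathrm{KL}}\!\Big( q(z) \,\Big\|\, \textstyle\prod_{j} q(z_j) \Big)
```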
