2 code implementations • 21 Oct 2019 • Loren Lugosch, Brett Meyer, Derek Nowrouzezahrai, Mirco Ravanelli
End-to-end models are an attractive new approach to spoken language understanding (SLU), in which the meaning of an utterance is inferred directly from the raw audio, without the standard pipeline of a separately trained speech recognizer and natural language understanding module.
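As a rough illustration of the end-to-end idea (not the paper's actual architecture), the sketch below maps a raw waveform straight to intent probabilities with a single differentiable path and no intermediate transcript. All dimensions and the random weights are hypothetical stand-ins for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
N_SAMPLES = 16000   # 1 s of 16 kHz audio
N_FRAMES = 100
FRAME_DIM = 160
HIDDEN = 32
N_INTENTS = 6       # e.g. a handful of smart-light commands

# Random weights stand in for trained parameters.
W_enc = rng.standard_normal((FRAME_DIM, HIDDEN)) * 0.01
W_out = rng.standard_normal((HIDDEN, N_INTENTS)) * 0.01

def end_to_end_slu(audio: np.ndarray) -> np.ndarray:
    """Map raw audio directly to intent probabilities.

    No transcript is produced anywhere: the waveform is framed,
    encoded, pooled, and classified in one pass.
    """
    frames = audio[: N_FRAMES * FRAME_DIM].reshape(N_FRAMES, FRAME_DIM)
    hidden = np.tanh(frames @ W_enc)      # frame-level encoder
    pooled = hidden.mean(axis=0)          # utterance-level embedding
    logits = pooled @ W_out               # intent classifier head
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                # softmax over intents

audio = rng.standard_normal(N_SAMPLES)    # stand-in for a real waveform
probs = end_to_end_slu(audio)
```

The contrast with the pipeline approach is that there is no ASR stage whose output text feeds a separate NLU module; errors and gradients flow through one model.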
Ranked #7 on Spoken Language Understanding on Snips-SmartLights
no code implementations • 24 Feb 2022 • Amir Ardakani, Arash Ardakani, Brett Meyer, James J. Clark, Warren J. Gross
Quantization of deep neural networks is a promising approach that reduces the inference cost, making it feasible to run deep networks on resource-restricted devices.
no code implementations • 22 Jan 2024 • Lulan Shen, Ali Edalati, Brett Meyer, Warren Gross, James J. Clark
It is important to investigate the robustness of compressed networks under two types of data distribution shift: domain shifts and adversarial perturbations.
no code implementations • 24 Jan 2024 • Lulan Shen, Ali Edalati, Brett Meyer, Warren Gross, James J. Clark
This paper describes a simple yet effective technique for refining a pretrained classifier network.
no code implementations • 2 Feb 2024 • Mohammadreza Tayaranian, Seyyed Hasan Mozafari, James J. Clark, Brett Meyer, Warren Gross
In this work, we improve on the inference latency of state-of-the-art methods by removing the floating-point operations associated with the GELU activation in the Swin Transformer.
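The paper's specific floating-point-free substitution is not reproduced here; as a related illustration of why GELU invites cheaper replacements, the sketch below compares the exact erf-based GELU with the widely used tanh approximation and measures the worst-case gap over a typical activation range.

```python
import math
import numpy as np

def gelu_exact(x: np.ndarray) -> np.ndarray:
    """Reference GELU: 0.5 * x * (1 + erf(x / sqrt(2)))."""
    return np.array([0.5 * v * (1.0 + math.erf(v / math.sqrt(2.0))) for v in x])

def gelu_tanh(x: np.ndarray) -> np.ndarray:
    """Cheaper tanh-based approximation, common in transformer codebases."""
    return 0.5 * x * (1.0 + np.tanh(math.sqrt(2.0 / math.pi)
                                    * (x + 0.044715 * x ** 3)))

x = np.linspace(-4.0, 4.0, 801)
max_err = float(np.abs(gelu_exact(x) - gelu_tanh(x)).max())
```

Because the approximation error is tiny relative to typical activation magnitudes, swapping the exact nonlinearity for a cheaper form (or, as in this line of work, for integer-only arithmetic) tends to preserve accuracy while reducing latency.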