Search Results for author: Martin Radfar

Found 17 papers, 1 paper with code

End-to-end spoken language understanding using joint CTC loss and self-supervised, pretrained acoustic encoders

no code implementations • 4 May 2023 • Jixuan Wang, Martin Radfar, Kai Wei, Clement Chung

It is challenging to extract semantic meaning directly from audio signals in spoken language understanding (SLU) due to the lack of textual information.

Automatic Speech Recognition (ASR) +3
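As a rough illustration of the joint-CTC idea in the title above, the sketch below attaches a CTC head to a pretrained acoustic encoder. This is a minimal PyTorch sketch under assumed names and dimensions (CTCHead, hidden_dim, vocab_size are placeholders), not the paper's implementation.

```python
# Minimal sketch: a CTC projection head on top of a pretrained acoustic encoder.
import torch
import torch.nn as nn

class CTCHead(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int = 768, vocab_size: int = 100):
        super().__init__()
        self.encoder = encoder                         # e.g. a self-supervised, pretrained model
        self.proj = nn.Linear(hidden_dim, vocab_size)  # vocab includes the CTC blank symbol

    def forward(self, audio_features):
        hidden = self.encoder(audio_features)          # (batch, time, hidden_dim)
        return self.proj(hidden).log_softmax(-1)       # (batch, time, vocab)

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
# nn.CTCLoss expects log_probs shaped (time, batch, vocab):
# loss = ctc_loss(log_probs.transpose(0, 1), targets, input_lengths, target_lengths)
```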

Sub-8-bit quantization for on-device speech recognition: a regularization-free approach

no code implementations • 17 Oct 2022 • Kai Zhen, Martin Radfar, Hieu Duy Nguyen, Grant P. Strimel, Nathan Susanj, Athanasios Mouchtaris

For on-device automatic speech recognition (ASR), quantization-aware training (QAT) is the ubiquitous way to trade off model predictive performance against efficiency.

Automatic Speech Recognition (ASR) +3
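The core mechanism behind QAT is fake quantization with a straight-through estimator (STE): weights are rounded to a low-bit grid in the forward pass while gradients flow through unchanged. Below is a minimal sketch; the 4-bit symmetric scheme and per-tensor scaling are illustrative assumptions, not the paper's regularization-free method.

```python
# Fake quantization with a straight-through estimator, the core of QAT.
import torch

def fake_quantize(w: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    qmax = 2 ** (num_bits - 1) - 1                  # symmetric signed integer range
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    q = (w / scale).round().clamp(-qmax - 1, qmax)  # snap weights to the integer grid
    dq = q * scale                                  # dequantize back to float
    # STE: forward pass sees dq, backward pass treats the op as identity on w
    return w + (dq - w).detach()
```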

ConvRNN-T: Convolutional Augmented Recurrent Neural Network Transducers for Streaming Speech Recognition

no code implementations • 29 Sep 2022 • Martin Radfar, Rohit Barnwal, Rupak Vignesh Swaminathan, Feng-Ju Chang, Grant P. Strimel, Nathan Susanj, Athanasios Mouchtaris

Very recently, the Conformer architecture was introduced as an alternative to LSTM layers; in it, the encoder of the RNN-T is replaced with a modified Transformer encoder composed of convolutional layers at the frontend and between attention layers.

Speech Recognition
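To make the architecture described above concrete, here is a simplified sketch of a Conformer-style block: self-attention followed by a convolution module with a residual connection. The GLU/ReLU choices, kernel size, and layer ordering are simplifications, not the exact Conformer (or ConvRNN-T) design.

```python
# Simplified Conformer-style block: attention + a gated depthwise conv module.
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    def __init__(self, dim: int, kernel_size: int = 15):
        super().__init__()
        self.pointwise_in = nn.Conv1d(dim, 2 * dim, 1)
        self.depthwise = nn.Conv1d(dim, dim, kernel_size,
                                   padding=kernel_size // 2, groups=dim)
        self.pointwise_out = nn.Conv1d(dim, dim, 1)

    def forward(self, x):                                    # x: (batch, time, dim)
        y = x.transpose(1, 2)                                # (batch, dim, time)
        y = nn.functional.glu(self.pointwise_in(y), dim=1)   # gated linear unit
        y = torch.relu(self.depthwise(y))
        y = self.pointwise_out(y)
        return x + y.transpose(1, 2)                         # residual connection

class ConformerishBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.conv = ConvModule(dim)

    def forward(self, x):
        a, _ = self.attn(x, x, x)          # self-attention over the sequence
        return self.conv(self.norm(x + a)) # conv module between attention layers
```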

Compute Cost Amortized Transformer for Streaming ASR

no code implementations • 5 Jul 2022 • Yi Xie, Jonathan Macoskey, Martin Radfar, Feng-Ju Chang, Brian King, Ariya Rastrow, Athanasios Mouchtaris, Grant P. Strimel

We present a streaming, Transformer-based end-to-end automatic speech recognition (ASR) architecture which achieves efficient neural inference through compute cost amortization.

Automatic Speech Recognition (ASR) +1
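One way to realize compute cost amortization is a learned per-frame gate that routes frames between a cheap path and a full Transformer layer; at inference, gated-off frames could skip the expensive branch entirely. The sketch below illustrates this idea under assumed module names; it is not the paper's exact mechanism.

```python
# Illustrative per-frame routing between a cheap path and a full Transformer layer.
import torch
import torch.nn as nn

class AmortizedLayer(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.big = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.small = nn.Linear(dim, dim)     # cheap fallback path
        self.gate = nn.Linear(dim, 1)        # per-frame routing decision

    def forward(self, x):                    # x: (batch, time, dim)
        p = torch.sigmoid(self.gate(x))      # soft gate in [0, 1] per frame
        # Training mixes both branches; inference could hard-threshold p and
        # evaluate the expensive branch only on frames the gate selects.
        return p * self.big(x) + (1 - p) * self.small(x)
```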

Multi-task RNN-T with Semantic Decoder for Streamable Spoken Language Understanding

no code implementations • 1 Apr 2022 • Xuandi Fu, Feng-Ju Chang, Martin Radfar, Kai Wei, Jing Liu, Grant P. Strimel, Kanthashree Mysore Sathyendra

In addition, the NLU model in the two-stage system is not streamable, as it must wait for the audio segments to complete processing, which ultimately impacts the latency of the SLU system.

Automatic Speech Recognition (ASR) +4

Speech Emotion Recognition Using Quaternion Convolutional Neural Networks

no code implementations • 31 Oct 2021 • Aneesh Muppidi, Martin Radfar

Specifically, the model achieves an accuracy of 77.87%, 70.46%, and 88.78% for the RAVDESS, IEMOCAP, and EMO-DB datasets, respectively.

Speech Emotion Recognition, Speech Recognition +1

FANS: Fusing ASR and NLU for on-device SLU

no code implementations • 31 Oct 2021 • Martin Radfar, Athanasios Mouchtaris, Siegfried Kunzmann, Ariya Rastrow

In this paper, we introduce FANS, a new end-to-end SLU model that fuses an ASR audio encoder to a multi-task NLU decoder to infer the intent, slot tags, and slot values directly from a given input audio, obviating the need for transcription.

Ranked #14 on Spoken Language Understanding on Fluent Speech Commands (using extra training data)

Decoder, Spoken Language Understanding
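A minimal sketch of the fusion idea described above: an audio encoder feeding two task heads, one utterance-level head for intent and one frame-level head for slot tags. The LSTM encoder, dimensions, and mean pooling are placeholder assumptions, not the FANS architecture.

```python
# Sketch: an audio encoder shared by utterance-level and frame-level SLU heads.
import torch
import torch.nn as nn

class FusedSLU(nn.Module):
    def __init__(self, dim: int = 256, n_intents: int = 50, n_slot_tags: int = 80):
        super().__init__()
        self.encoder = nn.LSTM(80, dim, num_layers=2, batch_first=True)
        self.intent_head = nn.Linear(dim, n_intents)   # utterance-level task
        self.slot_head = nn.Linear(dim, n_slot_tags)   # frame-level task

    def forward(self, feats):                 # feats: (batch, time, 80) log-mels
        h, _ = self.encoder(feats)            # (batch, time, dim)
        intent = self.intent_head(h.mean(1))  # pool over time for the intent
        slots = self.slot_head(h)             # per-frame slot-tag logits
        return intent, slots
```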

Multi-Channel Transformer Transducer for Speech Recognition

no code implementations • 30 Aug 2021 • Feng-Ju Chang, Martin Radfar, Athanasios Mouchtaris, Maurizio Omologo

In this paper, we present a novel speech recognition model, Multi-Channel Transformer Transducer (MCTT), which features end-to-end multi-channel training, low computation cost, and low latency so that it is suitable for streaming decoding in on-device speech recognition.

Speech Recognition

The Performance Evaluation of Attention-Based Neural ASR under Mixed Speech Input

2 code implementations • 3 Aug 2021 • Bradley He, Martin Radfar

In this paper, we present mixtures of speech signals to a popular attention-based neural ASR system, known as Listen, Attend, and Spell (LAS), at different target-to-interference ratios (TIRs) and measure the phoneme error rate.
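The mixing setup described above can be reproduced by scaling the interfering utterance so the target-to-interference power ratio matches a desired value in dB. A small NumPy sketch (function name and truncation strategy are illustrative):

```python
# Mix a target and an interfering utterance at a given TIR in dB.
import numpy as np

def mix_at_tir(target: np.ndarray, interference: np.ndarray, tir_db: float) -> np.ndarray:
    n = min(len(target), len(interference))          # align lengths by truncation
    target, interference = target[:n], interference[:n]
    p_t = np.mean(target ** 2)                       # target power
    p_i = np.mean(interference ** 2)                 # interference power
    # Scale the interference so 10*log10(p_t / p_i_scaled) == tir_db
    scale = np.sqrt(p_t / (p_i * 10 ** (tir_db / 10)))
    return target + scale * interference
```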

End-to-End Multi-Channel Transformer for Speech Recognition

no code implementations • 8 Feb 2021 • Feng-Ju Chang, Martin Radfar, Athanasios Mouchtaris, Brian King, Siegfried Kunzmann

Transformers are powerful neural architectures that allow integrating different modalities using attention mechanisms.

Decoder, Speech Recognition +1

Encoding Syntactic Knowledge in Transformer Encoder for Intent Detection and Slot Filling

no code implementations • 21 Dec 2020 • Jixuan Wang, Kai Wei, Martin Radfar, Weiwei Zhang, Clement Chung

We propose a novel Transformer encoder-based architecture with syntactical knowledge encoded for intent detection and slot filling.

Intent Detection, Multi-Task Learning +2
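A hedged sketch of the multi-task setup above: a shared Transformer encoder with intent and slot heads, plus a hypothetical auxiliary part-of-speech head standing in for the syntactic signal. How the paper actually encodes syntactic knowledge may differ; all names and sizes here are assumptions.

```python
# Shared Transformer encoder with intent, slot, and auxiliary syntactic heads.
import torch
import torch.nn as nn

class JointNLU(nn.Module):
    def __init__(self, vocab: int = 30000, dim: int = 256,
                 n_intents: int = 20, n_slots: int = 60, n_pos: int = 17):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.intent_head = nn.Linear(dim, n_intents)
        self.slot_head = nn.Linear(dim, n_slots)
        self.pos_head = nn.Linear(dim, n_pos)   # auxiliary syntactic (POS) task

    def forward(self, token_ids):               # token_ids: (batch, seq)
        h = self.encoder(self.embed(token_ids)) # (batch, seq, dim)
        # First token used as an utterance summary (a [CLS]-style assumption)
        return self.intent_head(h[:, 0]), self.slot_head(h), self.pos_head(h)
```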

Tie Your Embeddings Down: Cross-Modal Latent Spaces for End-to-end Spoken Language Understanding

no code implementations • 18 Nov 2020 • Bhuvan Agrawal, Markus Müller, Martin Radfar, Samridhi Choudhary, Athanasios Mouchtaris, Siegfried Kunzmann

In this paper, we treat an E2E system as a multi-modal model, with audio and text functioning as its two modalities, and use a cross-modal latent space (CMLS) architecture, where a shared latent space is learned between the 'acoustic' and 'text' embeddings.

Spoken Language Understanding, Triplet
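The "Triplet" tag suggests a triplet-style objective; below is a sketch of a cross-modal triplet loss that pulls an utterance's acoustic embedding toward its own text embedding and away from a mismatched one. The Euclidean distance and margin value are illustrative choices, not necessarily the paper's.

```python
# Cross-modal triplet loss over a shared acoustic/text latent space.
import torch
import torch.nn.functional as F

def cmls_triplet_loss(acoustic, text_pos, text_neg, margin: float = 0.2):
    # All inputs: (batch, dim) embeddings already projected into the shared space
    d_pos = F.pairwise_distance(acoustic, text_pos)  # anchor vs. matching text
    d_neg = F.pairwise_distance(acoustic, text_neg)  # anchor vs. mismatched text
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```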

End-to-End Neural Transformer Based Spoken Language Understanding

no code implementations • 12 Aug 2020 • Martin Radfar, Athanasios Mouchtaris, Siegfried Kunzmann

In this paper, we introduce an end-to-end neural transformer-based SLU model that can predict the variable-length domain, intent, and slot vectors embedded in an audio signal, with no intermediate token prediction architecture.

Spoken Language Understanding
