no code implementations • 21 May 2022 • Abdelrahman Mohamed, Hung-Yi Lee, Lasse Borgholt, Jakob D. Havtorn, Joakim Edin, Christian Igel, Katrin Kirchhoff, Shang-Wen Li, Karen Livescu, Lars Maaløe, Tara N. Sainath, Shinji Watanabe
Although self-supervised speech representation learning is still a nascent research area, it is closely related to acoustic word embeddings and learning with zero lexical resources, both of which have seen active research for many years.
no code implementations • 16 Dec 2021 • Saket Dingliwal, Ashish Shenoy, Sravan Bodapati, Ankur Gandhe, Ravi Teja Gadde, Katrin Kirchhoff
Automatic Speech Recognition (ASR) systems have found use in numerous industrial applications across very diverse domains, creating a need to adapt to new domains with small memory and deployment overhead.
no code implementations • 10 Dec 2021 • Rohit Paturi, Sundararajan Srinivasan, Katrin Kirchhoff, Daniel Garcia-Romero
Moreover, most of these models are trained on synthetic mixtures and do not generalize to real conversational data.
no code implementations • 30 Nov 2021 • Sundararajan Srinivasan, Zhaocheng Huang, Katrin Kirchhoff
To improve the efficacy of our approach, we propose a novel estimate of the quality of the emotion predictions to condition teacher-student training.
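The quality estimate itself is the paper's contribution and is not specified in this snippet; as a heavily hedged illustration of the general idea, the sketch below weights a per-example distillation loss by the teacher's softmax confidence. The confidence measure and the function name `weighted_distillation_loss` are assumed stand-ins, not the paper's estimator.

```python
import torch
import torch.nn.functional as F

def weighted_distillation_loss(student_logits, teacher_logits):
    """Teacher-student (distillation) loss, weighted per example by an
    assumed quality proxy: the teacher's maximum softmax probability."""
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    confidence = teacher_probs.max(dim=-1).values          # (B,) quality proxy
    per_example_kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        teacher_probs,
        reduction="none",
    ).sum(dim=-1)                                          # (B,) KL per example
    return (confidence * per_example_kl).mean()

# Toy usage: a batch of 4 utterances over 8 emotion classes.
loss = weighted_distillation_loss(torch.randn(4, 8), torch.randn(4, 8))
```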
no code implementations • 13 Oct 2021 • Saket Dingliwal, Ashish Shenoy, Sravan Bodapati, Ankur Gandhe, Ravi Teja Gadde, Katrin Kirchhoff
In this work, we overcome the problem using prompt-tuning, a methodology that trains a small number of domain token embedding parameters to prime a transformer-based LM to a particular domain.
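A minimal sketch of prompt-tuning in this spirit: the pretrained LM is frozen, and only a small matrix of domain token embeddings, prepended to the input embeddings, is trained. GPT-2 as the backbone, the prompt length, and all names here are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
for p in lm.parameters():                     # freeze the pretrained LM
    p.requires_grad = False

n_prompt_tokens = 20                          # assumed prompt length
d = lm.config.n_embd
prompt = nn.Parameter(torch.randn(n_prompt_tokens, d) * 0.02)  # the only trainable weights

def domain_primed_logits(input_ids):
    tok_emb = lm.get_input_embeddings()(input_ids)              # (B, T, d)
    prefix = prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    return lm(inputs_embeds=torch.cat([prefix, tok_emb], dim=1)).logits

ids = tokenizer("patient reports chest pain", return_tensors="pt")["input_ids"]
logits = domain_primed_logits(ids)
optimizer = torch.optim.Adam([prompt], lr=1e-3)  # optimize only the domain prompt
```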
no code implementations • 10 Sep 2021 • Dhanush Bekal, Ashish Shenoy, Monica Sunkara, Sravan Bodapati, Katrin Kirchhoff
Accurate recognition of slot values, such as domain-specific words or named entities, by automatic speech recognition (ASR) systems forms the core of goal-oriented dialogue systems.
no code implementations • ACL (ECNLP) 2021 • Ashish Shenoy, Sravan Bodapati, Katrin Kirchhoff
In this paper, we investigate various techniques to improve contextualization, content word robustness and domain adaptation of a Transformer-XL neural language model (NLM) to rescore ASR N-best hypotheses.
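For reference, N-best rescoring itself follows a standard recipe: each first-pass ASR hypothesis is re-ranked by a weighted combination of its ASR score and an external LM score. The interpolation weight `lm_weight` and the `lm_score` callable below are assumptions, and the Transformer-XL NLM from the paper is swapped out for a toy scorer for brevity.

```python
def rescore_nbest(nbest, lm_score, lm_weight=0.5):
    """nbest: list of (hypothesis_text, asr_log_score) pairs.
    Returns the hypothesis with the best interpolated score."""
    rescored = [
        (hyp, asr_logp + lm_weight * lm_score(hyp))
        for hyp, asr_logp in nbest
    ]
    return max(rescored, key=lambda pair: pair[1])[0]

# Toy usage with a word-count penalty standing in for a neural LM score:
best = rescore_nbest(
    [("recognize speech", -4.2), ("wreck a nice beach", -4.0)],
    lm_score=lambda s: -0.5 * len(s.split()),
)
print(best)  # "recognize speech"
```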
no code implementations • 10 Jun 2021 • Scott Seyfarth, Sundararajan Srinivasan, Katrin Kirchhoff
Determining the cause of diarization errors is difficult because speaker voice acoustics and conversation structure co-vary, and the interactions between acoustics, conversational structure, and diarization accuracy are complex.
no code implementations • 21 Apr 2021 • Ashish Shenoy, Sravan Bodapati, Monica Sunkara, Srikanth Ronanki, Katrin Kirchhoff
Neural Language Models (NLMs), when trained and evaluated with context spanning multiple utterances, have been shown to consistently outperform both conventional n-gram language models and NLMs that use limited context.
no code implementations • 18 Mar 2021 • Ashish Shenoy, Sravan Bodapati, Katrin Kirchhoff
In this paper, we explore different ways to incorporate context into an LSTM-based NLM in order to model long-range dependencies and improve speech recognition.
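One simple way to give an LSTM LM cross-utterance context, sketched below, is to carry the recurrent state across utterance boundaries instead of resetting it. This is a generic illustration under assumed names and sizes, not the paper's specific mechanism.

```python
import torch
import torch.nn as nn

class ContextualLSTMLM(nn.Module):
    def __init__(self, vocab_size, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, state=None):
        h, state = self.lstm(self.embed(token_ids), state)
        return self.out(h), state  # return state to seed the next utterance

lm = ContextualLSTMLM(vocab_size=1000)
state = None
for utt in [torch.randint(0, 1000, (1, 12)), torch.randint(0, 1000, (1, 8))]:
    logits, state = lm(utt, state)  # hidden state flows across utterances
```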
no code implementations • 12 Feb 2021 • Monica Sunkara, Chaitanya Shivade, Sravan Bodapati, Katrin Kirchhoff
We propose an efficient and robust neural solution for ITN leveraging transformer-based seq2seq models and FST-based text normalization techniques for data preparation.
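A minimal sketch of the data-preparation idea: apply text normalization to written text to synthesize spoken-form inputs, yielding (spoken, written) pairs on which a seq2seq ITN model can be trained. The trivial `normalize` rules below are an assumed stand-in for a real FST grammar (e.g. one built with pynini).

```python
import re

NUM_WORDS = {"1": "one", "2": "two", "3": "three", "10": "ten"}

def normalize(written: str) -> str:
    """Written -> spoken form; a toy stand-in for an FST-based normalizer."""
    spoken = written.lower().replace("$", "")
    spoken = re.sub(r"\b(\d+)\b",
                    lambda m: NUM_WORDS.get(m.group(1), m.group(1)), spoken)
    return spoken

written_corpus = ["I have 2 cats", "Meet at 10 am"]
pairs = [(normalize(w), w) for w in written_corpus]  # (source, target) for seq2seq
print(pairs)  # [('i have two cats', 'I have 2 cats'), ('meet at ten am', 'Meet at 10 am')]
```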
no code implementations • 30 Nov 2020 • Siddharth Dalmia, Yuzong Liu, Srikanth Ronanki, Katrin Kirchhoff
We live in a world where 60% of the population can speak two or more languages fluently.
no code implementations • NAACL 2021 • Ethan A. Chi, Julian Salazar, Katrin Kirchhoff
Non-autoregressive models greatly improve decoding speed over typical sequence-to-sequence models, but suffer from degraded performance.
no code implementations • 3 Aug 2020 • Monica Sunkara, Srikanth Ronanki, Dhanush Bekal, Sravan Bodapati, Katrin Kirchhoff
Experiments conducted on the Fisher corpus show that our proposed approach achieves ~6-9% and ~3-4% absolute improvements in F1 score over the baseline BLSTM model on reference transcripts and ASR outputs, respectively.
no code implementations • WS 2020 • Monica Sunkara, Srikanth Ronanki, Kalpit Dixit, Sravan Bodapati, Katrin Kirchhoff
We also present techniques for domain and task specific adaptation by fine-tuning masked language models with medical domain data.
1 code implementation • 3 Dec 2019 • Shaoshi Ling, Yuzong Liu, Julian Salazar, Katrin Kirchhoff
We propose a novel approach to semi-supervised automatic speech recognition (ASR).
5 code implementations • ACL 2020 • Julian Salazar, Davis Liang, Toan Q. Nguyen, Katrin Kirchhoff
Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood scores (PLLs), which are computed by masking tokens one by one.
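A minimal sketch of PLL scoring under these definitions, using the Hugging Face transformers library: each token is masked in turn, and the log-probability the MLM assigns to the true token is summed. The choice of `bert-base-uncased` is illustrative.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log P(token | rest) with each token masked one at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip [CLS] (position 0) and [SEP] (last position).
    for i in range(1, ids.size(0) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

print(pseudo_log_likelihood("the cat sat on the mat"))
```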
1 code implementation • 30 Jun 2019 • Shaoshi Ling, Julian Salazar, Yuzong Liu, Katrin Kirchhoff
We introduce BERTphone, a Transformer encoder trained on large speech corpora that outputs phonetically aware contextual representation vectors that can be used for both speaker and language recognition.
no code implementations • WS 2019 • Arshit Gupta, John Hewitt, Katrin Kirchhoff
With the advent of conversational assistants such as Amazon Alexa and Google Now, dialogue systems are gaining a lot of traction, especially in industrial settings.
1 code implementation • 22 Jan 2019 • Julian Salazar, Katrin Kirchhoff, Zhiheng Huang
The success of self-attention in NLP has led to recent applications in end-to-end encoder-decoder architectures for speech recognition.
no code implementations • WS 2018 • Angli Liu, Katrin Kirchhoff
Out-of-vocabulary word translation is a major problem for the translation of low-resource languages that suffer from a lack of parallel training data.
no code implementations • 4 Oct 2017 • Heike Adel, Ngoc Thang Vu, Katrin Kirchhoff, Dominic Telaar, Tanja Schultz
The experimental results reveal that Brown word clusters, part-of-speech tags and open-class words are the most effective at reducing the perplexity of factored language models on the Mandarin-English Code-Switching corpus SEAME.
no code implementations • 7 Sep 2015 • Katrin Kirchhoff, Bing Zhao, Wen Wang
Statistical machine translation for dialectal Arabic is characterized by a lack of data, since data acquisition involves the transcription and translation of spoken language.