Search Results for author: Yingyi Ma

Found 7 papers, 1 paper with code

Effective internal language model training and fusion for factorized transducer model

no code implementations • 2 Apr 2024 • Jinxi Guo, Niko Moritz, Yingyi Ma, Frank Seide, Chunyang Wu, Jay Mahadeokar, Ozlem Kalinli, Christian Fuegen, Mike Seltzer

However, even with the adoption of factorized transducer models, limited improvement has been observed compared to shallow fusion.
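For context, shallow fusion, the baseline this snippet compares against, typically combines the first-pass ASR score with an external language model score through a log-linear interpolation weight. A minimal sketch of n-best rescoring with shallow fusion, using made-up hypotheses, scores, and an assumed weight `LM_WEIGHT` (not taken from the paper):

```python
# Illustrative n-best hypotheses with log-probabilities from a first-pass ASR
# model (e.g., a transducer) and from an external language model. The numbers
# and the interpolation weight are assumptions for illustration only.
nbest = [
    {"text": "play the song", "asr_logp": -2.1, "lm_logp": -8.4},
    {"text": "play this song", "asr_logp": -2.3, "lm_logp": -7.1},
]

LM_WEIGHT = 0.3  # assumed interpolation weight, normally tuned on a dev set

def shallow_fusion_score(hyp):
    # Shallow fusion: log-linear combination of the ASR score and the
    # external LM score.
    return hyp["asr_logp"] + LM_WEIGHT * hyp["lm_logp"]

best = max(nbest, key=shallow_fusion_score)
print(best["text"])  # hypothesis with the highest fused score
```

The paper itself concerns internal language model training and fusion for factorized transducers; the sketch above only illustrates the shallow fusion baseline it is compared against.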

Language Modelling

Correction Focused Language Model Training for Speech Recognition

no code implementations • 17 Oct 2023 • Yingyi Ma, Zhe Liu, Ozlem Kalinli

Language models (LMs) have been commonly adopted to boost the performance of automatic speech recognition (ASR), particularly in domain adaptation tasks.

Automatic Speech Recognition (ASR) +3

Recovering from Privacy-Preserving Masking with Large Language Models

no code implementations • 12 Sep 2023 • Arpita Vats, Zhe Liu, Peng Su, Debjyoti Paul, Yingyi Ma, Yutong Pang, Zeeshan Ahmed, Ozlem Kalinli

To effectively perform adaptation, textual data of users is typically stored on servers or their local devices, where downstream natural language processing (NLP) models can be directly trained using such in-domain data.

Language Modelling • Privacy Preserving

Contextual Biasing of Named-Entities with Large Language Models

no code implementations • 1 Sep 2023 • Chuanneng Sun, Zeeshan Ahmed, Yingyi Ma, Zhe Liu, Lucas Kabela, Yutong Pang, Ozlem Kalinli

We propose to leverage prompts for an LLM, without fine-tuning, during rescoring; the prompts incorporate a biasing list and few-shot examples that serve as additional information when calculating the score for a hypothesis.
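A minimal sketch of how such a rescoring prompt could be assembled; the template, the biasing list contents, and the few-shot example below are hypothetical illustrations, not the paper's actual prompt or code:

```python
# Hypothetical illustration of an LLM rescoring prompt that injects a biasing
# list of named entities plus a few-shot example. Everything here is assumed
# for illustration and is not taken from the paper.
biasing_list = ["Yingyi Ma", "Ozlem Kalinli", "Menlo Park"]
few_shot = [
    ("call jon smith on his cell", "call John Smith on his cell"),
]

def build_prompt(hypothesis: str) -> str:
    lines = ["Relevant entities: " + ", ".join(biasing_list), ""]
    for noisy, corrected in few_shot:
        lines.append(f"Hypothesis: {noisy}")
        lines.append(f"Likely transcript: {corrected}")
        lines.append("")
    lines.append(f"Hypothesis: {hypothesis}")
    lines.append("Likely transcript:")
    return "\n".join(lines)

# The LLM (used without fine-tuning) would score each n-best hypothesis
# conditioned on a prompt like this, e.g. via its token log-likelihoods, and
# those scores would be combined with the first-pass ASR scores for rescoring.
print(build_prompt("please call ying yi ma"))
```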

Automatic Speech Recognition (ASR) +2

Proximal Mapping for Deep Regularization

1 code implementation • NeurIPS 2020 • Mao Li, Yingyi Ma, Xinhua Zhang

Underpinning the success of deep learning are effective regularizations that allow a variety of priors in the data to be modeled.
