Search Results for author: Hank Liao

Found 13 papers, 2 papers with code

DiarizationLM: Speaker Diarization Post-Processing with Large Language Models

2 code implementations · 7 Jan 2024 · Quan Wang, Yiling Huang, Guanlong Zhao, Evan Clark, Wei Xia, Hank Liao

In this paper, we introduce DiarizationLM, a framework to leverage large language models (LLMs) to post-process the outputs from a speaker diarization system.
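The core idea can be sketched as a serialize/complete/parse loop: flatten the diarized transcript into tagged text, let an LLM rewrite it, and parse the speaker labels back out. The `<spk:N>` tag format and the `llm_complete` callable below are illustrative assumptions, not the paper's exact interface.

```python
# Hypothetical sketch of the DiarizationLM idea: serialize diarized ASR
# output into text, post-process it with an LLM, and parse the result.
# The tag format is an assumption for illustration.

def serialize(words, speakers):
    """Turn parallel word/speaker lists into a tagged transcript string."""
    out, prev = [], None
    for w, s in zip(words, speakers):
        if s != prev:
            out.append(f"<spk:{s}>")
            prev = s
        out.append(w)
    return " ".join(out)

def deserialize(text):
    """Parse a tagged transcript back into word and speaker lists."""
    words, speakers, cur = [], [], None
    for tok in text.split():
        if tok.startswith("<spk:") and tok.endswith(">"):
            cur = int(tok[5:-1])
        else:
            words.append(tok)
            speakers.append(cur)
    return words, speakers

def post_process(words, speakers, llm_complete):
    """Post-process diarization output; llm_complete maps str -> str."""
    prompt = serialize(words, speakers)
    return deserialize(llm_complete(prompt))
```

With an identity `llm_complete`, the round trip leaves labels unchanged; a real LLM would return the same format with corrected speaker tags.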

Automatic Speech Recognition (ASR) +3

On Robustness to Missing Video for Audiovisual Speech Recognition

no code implementations · 13 Dec 2023 · Oscar Chang, Otavio Braga, Hank Liao, Dmitriy Serdyuk, Olivier Siohan

Multi-modal models need to be robust: missing video frames should not degrade an audiovisual model's performance below that of a single-modality, audio-only model.
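One common way to train for this kind of robustness is to randomly blank out video frames during training so the model cannot over-rely on the visual stream. The sketch below illustrates that general recipe; the drop probability and zero-masking choice are illustrative assumptions, not details from the paper.

```python
import random

def drop_video_frames(video_frames, drop_prob=0.3, rng=random):
    """Randomly zero out video frames (each frame is a feature list) so an
    audiovisual model learns to fall back on audio. drop_prob=0.3 is an
    illustrative choice, not a value from the paper."""
    zero = [0.0] * len(video_frames[0])
    return [zero if rng.random() < drop_prob else f for f in video_frames]
```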

Speech Recognition

Towards Word-Level End-to-End Neural Speaker Diarization with Auxiliary Network

no code implementations · 15 Sep 2023 · Yiling Huang, Weiran Wang, Guanlong Zhao, Hank Liao, Wei Xia, Quan Wang

Whether it is the conventional modularized approach or the more recent end-to-end neural diarization (EEND), an additional automatic speech recognition (ASR) model and an orchestration algorithm are required to associate the speaker labels with recognized words.
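A minimal orchestration algorithm of the kind mentioned above assigns each recognized word the speaker whose diarization segment overlaps it most in time. This is a generic sketch of that association step, not the paper's method (which removes the need for it).

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the temporal overlap between two intervals (0 if disjoint)."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def orchestrate(asr_words, diar_segments):
    """asr_words: [(word, start, end)]; diar_segments: [(speaker, start, end)].
    Label each word with the speaker whose segment overlaps it most."""
    labeled = []
    for word, ws, we in asr_words:
        best = max(diar_segments,
                   key=lambda seg: overlap(ws, we, seg[1], seg[2]))
        labeled.append((word, best[0]))
    return labeled
```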

Automatic Speech Recognition (ASR) +4

USM-SCD: Multilingual Speaker Change Detection Based on Large Pretrained Foundation Models

no code implementations · 14 Sep 2023 · Guanlong Zhao, Yongqiang Wang, Jason Pelecanos, Yu Zhang, Hank Liao, Yiling Huang, Han Lu, Quan Wang

We show that the USM-SCD model can achieve more than 75% average speaker change detection F1 score across a test set that consists of data from 96 languages.

Change Detection

Conformers are All You Need for Visual Speech Recognition

no code implementations · 17 Feb 2023 · Oscar Chang, Hank Liao, Dmitriy Serdyuk, Ankit Shah, Olivier Siohan

We achieve a new state-of-the-art of 12.8% WER for visual speech recognition on the TED LRS3 dataset, which rivals the performance of audio-only models from just four years ago.

Visual Speech Recognition

End-to-End Multi-Person Audio/Visual Automatic Speech Recognition

no code implementations · 11 May 2022 · Otavio Braga, Takaki Makino, Olivier Siohan, Hank Liao

Traditionally, audio-visual automatic speech recognition has been studied under the assumption that the speaking face on the visual signal is the face matching the audio.

Automatic Speech Recognition (ASR) +2

Recurrent Neural Network Transducer for Audio-Visual Speech Recognition

1 code implementation · 8 Nov 2019 · Takaki Makino, Hank Liao, Yannis Assael, Brendan Shillingford, Basilio Garcia, Otavio Braga, Olivier Siohan

This work presents a large-scale audio-visual speech recognition system based on a recurrent neural network transducer (RNN-T) architecture.

Ranked #5 on Audio-Visual Speech Recognition on LRS3-TED (using extra training data)

Audio-Visual Speech Recognition · Lipreading +2

Adversarial Training for Multilingual Acoustic Modeling

no code implementations · 17 Jun 2019 · Ke Hu, Hasim Sak, Hank Liao

In this work, we apply the domain adversarial network to encourage the shared layers of a multilingual model to learn language-invariant features.
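The standard mechanism behind a domain adversarial network is a gradient reversal layer: identity in the forward pass, gradient negation (scaled by a weight lambda) in the backward pass, so the shared encoder learns to *fool* the language classifier. Below is a framework-free, manual-autograd sketch of that layer, assuming a simple list-of-floats gradient representation.

```python
class GradientReversal:
    """Gradient reversal layer sketch for domain-adversarial training:
    identity on the forward pass, gradients scaled by -lam on the backward
    pass. Placed between the shared encoder and the language classifier,
    it pushes shared features toward language invariance."""

    def __init__(self, lam=1.0):
        self.lam = lam  # adversarial weight (a tunable hyperparameter)

    def forward(self, x):
        return x  # activations pass through unchanged

    def backward(self, grad):
        # Negate (and scale) the classifier's gradient before it reaches
        # the shared layers, turning minimization into maximization.
        return [-self.lam * g for g in grad]
```

In a full model, the ASR loss gradient flows to the shared layers normally, while the language-classifier gradient flows through this layer and arrives negated.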

Automatic Speech Recognition (ASR) +2

Neural Language Modeling with Visual Features

no code implementations · 7 Mar 2019 · Antonios Anastasopoulos, Shankar Kumar, Hank Liao

We report analysis that provides insights into why our multimodal language model improves upon a standard RNN language model.

Language Modelling

Large-Scale Visual Speech Recognition

no code implementations · ICLR 2019 · Brendan Shillingford, Yannis Assael, Matthew W. Hoffman, Thomas Paine, Cían Hughes, Utsav Prabhu, Hank Liao, Hasim Sak, Kanishka Rao, Lorrayne Bennett, Marie Mulville, Ben Coppin, Ben Laurie, Andrew Senior, Nando de Freitas

To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video).

Ranked #10 on Lipreading on LRS3-TED (using extra training data)

Lipreading · Speech Recognition +1

Lattice Rescoring Strategies for Long Short Term Memory Language Models in Speech Recognition

no code implementations · 15 Nov 2017 · Shankar Kumar, Michael Nirschl, Daniel Holtmann-Rice, Hank Liao, Ananda Theertha Suresh, Felix Yu

Recurrent neural network (RNN) language models (LMs) and Long Short Term Memory (LSTM) LMs, a variant of RNN LMs, have been shown to outperform traditional N-gram LMs on speech recognition tasks.
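In its simplest n-best form, rescoring combines each hypothesis's acoustic score with an interpolation of its n-gram LM score and an LSTM LM score, then re-ranks. The sketch below uses log-linear interpolation of log-probabilities for simplicity; the paper studies several lattice-level strategies, and `lstm_lm` here is a stand-in callable, not a real model.

```python
def rescore_nbest(hypotheses, lstm_lm, lm_weight=0.5):
    """Re-rank an n-best list with an LSTM LM.

    hypotheses: [(words, acoustic_logp, ngram_logp)]
    lstm_lm: callable mapping a word list to a log-probability
    lm_weight: interpolation weight for the LSTM LM (illustrative default)
    Returns the word sequence with the best combined score."""
    best_score, best_words = float("-inf"), None
    for words, acoustic_logp, ngram_logp in hypotheses:
        # Log-linear interpolation of the two language model scores.
        lm_logp = (1 - lm_weight) * ngram_logp + lm_weight * lstm_lm(words)
        total = acoustic_logp + lm_logp
        if total > best_score:
            best_score, best_words = total, words
    return best_words
```

Lattice rescoring generalizes this by applying the LSTM LM along lattice paths rather than a flat n-best list, which is where the strategies in the paper come in.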

Speech Recognition

Neural Speech Recognizer: Acoustic-to-Word LSTM Model for Large Vocabulary Speech Recognition

no code implementations · 31 Oct 2016 · Hagen Soltau, Hank Liao, Hasim Sak

We present results that show it is possible to build a competitive, greatly simplified, large vocabulary continuous speech recognition system with whole words as acoustic units.

Language Modelling · Speech Recognition +1
