3 code implementations • 5 Sep 2018 • George Sterpu, Christian Saam, Naomi Harte
Automatic speech recognition can potentially benefit from lip motion patterns, which complement acoustic speech and improve overall recognition performance, particularly in noise.
1 code implementation • 17 Apr 2020 • George Sterpu, Christian Saam, Naomi Harte
A recently proposed multimodal fusion strategy, AV Align, based on state-of-the-art sequence-to-sequence neural networks, attempts to model this relationship by explicitly aligning the acoustic and visual representations of speech.
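The alignment idea behind AV Align can be illustrated with cross-modal attention: each acoustic frame attends over the visual frames and the resulting visual context is fused back into the acoustic representation. The sketch below is a simplified numpy illustration, not the paper's implementation; the actual model uses learned projections and trained encoders, and the function name and additive fusion here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def av_align_fusion(audio, video):
    """Illustrative cross-modal attention: audio frames attend over video frames.

    audio: (Ta, d) acoustic encoder states
    video: (Tv, d) visual encoder states
    Returns fused acoustic states of shape (Ta, d).
    """
    d = audio.shape[-1]
    scores = audio @ video.T / np.sqrt(d)   # (Ta, Tv) alignment scores
    weights = softmax(scores, axis=-1)      # attention over video frames
    context = weights @ video               # (Ta, d) visual context per audio frame
    return audio + context                  # simple additive fusion (assumption)

rng = np.random.default_rng(0)
audio = rng.standard_normal((50, 8))   # e.g. 50 acoustic frames
video = rng.standard_normal((20, 8))   # e.g. 20 visual frames (lower frame rate)
fused = av_align_fusion(audio, video)
print(fused.shape)  # (50, 8)
```

Because the attention weights form a (Ta, Tv) distribution, the model can in principle be inspected to see which video frames each audio frame aligned to.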
1 code implementation • 19 May 2020 • George Sterpu, Christian Saam, Naomi Harte
The audio-visual speech fusion strategy AV Align has shown significant performance improvements in audio-visual speech recognition (AVSR) on the challenging LRS2 dataset.
1 code implementation • 8 Jun 2020 • George Sterpu, Christian Saam, Naomi Harte
Sequence-to-sequence models, in particular the Transformer, achieve state-of-the-art results in Automatic Speech Recognition.
1 code implementation • 14 Dec 2020 • George Sterpu, Naomi Harte
In recent years, Automatic Speech Recognition (ASR) technology has approached human-level performance on conversational speech under relatively clean listening conditions.
no code implementations • 29 May 2018 • George Sterpu, Christian Saam, Naomi Harte
Finding visual features and suitable models for lipreading tasks that are more complex than a well-constrained vocabulary has proven challenging.
no code implementations • 18 Nov 2023 • Gabriel Cosache, Francisco Salgado, Cosmin Rotariu, George Sterpu, Rishabh Jain, Peter Corcoran
An overview is given of the DAVID Smart-Toy platform, one of the first Edge AI platform designs to incorporate advanced low-power data processing by neural inference models co-located with the relevant image or audio sensors.