Search Results for author: Tassadaq Hussain

Found 6 papers, 1 paper with code

Audio-Visual Speech Enhancement in Noisy Environments via Emotion-Based Contextual Cues

no code implementations26 Feb 2024 Tassadaq Hussain, Kia Dashtipour, Yu Tsao, Amir Hussain

By integrating emotional features, the proposed system achieves significant improvements in both objective and subjective assessments of speech quality and intelligibility, especially in challenging noise environments.

Speech Enhancement
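
The entry above describes integrating emotional cues into audio-visual speech enhancement but not the fusion mechanism itself. Below is a minimal, hypothetical PyTorch sketch of one way an utterance-level emotion embedding could be fused with per-frame audio-visual features in a mask-based enhancer; all module names and dimensions are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: fusing an utterance-level emotion embedding with
# audio-visual features for mask-based speech enhancement. Dimensions and
# module names are illustrative and are NOT taken from the paper.
import torch
import torch.nn as nn

class EmotionAwareAVSE(nn.Module):
    def __init__(self, n_freq=257, visual_dim=512, emotion_dim=128, hidden=256):
        super().__init__()
        self.fusion = nn.Linear(n_freq + visual_dim + emotion_dim, hidden)
        self.rnn = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_mag, visual_feat, emotion_emb):
        # noisy_mag:   (B, T, n_freq)     magnitude spectrogram of noisy speech
        # visual_feat: (B, T, visual_dim) per-frame lip/face features
        # emotion_emb: (B, emotion_dim)   utterance-level emotion embedding
        emo = emotion_emb.unsqueeze(1).expand(-1, noisy_mag.size(1), -1)
        x = torch.cat([noisy_mag, visual_feat, emo], dim=-1)
        h, _ = self.rnn(torch.relu(self.fusion(x)))
        return self.mask(h) * noisy_mag  # enhanced magnitude estimate
```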

Audio-Visual Speech Enhancement and Separation by Utilizing Multi-Modal Self-Supervised Embeddings

no code implementations31 Oct 2022 I-Chun Chern, Kuo-Hsuan Hung, Yi-Ting Chen, Tassadaq Hussain, Mandar Gogate, Amir Hussain, Yu Tsao, Jen-Cheng Hou

In summary, our results confirm the effectiveness of our proposed model for the AVSS task with proper fine-tuning strategies, demonstrating that multi-modal self-supervised embeddings obtained from AV-HuBERT can be generalized to audio-visual regression tasks.

Automatic Speech Recognition (ASR) +6
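
The snippet above refers to reusing multi-modal self-supervised embeddings (AV-HuBERT) for audio-visual separation as a regression task. The sketch below shows one plausible setup in which a pretrained audio-visual encoder is frozen (or partially fine-tuned) and a lightweight head predicts per-source masks; the encoder interface, embedding size, and fine-tuning strategy are assumptions, not the paper's recipe.

```python
# Hypothetical sketch: feeding frozen (or partially fine-tuned) self-supervised
# audio-visual embeddings into a regression head that predicts per-source masks.
# `encoder` stands in for an AV-HuBERT-style model; its interface, the embedding
# size, and the freezing strategy are assumptions, not the paper's recipe.
import torch
import torch.nn as nn

class AVSSHead(nn.Module):
    def __init__(self, encoder, emb_dim=768, n_freq=257, n_sources=2, freeze=True):
        super().__init__()
        self.encoder = encoder
        if freeze:
            for p in self.encoder.parameters():
                p.requires_grad = False       # fine-tune only the head
        self.proj = nn.Linear(emb_dim, 512)
        self.rnn = nn.LSTM(512, 512, num_layers=2, batch_first=True)
        self.masks = nn.Linear(512, n_freq * n_sources)
        self.n_freq, self.n_sources = n_freq, n_sources

    def forward(self, audio, video, noisy_mag):
        emb = self.encoder(audio, video)      # (B, T, emb_dim), assumed interface
        h, _ = self.rnn(torch.relu(self.proj(emb)))
        m = torch.sigmoid(self.masks(h))
        m = m.view(m.size(0), m.size(1), self.n_sources, self.n_freq)
        return m * noisy_mag.unsqueeze(2)     # (B, T, n_sources, n_freq) estimates
```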

A Novel Speech Intelligibility Enhancement Model based on Canonical Correlation and Deep Learning

no code implementations11 Feb 2022 Tassadaq Hussain, Muhammad Diyan, Mandar Gogate, Kia Dashtipour, Ahsan Adeel, Yu Tsao, Amir Hussain

Current deep learning (DL) based approaches to speech intelligibility enhancement in noisy environments are often trained to minimise the feature distance between noise-free speech and enhanced speech signals.

Speech Enhancement

A Speech Intelligibility Enhancement Model based on Canonical Correlation and Deep Learning for Hearing-Assistive Technologies

no code implementations8 Feb 2022 Tassadaq Hussain, Muhammad Diyan, Mandar Gogate, Kia Dashtipour, Ahsan Adeel, Yu Tsao, Amir Hussain

Current deep learning (DL) based approaches to speech intelligibility enhancement in noisy environments are generally trained to minimise the distance between clean and enhanced speech features.

Speech Enhancement
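
The two canonical-correlation entries above note that standard DL approaches minimise a feature distance between clean and enhanced speech, whereas the proposed models exploit canonical correlation instead. The function below is a greatly simplified, hedged stand-in for such a correlation-based objective: it maximises the per-dimension correlation between enhanced and clean speech features rather than minimising their L2 distance, and is not the papers' exact CCA formulation.

```python
# Simplified sketch of a correlation-based objective: maximise the per-dimension
# correlation between enhanced and clean speech features instead of minimising
# their L2 distance. This is an illustration, not the papers' exact CCA loss.
import torch

def correlation_loss(enhanced, clean, eps=1e-8):
    # enhanced, clean: (B, T, F) feature tensors
    e = enhanced.reshape(-1, enhanced.size(-1))
    c = clean.reshape(-1, clean.size(-1))
    e = e - e.mean(dim=0, keepdim=True)
    c = c - c.mean(dim=0, keepdim=True)
    corr = (e * c).sum(dim=0) / (e.norm(dim=0) * c.norm(dim=0) + eps)
    return 1.0 - corr.mean()  # minimising this maximises average correlation
```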

A Novel Temporal Attentive-Pooling based Convolutional Recurrent Architecture for Acoustic Signal Enhancement

no code implementations24 Jan 2022 Tassadaq Hussain, Wei-Chien Wang, Mandar Gogate, Kia Dashtipour, Yu Tsao, Xugang Lu, Ahsan Adeel, Amir Hussain

To address this problem, we propose to integrate a novel temporal attentive-pooling (TAP) mechanism into a conventional convolutional recurrent neural network, termed TAP-CRNN.
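
The abstract names the temporal attentive-pooling mechanism but not its internals. Below is a hedged sketch of one plausible TAP block sitting on top of a small convolutional-recurrent front end; the layer sizes and the way the pooled context is reused are illustrative assumptions, not the TAP-CRNN specification.

```python
# Hypothetical sketch of a temporal attentive-pooling (TAP) block on top of a
# small convolutional-recurrent front end. Layer sizes and the reuse of the
# pooled context are illustrative assumptions, not the TAP-CRNN specification.
import torch
import torch.nn as nn

class TAPCRNNSketch(nn.Module):
    def __init__(self, n_freq=257, channels=64, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.GRU(channels * n_freq, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)            # scores each time frame
        self.out = nn.Linear(hidden * 2, n_freq)   # frame features + pooled context

    def forward(self, noisy_mag):                  # (B, T, n_freq)
        x = self.conv(noisy_mag.unsqueeze(1))      # (B, C, T, F)
        x = x.permute(0, 2, 1, 3).flatten(2)       # (B, T, C*F)
        h, _ = self.rnn(x)                         # (B, T, hidden)
        w = torch.softmax(self.att(h), dim=1)      # temporal attention weights
        context = (w * h).sum(dim=1, keepdim=True) # attentive pooling over time
        context = context.expand(-1, h.size(1), -1)
        return self.out(torch.cat([h, context], dim=-1))  # enhanced magnitude
```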

Towards Intelligibility-Oriented Audio-Visual Speech Enhancement

1 code implementation18 Nov 2021 Tassadaq Hussain, Mandar Gogate, Kia Dashtipour, Amir Hussain

To the best of our knowledge, this is the first work that exploits the integration of AV modalities with an intelligibility-oriented (I-O) loss function for speech enhancement (SE).

Speech Enhancement
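
The entry above concerns training speech enhancement with an intelligibility-oriented loss rather than a point-wise distance. The function below is a greatly simplified stand-in for an STOI-style objective: it correlates clean and enhanced spectra over short temporal segments and averages the result; it is not the paper's exact loss.

```python
# Greatly simplified stand-in for an intelligibility-oriented (STOI-style) loss:
# correlate clean and enhanced spectral envelopes over short temporal segments
# and average, rather than using a point-wise distance. Not the paper's loss.
import torch

def segmental_correlation_loss(enhanced, clean, seg_len=30, eps=1e-8):
    # enhanced, clean: (B, T, F) magnitude spectrograms; seg_len in frames
    n_seg = clean.size(1) // seg_len
    e = enhanced[:, : n_seg * seg_len].reshape(enhanced.size(0), n_seg, seg_len, -1)
    c = clean[:, : n_seg * seg_len].reshape(clean.size(0), n_seg, seg_len, -1)
    e = e - e.mean(dim=2, keepdim=True)
    c = c - c.mean(dim=2, keepdim=True)
    corr = (e * c).sum(dim=2) / (e.norm(dim=2) * c.norm(dim=2) + eps)  # (B, n_seg, F)
    return 1.0 - corr.mean()  # higher segmental correlation ~ higher intelligibility
```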
