Search Results for author: Yashish M. Siriwardena

Found 9 papers, 1 paper with code

A multi-modal approach for identifying schizophrenia using cross-modal attention

no code implementations • 26 Sep 2023 • Gowtham Premananth, Yashish M. Siriwardena, Philip Resnik, Carol Espy-Wilson

This study focuses on how different modalities of human communication can be used to distinguish between healthy controls and subjects with schizophrenia who exhibit strong positive symptoms.
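
The cross-modal attention named in the title can be sketched as standard attention in which one modality supplies the queries and another supplies the keys and values. Below is a minimal PyTorch illustration; the dimensions, modality pairing, and residual wiring are assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        # One modality supplies the queries; the other supplies keys/values.
        # Dimensions and the residual + norm wiring are illustrative assumptions.
        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, query_mod, context_mod):
            # query_mod: (batch, T_q, dim), e.g. audio frames
            # context_mod: (batch, T_k, dim), e.g. text features
            attended, _ = self.attn(query_mod, context_mod, context_mod)
            return self.norm(query_mod + attended)

    audio = torch.randn(2, 100, 256)             # dummy audio-frame embeddings
    text = torch.randn(2, 40, 256)               # dummy text-token embeddings
    fused = CrossModalAttention()(audio, text)   # -> (2, 100, 256)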

Improving Speech Inversion Through Self-Supervised Embeddings and Enhanced Tract Variables

no code implementations • 17 Sep 2023 • Ahmed Adel Attia, Yashish M. Siriwardena, Carol Espy-Wilson

The performance of deep learning models depends significantly on their capacity to encode input features efficiently and decode them into meaningful outputs.

Self-Supervised Learning
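
As a rough illustration of the idea behind this entry, self-supervised speech embeddings can replace hand-crafted acoustic features as input to a speech-inversion regressor. The sketch below feeds frozen HuBERT features into a small regression head; the choice of HuBERT via torchaudio, the head architecture, and the six-tract-variable output are assumptions for illustration, not the paper's setup.

    import torch
    import torchaudio

    # Frozen self-supervised encoder; HuBERT-base via torchaudio is an
    # illustrative choice, not necessarily the paper's feature extractor.
    bundle = torchaudio.pipelines.HUBERT_BASE
    encoder = bundle.get_model().eval()

    # Hypothetical regression head mapping SSL frames to six tract variables.
    tv_head = torch.nn.Sequential(
        torch.nn.Linear(768, 256), torch.nn.ReLU(), torch.nn.Linear(256, 6))

    wave = torch.randn(1, 16000)       # 1 s of dummy 16 kHz audio
    with torch.no_grad():
        feats, _ = encoder.extract_features(wave)
    tvs = tv_head(feats[-1])           # last-layer features -> (1, T, 6)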

Speaker-independent Speech Inversion for Estimation of Nasalance

1 code implementation • 31 May 2023 • Yashish M. Siriwardena, Carol Espy-Wilson, Suzanne Boyce, Mark K. Tiede, Liran Oren

Nasalance is an objective measure, derived from the oral and nasal acoustic signals, that correlates with nasality.
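
The standard nasalance computation is nasal acoustic energy divided by the combined nasal-plus-oral energy, expressed as a percentage. A minimal sketch, assuming mean-square energy over a frame (real nasometry typically band-pass filters both channels first, which this omits):

    import numpy as np

    def nasalance(nasal: np.ndarray, oral: np.ndarray) -> float:
        # Mean-square energy per channel; band-limiting of the two
        # signals, as done in real nasometers, is omitted here.
        e_nasal = float(np.mean(nasal ** 2))
        e_oral = float(np.mean(oral ** 2))
        return 100.0 * e_nasal / (e_nasal + e_oral)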

Learning to Compute the Articulatory Representations of Speech with the MIRRORNET

no code implementations • 29 Oct 2022 • Yashish M. Siriwardena, Carol Espy-Wilson, Shihab Shamma

Most organisms, including humans, function by coordinating and integrating sensory signals with motor actions to survive and accomplish desired tasks.

The Secret Source: Incorporating Source Features to Improve Acoustic-to-Articulatory Speech Inversion

no code implementations • 29 Oct 2022 • Yashish M. Siriwardena, Carol Espy-Wilson

The proposed SI system with the HPRC dataset gains an improvement of close to 28% when the source features are used as additional targets.

Acoustic-to-articulatory Speech Inversion with Multi-task Learning

no code implementations • 27 May 2022 • Yashish M. Siriwardena, Ganesh Sivaraman, Carol Espy-Wilson

Multi-task learning (MTL) frameworks have proven to be effective in diverse speech-related tasks like automatic speech recognition (ASR) and speech emotion recognition.

Automatic Speech Recognition (ASR) +3
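
A generic multi-task learning pattern for speech is a shared encoder with task-specific output heads trained under a weighted joint loss. The sketch below is illustrative only; the auxiliary task, layer sizes, and 0.3 loss weight are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiTaskSI(nn.Module):
        # Shared BiGRU encoder with two heads: tract-variable regression
        # (main task) and a hypothetical auxiliary classification task.
        def __init__(self, in_dim=40, hidden=256, n_tvs=6, n_aux=40):
            super().__init__()
            self.encoder = nn.GRU(in_dim, hidden, batch_first=True,
                                  bidirectional=True)
            self.tv_head = nn.Linear(2 * hidden, n_tvs)
            self.aux_head = nn.Linear(2 * hidden, n_aux)

        def forward(self, x):                  # x: (batch, T, in_dim)
            h, _ = self.encoder(x)             # h: (batch, T, 2*hidden)
            return self.tv_head(h), self.aux_head(h)

    model = MultiTaskSI()
    x = torch.randn(4, 200, 40)                # dummy acoustic frames
    tv_target = torch.randn(4, 200, 6)         # dummy TV targets
    aux_target = torch.randint(0, 40, (4, 200))  # dummy auxiliary labels
    tvs, aux = model(x)
    # Weighted joint loss; the 0.3 auxiliary weight is an arbitrary choice.
    loss = F.mse_loss(tvs, tv_target) \
         + 0.3 * F.cross_entropy(aux.transpose(1, 2), aux_target)
    loss.backward()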

Audio Data Augmentation for Acoustic-to-articulatory Speech Inversion using Bidirectional Gated RNNs

no code implementations • 25 May 2022 • Yashish M. Siriwardena, Ahmed Adel Attia, Ganesh Sivaraman, Carol Espy-Wilson

In this work, we compare and contrast different ways of doing data augmentation and show how this technique improves the performance of articulatory speech inversion not only on noisy speech, but also on clean speech data.

Data Augmentation
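
A common audio augmentation for this kind of robustness work is mixing noise into clean speech at a controlled SNR. A minimal NumPy sketch follows; the mixing recipe and SNR choice are generic assumptions, not necessarily the paper's exact procedure.

    import numpy as np

    def add_noise(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
        # Loop/trim the noise to the speech length, then scale it so the
        # mixture hits the requested signal-to-noise ratio.
        noise = np.resize(noise, speech.shape)
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2) + 1e-12
        scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
        return speech + scale * noise

    clean = np.random.randn(16000)     # dummy 1 s utterance
    babble = np.random.randn(8000)     # dummy noise clip
    noisy = add_noise(clean, babble, snr_db=10.0)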

The Mirrornet: Learning Audio Synthesizer Controls Inspired by Sensorimotor Interaction

no code implementations • 12 Oct 2021 • Yashish M. Siriwardena, Guilhem Marion, Shihab Shamma

Experiments to understand the sensorimotor neural interactions in the human cortical speech system support the existence of a bidirectional flow of interactions between the auditory and motor regions.

Autonomous Vehicles

Multimodal Approach for Assessing Neuromotor Coordination in Schizophrenia Using Convolutional Neural Networks

no code implementations • 9 Oct 2021 • Yashish M. Siriwardena, Chris Kitchen, Deanna L. Kelly, Carol Espy-Wilson

This study investigates speech articulatory coordination in schizophrenia subjects exhibiting strong positive symptoms (e.g., hallucinations and delusions), using two distinct channel-delay correlation methods.
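
In its generic form, a channel-delay correlation analysis correlates each signal channel with time-delayed copies of every channel and summarizes the resulting matrix by its eigenspectrum, which indexes how tightly the channels are coupled. A minimal sketch, with the delay spacing chosen arbitrarily:

    import numpy as np

    def channel_delay_corr(signals: np.ndarray, delays=range(0, 50, 7)):
        # signals: (channels, T) time series, e.g. vocal tract variables.
        # Stack time-delayed copies of every channel, correlate them all,
        # and summarize coupling by the eigenspectrum of the matrix.
        delayed = [np.roll(ch, d) for ch in signals for d in delays]
        corr = np.corrcoef(np.asarray(delayed))
        eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
        return corr, eigvals

    tvs = np.random.randn(6, 500)          # dummy articulatory trajectories
    corr, eigs = channel_delay_corr(tvs)   # eigs indexes coordination strength
    # Note: np.roll wraps around; a careful implementation would trim edges.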
