no code implementations • 19 Mar 2025 • Isabella Lenz, Yu Rong, Daniel Bliss, Julie Liss, Visar Berisha
Millimeter Wave (mmWave) radar has emerged as a promising modality for speech sensing, offering advantages over traditional microphones.
no code implementations • 2 Feb 2025 • Si-Ioi Ng, Pranav S. Ambadi, Kimberly D. Mueller, Julie Liss, Visar Berisha
These results highlight the potential of the automated approach for extracting spatio-semantic features in developing clinical speech models for cognitive impairment assessment.
no code implementations • 27 Jan 2025 • Eunjung Yeo, Julie Liss, Visar Berisha, David Mortensen
Purpose: Speech intelligibility is a critical outcome in the assessment and management of dysarthria, yet most research and clinical practices have focused on English, limiting their applicability across languages.
no code implementations • 29 Oct 2024 • Si-Ioi Ng, Lingfeng Xu, Ingo Siegert, Nicholas Cummins, Nina R. Benway, Julie Liss, Visar Berisha
Specifically, this paper will cover the design of speech elicitation tasks and protocols most appropriate for different clinical conditions; collection of data and verification of hardware; development and validation of speech representations designed to measure clinical constructs of interest; development of reliable and robust clinical prediction models; and ethical and participant considerations for clinical speech AI.
no code implementations • 4 Mar 2023 • Thomas B. Kaufmann, Mehdi Foroogozar, Julie Liss, Visar Berisha
Assistive listening systems (ALSs) dramatically increase speech intelligibility and reduce listening effort.
1 code implementation • 17 Nov 2022 • Jianwei Zhang, Julie Liss, Suren Jayasuriya, Visar Berisha
In this paper, we propose a deep learning framework for generating acoustic feature embeddings sensitive to vocal quality and robust across different corpora.
1 code implementation • 17 Oct 2022 • Sean Kinahan, Julie Liss, Visar Berisha
The DIVA model is a computational model of speech motor control that combines a simulation of the brain regions responsible for speech production with a model of the human vocal tract.
no code implementations • 26 Nov 2019 • Michael Saxon, Ayush Tripathi, Yishan Jiao, Julie Liss, Visar Berisha
To demonstrate that the features derived from these acoustic models are specific to hypernasal speech, we evaluate them across different dysarthria corpora.