Search Results for author: Julie Liss

Found 8 papers, 2 papers with code

A Speech Production Model for Radar: Connecting Speech Acoustics with Radar-Measured Vibrations

no code implementations • 19 Mar 2025 • Isabella Lenz, Yu Rong, Daniel Bliss, Julie Liss, Visar Berisha

Millimeter Wave (mmWave) radar has emerged as a promising modality for speech sensing, offering advantages over traditional microphones.

Speech Enhancement

Automated Extraction of Spatio-Semantic Graphs for Identifying Cognitive Impairment

no code implementations • 2 Feb 2025 • Si-Ioi Ng, Pranav S. Ambadi, Kimberly D. Mueller, Julie Liss, Visar Berisha

These results highlight the potential of the automated approach for extracting spatio-semantic features in developing clinical speech models for cognitive impairment assessment.

Applications of Artificial Intelligence for Cross-language Intelligibility Assessment of Dysarthric Speech

no code implementations • 27 Jan 2025 • Eunjung Yeo, Julie Liss, Visar Berisha, David Mortensen

Purpose: Speech intelligibility is a critical outcome in the assessment and management of dysarthria, yet most research and clinical practices have focused on English, limiting their applicability across languages.

A Tutorial on Clinical Speech AI Development: From Data Collection to Model Validation

no code implementations • 29 Oct 2024 • Si-Ioi Ng, Lingfeng Xu, Ingo Siegert, Nicholas Cummins, Nina R. Benway, Julie Liss, Visar Berisha

Specifically, this paper will cover the design of speech elicitation tasks and protocols most appropriate for different clinical conditions, collection of data and verification of hardware, development and validation of speech representations designed to measure clinical constructs of interest, development of reliable and robust clinical prediction models, and ethical and participant considerations for clinical speech AI.

Diagnostic

Requirements for Mass Adoption of Assistive Listening Technology by the General Public

no code implementations • 4 Mar 2023 • Thomas B. Kaufmann, Mehdi Foroogozar, Julie Liss, Visar Berisha

Assistive listening systems (ALSs) dramatically increase speech intelligibility and reduce listening effort.

Robust Vocal Quality Feature Embeddings for Dysphonic Voice Detection

1 code implementation • 17 Nov 2022 • Jianwei Zhang, Julie Liss, Suren Jayasuriya, Visar Berisha

In this paper, we propose a deep learning framework for generating acoustic feature embeddings sensitive to vocal quality and robust across different corpora.

Cross-corpus

TorchDIVA: An Extensible Computational Model of Speech Production built on an Open-Source Machine Learning Library

1 code implementation • 17 Oct 2022 • Sean Kinahan, Julie Liss, Visar Berisha

The DIVA model is a computational model of speech motor control that combines a simulation of the brain regions responsible for speech production with a model of the human vocal tract.

Robust Estimation of Hypernasality in Dysarthria with Acoustic Model Likelihood Features

no code implementations • 26 Nov 2019 • Michael Saxon, Ayush Tripathi, Yishan Jiao, Julie Liss, Visar Berisha

To demonstrate that the features derived from these acoustic models are specific to hypernasal speech, we evaluate them across different dysarthria corpora.

BIG-bench Machine Learning
