Search Results for author: Nancy L. McElwain

Found 4 papers, 1 paper with code

Sound Tagging in Infant-centric Home Soundscapes

no code implementations · 25 Jun 2024 · Mohammad Nur Hossain Khan, Jialu Li, Nancy L. McElwain, Mark Hasegawa-Johnson, Bashima Islam

Further, many of these works ignore infants or young children in the environment, or rely on data collected from only a single family, where noise from a fixed sound source may be moderate at the infant's position or vice versa.

Data Augmentation · Event Detection
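The noise issue raised above is one place where data augmentation is commonly applied: mixing household noise into clips at varied signal-to-noise ratios so a sound tagger sees more acoustic conditions than a single home provides. Below is a minimal sketch of that kind of augmentation; the file names, noise source, and SNR range are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: noise-mixing augmentation for sound tagging.
# File names and the SNR range are hypothetical, not from the paper.
import numpy as np
import soundfile as sf

def mix_at_snr(clip: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise segment into a clip at a target signal-to-noise ratio."""
    noise = np.resize(noise, clip.shape)            # loop/trim noise to clip length
    clip_power = np.mean(clip ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clip_power / (noise_power * 10 ** (snr_db / 10)))
    return clip + scale * noise

clip, sr = sf.read("infant_clip.wav")               # hypothetical input files
noise, _ = sf.read("household_noise.wav")
augmented = mix_at_snr(clip, noise, snr_db=np.random.uniform(0, 20))
sf.write("infant_clip_augmented.wav", augmented, sr)
```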

Analysis of Self-Supervised Speech Models on Children's Speech and Infant Vocalizations

no code implementations · 10 Feb 2024 · Jialu Li, Mark Hasegawa-Johnson, Nancy L. McElwain

To understand why self-supervised learning (SSL) models have empirically achieved strong performance on several speech-processing downstream tasks, numerous studies have focused on analyzing the encoded information of the SSL layer representations in adult speech.

Self-Supervised Learning
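Layer-wise analyses of SSL models generally start by extracting the hidden states of every transformer layer for a given utterance. A minimal sketch of that extraction step with a pretrained wav2vec 2.0 checkpoint follows; the checkpoint name and the placeholder input are assumptions, and the paper's probing methodology is not reproduced here.

```python
# Minimal sketch: layer-wise hidden-state extraction from wav2vec 2.0.
# Checkpoint and dummy input are assumptions for illustration only.
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

waveform = torch.randn(1, 16000)                    # 1 s of 16 kHz audio (placeholder)
with torch.no_grad():
    outputs = model(waveform, output_hidden_states=True)

# One tensor per transformer layer (plus the feature-encoder output),
# each of shape (batch, frames, hidden_dim); these per-layer representations
# are what layer-wise probing studies typically analyze.
for i, layer in enumerate(outputs.hidden_states):
    print(i, layer.shape)
```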

Towards Robust Family-Infant Audio Analysis Based on Unsupervised Pretraining of Wav2vec 2.0 on Large-Scale Unlabeled Family Audio

no code implementations · 21 May 2023 · Jialu Li, Mark Hasegawa-Johnson, Nancy L. McElwain

To perform automatic family audio analysis, past studies have collected recordings using phone, video, or audio-only recording devices like LENA, investigated supervised learning methods, and used or fine-tuned general-purpose embeddings learned from large pretrained models.

Speaker Diarization
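One common way to use pretrained wav2vec 2.0 embeddings downstream is to mean-pool them per segment and train a lightweight classifier over family speaker labels. The sketch below shows that generic linear-probe setup; the label set, placeholder data, and classifier choice are assumptions rather than the paper's pipeline.

```python
# Minimal sketch: mean-pooled wav2vec 2.0 embeddings + a linear probe
# for family speaker labels. Labels and data here are hypothetical.
import torch
from transformers import Wav2Vec2Model
from sklearn.linear_model import LogisticRegression

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def embed(waveform: torch.Tensor) -> torch.Tensor:
    """Return a single mean-pooled embedding for one utterance."""
    with torch.no_grad():
        hidden = model(waveform.unsqueeze(0)).last_hidden_state   # (1, frames, dim)
    return hidden.mean(dim=1).squeeze(0)                          # (dim,)

# Hypothetical labeled segments: (waveform, speaker category).
segments = [(torch.randn(16000), "infant"), (torch.randn(16000), "mother")]
X = torch.stack([embed(w) for w, _ in segments]).numpy()
y = [label for _, label in segments]
clf = LogisticRegression(max_iter=1000).fit(X, y)
```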

Visualizations of Complex Sequences of Family-Infant Vocalizations Using Bag-of-Audio-Words Approach Based on Wav2vec 2.0 Features

1 code implementation · 29 Mar 2022 · Jialu Li, Mark Hasegawa-Johnson, Nancy L. McElwain

We demonstrate that our high-quality visualizations capture major types of family vocalization interactions, in categories indicative of mental, behavioral, and developmental health, for both labeled and unlabeled LB audio.

Speaker Diarization
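A bag-of-audio-words representation clusters frame-level features into a codebook and describes each clip by its codeword histogram, which can then be projected to 2-D for visualization. The following sketch shows that generic pipeline over wav2vec 2.0 frame features; the checkpoint, codebook size, and placeholder clips are assumptions, not the paper's configuration.

```python
# Minimal sketch: bag-of-audio-words over wav2vec 2.0 frame features,
# then a 2-D projection of per-clip histograms for visualization.
# Checkpoint, cluster count, and random input clips are placeholders.
import numpy as np
import torch
from transformers import Wav2Vec2Model
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def frame_features(waveform: torch.Tensor) -> np.ndarray:
    """Return per-frame wav2vec 2.0 features for one clip."""
    with torch.no_grad():
        return model(waveform.unsqueeze(0)).last_hidden_state.squeeze(0).numpy()

clips = [torch.randn(16000) for _ in range(8)]      # placeholder audio clips
feats = [frame_features(c) for c in clips]

# Learn a small codebook of "audio words" over all frames.
codebook = KMeans(n_clusters=16, n_init=10).fit(np.vstack(feats))

# Represent each clip as a normalized codeword histogram.
histograms = np.stack([
    np.bincount(codebook.predict(f), minlength=16).astype(float) / len(f)
    for f in feats
])

# Project the histograms to 2-D for plotting.
coords = TSNE(n_components=2, perplexity=3).fit_transform(histograms)
print(coords.shape)
```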
