Search Results for author: Alejandrina Cristia

Found 9 papers, 8 papers with code

BabySLM: language-acquisition-friendly benchmark of self-supervised spoken language models

1 code implementation • 2 Jun 2023 • Marvin Lavechin, Yaya Sy, Hadrien Titeux, María Andrea Cruz Blandón, Okko Räsänen, Hervé Bredin, Emmanuel Dupoux, Alejandrina Cristia

Self-supervised techniques for learning speech representations have been shown to develop linguistic competence from exposure to speech without the need for human labels.

Benchmarking, Language Acquisition

Analysing the Impact of Audio Quality on the Use of Naturalistic Long-Form Recordings for Infant-Directed Speech Research

1 code implementation • 3 May 2023 • María Andrea Cruz Blandón, Alejandrina Cristia, Okko Räsänen

Our results show that using naturalistic speech data of modest or high audio quality leads to largely similar conclusions about infant-directed speech (IDS) and adult-directed speech (ADS), both in acoustic analyses and in modelling experiments.

Language Acquisition, Self-Supervised Learning

An open-source voice type classifier for child-centered daylong recordings

1 code implementation • 26 May 2020 • Marvin Lavechin, Ruben Bousbib, Hervé Bredin, Emmanuel Dupoux, Alejandrina Cristia

Spontaneous conversations in real-world settings, such as those found in child-centered recordings, have been shown to be amongst the most challenging audio to process.

Language Acquisition, Vocal Bursts Type Prediction

The Second DIHARD Diarization Challenge: Dataset, task, and baselines

1 code implementation • 18 Jun 2019 • Neville Ryant, Kenneth Church, Christopher Cieri, Alejandrina Cristia, Jun Du, Sriram Ganapathy, Mark Liberman

This paper introduces the second DIHARD challenge, part of a series of speaker diarization challenges intended to improve the robustness of diarization systems to variation in recording equipment, noise conditions, and conversational domain.

Action Detection, Activity Detection (+5 more)

Are words easier to learn from infant- than adult-directed speech? A quantitative corpus-based investigation

no code implementations • 23 Dec 2017 • Adriana Guevara-Rukoz, Alejandrina Cristia, Bogdan Ludusan, Roland Thiollière, Andrew Martin, Reiko Mazuka, Emmanuel Dupoux

At the acoustic level, we show that, as has been documented previously for phonemes, the realizations of words are more variable and less discriminable in IDS than in ADS.
