LREC 2022 • Karen Jones, Kevin Walker, Christopher Caruso, Jonathan Wright, Stephanie Strassel
The WeCanTalk (WCT) Corpus is a new multi-language, multi-modal resource for speaker recognition.
LREC 2020 • Karen Jones, Stephanie Strassel, Kevin Walker, Jonathan Wright
Speakers used a variety of handsets, including landline and mobile devices, and made VoIP calls from tablets or computers.
LREC 2020 • Dana Delgado, Kevin Walker, Stephanie Strassel, Karen Jones, Christopher Caruso, David Graff
We introduce a new resource, the SAFE-T (Speech Analysis for Emergency Response Technology) Corpus, designed to simulate first-responder communications by inducing high vocal effort and urgent speech with situational background noise in a game-based collection protocol.
LREC 2016 • Karen Jones, Stephanie Strassel, Kevin Walker, David Graff, Jonathan Wright
The Multi-language Speech (MLS) Corpus supports NIST's Language Recognition Evaluation series by providing new conversational telephone speech and broadcast narrowband data in 20 languages/dialects.
LREC 2014 • David Graff, Kevin Walker, Stephanie Strassel, Xiaoyi Ma, Karen Jones, Ann Sawyer
The DARPA RATS program was established to foster development of language technology systems that can perform well on speaker-to-speaker communications over radio channels exhibiting a wide range of signal variability and acoustic degradation.