1 code implementation • 25 Nov 2020 • Martijn Bartelds, Wietse de Vries, Faraz Sanal, Caitlin Richter, Mark Liberman, Martijn Wieling
We show that speech representations extracted from a specific type of neural model (i.e., Transformers) match human perception more closely than two earlier approaches based on phonetic transcriptions and MFCC-based acoustic features.
no code implementations • LREC 2020 • Justin Mott, Ann Bies, Stephanie Strassel, Jordan Kodner, Caitlin Richter, Hongzhi Xu, Mitchell Marcus
This paper describes a new morphology resource created by Linguistic Data Consortium and the University of Pennsylvania for the DARPA LORELEI Program.
no code implementations • TACL 2017 • Caitlin Richter, Naomi H. Feldman, Harini Salgado, Aren Jansen
We introduce a method for measuring the correspondence between low-level speech features and human perception, using a cognitive model of speech perception implemented directly on speech recordings.