no code implementations • 26 Oct 2022 • Camille Noufi, Jonathan Berger, Karen J. Parker, Daniel L. Bowling
In this paper, we propose a method for removing linguistic information from speech for the purpose of isolating paralinguistic indicators of affect.
no code implementations • 9 Sep 2022 • Camille Noufi, Adam C. Lammert, Daryush D. Mehta, James R. Williamson, Gregory Ciccarelli, Douglas Sturim, Jordan R. Green, Thomas F. Quatieri, Thomas F. Campbell
Recommendations for common outcome measures following pediatric traumatic brain injury (TBI) support the integration of instrumental measurements alongside perceptual assessment in recovery and treatment plans.
no code implementations • 9 Sep 2022 • Camille Noufi, Dejan Markovic, Peter Dodds
This virtual array is used to measure and encode the high-resolution directivity pattern of the speech signal as it evolves dynamically with natural speech and movement.
3 code implementations • 6 Mar 2022 • Joseph Turian, Jordie Shier, Humair Raj Khan, Bhiksha Raj, Björn W. Schuller, Christian J. Steinmetz, Colin Malloy, George Tzanetakis, Gissel Velarde, Kirk McNally, Max Henry, Nicolas Pinto, Camille Noufi, Christian Clough, Dorien Herremans, Eduardo Fonseca, Jesse Engel, Justin Salamon, Philippe Esling, Pranay Manocha, Shinji Watanabe, Zeyu Jin, Yonatan Bisk
The aim of the HEAR benchmark is to develop a general-purpose audio representation that provides a strong basis for learning in a wide variety of tasks and scenarios.
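HEAR submissions expose a common embedding API (functions such as `get_scene_embeddings` that map a batch of audio clips to fixed-size vectors). As an illustration of that interface only — not any actual HEAR entry — a toy numpy stand-in might look like:

```python
import numpy as np

def get_scene_embeddings(audio_batch, n_bins=64):
    """Toy scene-level embedding: coarse log-compressed spectral energy bands.
    Illustrative stand-in for a HEAR-style general-purpose representation;
    real entries use learned models (e.g. large pretrained audio networks)."""
    embeddings = []
    for audio in audio_batch:
        spec = np.abs(np.fft.rfft(np.asarray(audio).reshape(-1)))  # magnitude spectrum
        # Pool the spectrum into n_bins coarse bands and log-compress
        bands = np.array_split(spec, n_bins)
        emb = np.log1p(np.array([b.mean() for b in bands]))
        embeddings.append(emb)
    return np.stack(embeddings)  # shape: (batch, n_bins)

# Usage: two one-second clips at 16 kHz -> a (2, 64) embedding matrix
batch = [np.random.randn(16000), np.random.randn(16000)]
embs = get_scene_embeddings(batch)
```

The fixed output shape per clip is the key contract: downstream task heads can then be trained on these vectors without knowing anything about the underlying model.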
no code implementations • 17 Jul 2020 • Camille Noufi, Prateek Verma
We show how contextual representations of short sung vocal lines can be implicitly learned from fundamental frequency ($F_0$) and thus be used as a meaningful feature space for downstream Music Information Retrieval (MIR) tasks.
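A pipeline like this starts by extracting an $F_0$ contour from the sung audio. The paper does not specify an extractor; production systems typically use pYIN or CREPE, but the idea can be sketched with a simple autocorrelation-based estimator on a single frame (a hypothetical minimal example, not the authors' method):

```python
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    """Autocorrelation-based F0 estimate for one frame (illustrative only;
    real MIR pipelines use robust trackers such as pYIN or CREPE)."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)          # smallest period to consider
    lag_max = int(sr / fmin)          # largest period to consider
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / lag                   # F0 in Hz

# Usage: a 50 ms frame of a 220 Hz sine should recover ~220 Hz
sr = 16000
t = np.arange(int(0.05 * sr)) / sr
frame = np.sin(2 * np.pi * 220.0 * t)
f0 = estimate_f0(frame, sr)
```

Running such an estimator over successive frames yields the $F_0$ sequence from which contextual representations of the vocal line can then be learned.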