LREC 2014 • Dietmar Schabus, Michael Pucher, Phil Hoole
In this paper, we describe and analyze a corpus of speech data that we have recorded simultaneously in multiple modalities: facial motion via optical motion capture, tongue motion via electromagnetic articulography, as well as conventional video and high-quality audio.
LREC 2012 • Dietmar Schabus, Michael Pucher, Gregor Hofer
We have created a synchronous corpus of acoustic and 3D facial-marker data from multiple speakers for adaptive audio-visual text-to-speech synthesis.