no code implementations • 4 Jun 2022 • Munender Varshney, Ravindra Yadav, Vinay P. Namboodiri, Rajesh M Hegde
This work aims to understand the correlation/mapping between speech and the sequence of lip movements of individual speakers in an unconstrained, large-vocabulary setting.
no code implementations • 2 May 2022 • Sanjana Sinha, Sandika Biswas, Ravindra Yadav, Brojeshwar Bhowmick
We propose a graph convolutional neural network that uses speech content features, along with an independent emotion input, to generate emotion- and speech-induced motion on a facial geometry-aware landmark representation.
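The idea of propagating a shared speech/emotion condition over a landmark graph can be sketched as follows. This is a minimal illustration, not the paper's architecture: the adjacency, feature dimensions, landmark count, and the simple normalized-adjacency propagation rule are all assumptions chosen for clarity.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(X, A_norm, W):
    """One graph-convolution step: ReLU(A_norm @ X @ W)."""
    return np.maximum(A_norm @ X @ W, 0.0)

rng = np.random.default_rng(0)
n_landmarks = 68  # a common facial-landmark count (assumed)

# Hypothetical undirected landmark connectivity.
A = (rng.random((n_landmarks, n_landmarks)) < 0.1).astype(float)
A = np.maximum(A, A.T)
A_norm = normalize_adjacency(A)

speech_feat = rng.standard_normal(32)   # per-frame speech-content feature (assumed dim)
emotion_feat = rng.standard_normal(8)   # independent emotion embedding (assumed dim)

# Condition every landmark node on the shared speech + emotion inputs.
cond = np.concatenate([speech_feat, emotion_feat])
X = np.hstack([
    rng.standard_normal((n_landmarks, 2)),   # (x, y) landmark coordinates
    np.tile(cond, (n_landmarks, 1)),         # broadcast condition to each node
])

W = rng.standard_normal((X.shape[1], 2)) * 0.1
delta = gcn_layer(X, A_norm, W)  # per-landmark 2-D motion offsets
print(delta.shape)               # (68, 2)
```

A real model would stack several such layers and train the weights against ground-truth landmark motion; here one untrained layer just shows how the conditioned node features flow through the graph.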
1 code implementation • 21 Nov 2020 • Ravindra Yadav, Ashish Sardana, Vinay P Namboodiri, Rajesh M Hegde
Indeed, just having the ability to generate a single talking face would make a system almost robotic in nature.
no code implementations • 14 Nov 2020 • Ravindra Yadav, Ashish Sardana, Vinay P Namboodiri, Rajesh M Hegde
Understanding the relationship between the auditory and visual signals is crucial for many different applications ranging from computer-generated imagery (CGI) and video editing automation to assisting people with hearing or visual impairments.