Search Results for author: Patrik Jonell

Found 7 papers, 2 papers with code

Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings

1 code implementation · 11 Jun 2020 · Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, Jonas Beskow

Our contributions are: a) a method for feature extraction from multi-party video and speech recordings, resulting in a representation that allows for independent control and manipulation of expression and speech articulation in a 3D avatar; b) an extension to MoGlow, a recent motion-synthesis method based on normalizing flows, to also take multi-modal signals from the interlocutor as input and subsequently output interlocutor-aware facial gestures; and c) a subjective evaluation assessing the use and relative importance of the input modalities.

Motion Synthesis

Machine Learning and Social Robotics for Detecting Early Signs of Dementia

no code implementations · 5 Sep 2017 · Patrik Jonell, Joseph Mendelson, Thomas Storskog, Göran Hagman, Per Östberg, Iolanda Leite, Taras Kucherenko, Olga Mikheeva, Ulrika Akenine, Vesna Jelic, Alina Solomon, Jonas Beskow, Joakim Gustafson, Miia Kivipelto, Hedvig Kjellström

This paper presents the EACare project, an ambitious multi-disciplinary collaboration that aims to develop an embodied system capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease.

BIG-bench Machine Learning
