2 code implementations • 10 Jan 2024 • Siyang Song, Micol Spitale, Cheng Luo, Cristina Palmero, German Barquero, Hengde Zhu, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth Andre, Hatice Gunes
In dyadic interactions, humans communicate their intentions and state of mind using verbal and non-verbal cues, where multiple different facial reactions might be appropriate in response to a specific speaker behaviour.
2 code implementations • 11 Jun 2023 • Siyang Song, Micol Spitale, Cheng Luo, German Barquero, Cristina Palmero, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth Andre, Hatice Gunes
The Multi-modal Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions.
no code implementations • 3 Jul 2022 • Mani Kumar Tellamekala, Ömer Sümer, Björn W. Schuller, Elisabeth André, Timo Giesbrecht, Michel Valstar
We also evaluate how 3D face shapes perform on AU intensity estimation on the BP4D and DISFA datasets, and report that 3D face features are on par with 2D appearance features for AUs 4, 6, 10, 12, and 25, but not for the entire set of AUs.
no code implementations • 12 Jun 2022 • Mani Kumar Tellamekala, Shahin Amiriparian, Björn W. Schuller, Elisabeth André, Timo Giesbrecht, Michel Valstar
In particular, we impose Calibration and Ordinal Ranking constraints on the variance vectors of audiovisual latent distributions.
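The ordinal-ranking idea can be illustrated with a minimal sketch (the function name and margin hinge are illustrative assumptions, not the paper's formulation): for any two samples, the one with the larger prediction error should be assigned the larger predicted variance, and violations of that ordering are penalised.

```python
# Hypothetical sketch of a pairwise ordinal-ranking penalty on predicted
# variances: if sample i has a larger error than sample j, its predicted
# variance should also be larger, by at least `margin`.
def ordinal_ranking_penalty(errors, variances, margin=0.0):
    penalty = 0.0
    n = len(errors)
    for i in range(n):
        for j in range(n):
            if errors[i] > errors[j]:
                # hinge loss on the violated ordering
                penalty += max(0.0, margin + variances[j] - variances[i])
    return penalty

# Well calibrated: larger errors get larger variances -> zero penalty.
print(ordinal_ranking_penalty([0.1, 0.5, 0.9], [0.2, 0.4, 0.8]))  # 0.0
# Mis-ranked: the largest error has the smallest variance -> positive penalty.
print(ordinal_ranking_penalty([0.1, 0.5, 0.9], [0.8, 0.4, 0.2]))
```

A calibration constraint would additionally tie the magnitude of each variance to the magnitude of its error; the ranking term above only constrains their relative order.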
no code implementations • 24 Mar 2022 • Vincent Karas, Mani Kumar Tellamekala, Adria Mallol-Ragolta, Michel Valstar, Björn W. Schuller
To clearly understand the performance differences between recurrent and attention models in audiovisual affect recognition, we present a comprehensive evaluation of fusion models based on LSTM-RNNs, self-attention and cross-modal attention, trained for valence and arousal estimation.
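The cross-modal attention variant compared here can be sketched as scaled dot-product attention in which queries from one modality attend over keys and values from the other. This is a minimal pure-Python illustration under assumed toy dimensions, not the evaluated models:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_modal_attention(queries, keys, values):
    """Each query (e.g. an audio frame) attends over keys/values drawn from
    the other modality (e.g. visual frames); rows are feature vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

audio = [[1.0, 0.0]]               # one audio-frame query (toy 2-D features)
video = [[1.0, 0.0], [0.0, 1.0]]   # two visual frames as keys and values
fused = cross_modal_attention(audio, video, video)
print(fused)  # attended visual summary, weighted toward the first frame
```

An LSTM-RNN fusion model would instead concatenate the per-frame modality features and process them sequentially; the attention formulation lets every audio frame directly weight every visual frame.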
no code implementations • 30 Nov 2021 • Jiaqi Xu, Siyang Song, Keerthy Kusumam, Hatice Gunes, Michel Valstar
The short-term depressive behaviour modelling stage first uses deep learning to extract depression-related facial behavioural features at multiple short temporal scales, where a proposed Depression Feature Enhancement (DFE) module enhances the depression-related cues at all temporal scales and removes non-depression noise.
no code implementations • 26 Oct 2021 • Siyang Song, Zilong Shao, Shashank Jaiswal, Linlin Shen, Michel Valstar, Hatice Gunes
This approach builds on the following two findings in cognitive science: (i) human cognition partially determines expressed behaviour and is directly linked to true personality traits; and (ii) in dyadic interactions, individuals' nonverbal behaviours are influenced by their conversational partner's behaviours.
no code implementations • CVPR 2021 • Enrique Sanchez, Mani Kumar Tellamekala, Michel Valstar, Georgios Tzimiropoulos
Temporal context is key to the recognition of expressions of emotion.
no code implementations • 14 Apr 2020 • Ioanna Ntinou, Enrique Sanchez, Adrian Bulat, Michel Valstar, Georgios Tzimiropoulos
Action Units (AUs) are geometrically-based atomic facial muscle movements known to produce appearance changes at specific facial locations.
1 code implementation • 14 Apr 2020 • Enrique Sanchez, Michel Valstar
To the best of our knowledge, we are the first to propose a loss to overcome the limitation of the cycle consistency loss, and the first to propose an ''in-the-wild'' landmark guided synthesis approach.
no code implementations • 21 Jan 2020 • Joy O. Egede, Siyang Song, Temitayo A. Olugbade, Chongyang Wang, Amanda Williams, Hongying Meng, Min Aung, Nicholas D. Lane, Michel Valstar, Nadia Bianchi-Berthouze
The EmoPain 2020 Challenge is the first international competition aimed at creating a uniform platform for the comparison of machine learning and multimedia processing methods of automatic chronic pain assessment from human expressive behaviour, and also the identification of pain-related behaviours.
no code implementations • 10 Jul 2019 • Fabien Ringeval, Björn Schuller, Michel Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Messner, Siyang Song, Shuo Liu, Ziping Zhao, Adria Mallol-Ragolta, Zhao Ren, Mohammad Soleymani, Maja Pantic
The Audio/Visual Emotion Challenge and Workshop (AVEC 2019) "State-of-Mind, Detecting Depression with AI, and Cross-cultural Affect Recognition" is the ninth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual health and emotion analysis, with all participants competing strictly under the same conditions.
no code implementations • 4 Apr 2019 • Siyang Song, Enrique Sánchez-Lozano, Linlin Shen, Alan Johnston, Michel Valstar
We present a novel approach to capture multiple scales of such temporal dynamics, with an application to facial Action Unit (AU) intensity estimation and dimensional affect estimation.
1 code implementation • 8 Nov 2018 • Enrique Sanchez, Michel Valstar
To show this is effective, we incorporate the triple consistency loss into the training of a new landmark-guided face to face synthesis, where, contrary to previous works, the generated images can simultaneously undergo a large transformation in both expression and pose.
1 code implementation • 9 May 2018 • Enrique Sanchez-Lozano, Georgios Tzimiropoulos, Michel Valstar
Contrary to previous works that try to learn an unsupervised representation of the Action Unit regions, we propose to directly and jointly estimate all AU intensities through heatmap regression, along with the location in the face where they cause visible changes.
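The heatmap-regression idea of jointly encoding where an AU acts and how intense it is can be sketched as follows (grid sizes, sigma, and function names are illustrative assumptions): intensity is stored as the peak amplitude of a 2-D Gaussian rendered at the AU's facial location, so decoding the argmax recovers both quantities at once.

```python
import math

def render_heatmap(h, w, cx, cy, intensity, sigma=2.0):
    """Encode an AU as a 2-D Gaussian centred at (cx, cy) whose peak
    amplitude equals the AU intensity (here on a 0-5 scale)."""
    return [[intensity * math.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                                  / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def decode(heatmap):
    """Recover both the AU location (argmax) and its intensity (peak value)."""
    best = max((v, x, y) for y, row in enumerate(heatmap)
               for x, v in enumerate(row))
    return best[1], best[2], best[0]

hm = render_heatmap(16, 16, cx=5, cy=9, intensity=3.0)
print(decode(hm))  # (5, 9, 3.0)
```

A network regressing such heatmaps for all AUs at once can therefore share spatial features across AUs while still producing per-AU intensities.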
1 code implementation • 3 May 2018 • Siyang Song, Shuimei Zhang, Björn Schuller, Linlin Shen, Michel Valstar
The performance of speaker-related systems usually degrades heavily in practical applications largely due to the presence of background noise.
no code implementations • 17 Jan 2017 • Joy Egede, Michel Valstar, Brais Martinez
Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession.
no code implementations • 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2016 • Sergio Escalera, Mercedes Torres Torres, Brais Martínez, Xavier Baró, Hugo Jair Escalante, Isabelle Guyon, Georgios Tzimiropoulos, Ciprian Corneanu, Marc Oliu, Mohammad Ali Bagheri, Michel Valstar
A custom-built application was used to collect and label data about the apparent age of people (as opposed to the real age).
Ranked #2 on Gender Prediction on FotW Gender
no code implementations • 7 Dec 2016 • Shashank Jaiswal, Michel Valstar, Alinda Gillott, David Daley
In this work, we present a novel methodology to aid diagnostic predictions about the presence/absence of ADHD and ASD by automatic visual analysis of a person's behaviour.
no code implementations • 7 Dec 2016 • Enrique Sánchez-Lozano, Georgios Tzimiropoulos, Brais Martinez, Fernando de la Torre, Michel Valstar
This paper presents a Functional Regression solution to the least squares problem, which we coin Continuous Regression, resulting in the first real-time incremental face tracker.
no code implementations • 30 Sep 2016 • Aaron Jackson, Michel Valstar, Georgios Tzimiropoulos
This paper proposes a CNN cascade for semantic part segmentation guided by pose-specific information encoded in terms of a set of landmarks (or keypoints).
no code implementations • 3 Aug 2016 • Enrique Sánchez-Lozano, Brais Martinez, Georgios Tzimiropoulos, Michel Valstar
We then derive the incremental learning updates for CCR (iCCR) and show that it is an order of magnitude faster than standard incremental learning for cascaded regression, bringing the time required for the update from seconds down to a fraction of a second, thus enabling real-time tracking.
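Why an incremental update can be orders of magnitude cheaper than refitting is easiest to see in a scalar recursive-least-squares sketch (this is an illustrative analogue, not the paper's iCCR equations): each new sample adjusts the model in O(1) instead of re-solving over all data seen so far.

```python
# Illustrative scalar recursive least squares: each streamed (x, y) pair
# updates the coefficient and the inverse-information term in O(1).
class ScalarRLS:
    def __init__(self):
        self.theta = 0.0   # regression coefficient
        self.p = 1e6       # inverse "information" (large = uninformed prior)

    def update(self, x, y):
        gain = self.p * x / (1.0 + self.p * x * x)
        self.theta += gain * (y - self.theta * x)
        self.p -= gain * x * self.p

model = ScalarRLS()
for x in [1.0, 2.0, 3.0, -1.0, 0.5]:
    model.update(x, 2.0 * x)       # stream samples from y = 2x
print(round(model.theta, 4))       # converges to 2.0
```

In the multivariate case the same rank-one structure (via the Sherman-Morrison identity) keeps each update far cheaper than retraining a cascaded regressor from scratch, which is the property the iCCR updates exploit.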
no code implementations • 5 May 2016 • Michel Valstar, Jonathan Gratch, Bjorn Schuller, Fabien Ringeval, Denis Lalanne, Mercedes Torres Torres, Stefan Scherer, Guiota Stratou, Roddy Cowie, Maja Pantic
The Audio/Visual Emotion Challenge and Workshop (AVEC 2016) "Depression, Mood and Emotion" will be the sixth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual and physiological depression and emotion analysis, with all participants competing strictly under the same conditions.
no code implementations • ICCV 2015 • Xiaomeng Wang, Michel Valstar, Brais Martinez, Muhammad Haris Khan, Tony Pridmore
This paper proposes a novel approach to part-based tracking by replacing local matching of an appearance model by direct prediction of the displacement between local image patches and part locations.
no code implementations • ICCV 2015 • Timur Almaev, Brais Martinez, Michel Valstar
We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), with little or no data regarding the target AU.