Search Results for author: Michel Valstar

Found 25 papers, 6 papers with code

REACT 2024: the Second Multiple Appropriate Facial Reaction Generation Challenge

1 code implementation • 10 Jan 2024 • Siyang Song, Micol Spitale, Cheng Luo, Cristina Palmero, German Barquero, Hengde Zhu, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth Andre, Hatice Gunes

In dyadic interactions, humans communicate their intentions and state of mind using verbal and non-verbal cues, where multiple different facial reactions might be appropriate in response to a specific speaker behaviour.

REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction Generation Challenge

1 code implementation • 11 Jun 2023 • Siyang Song, Micol Spitale, Cheng Luo, German Barquero, Cristina Palmero, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth Andre, Hatice Gunes

The Multi-modal Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions.

Are 3D Face Shapes Expressive Enough for Recognising Continuous Emotions and Action Unit Intensities?

no code implementations • 3 Jul 2022 • Mani Kumar Tellamekala, Ömer Sümer, Björn W. Schuller, Elisabeth André, Timo Giesbrecht, Michel Valstar

We also study how 3D face shapes perform on AU intensity estimation on the BP4D and DISFA datasets, and report that 3D face features were on par with 2D appearance features for AUs 4, 6, 10, 12, and 25, but not for the entire set of AUs.

3D Face Alignment, Arousal Estimation, +1

Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for In-The-Wild Affect Recognition

no code implementations • 24 Mar 2022 • Vincent Karas, Mani Kumar Tellamekala, Adria Mallol-Ragolta, Michel Valstar, Björn W. Schuller

To clearly understand the performance differences between recurrent and attention models in audiovisual affect recognition, we present a comprehensive evaluation of fusion models based on LSTM-RNNs, self-attention and cross-modal attention, trained for valence and arousal estimation.

Arousal Estimation, Multimodal Emotion Recognition
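For context, a minimal sketch of what a cross-modal attention fusion model for valence/arousal estimation can look like (in PyTorch); the dimensions, module names and two-stream layout are illustrative assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class CrossModalAttentionFusion(nn.Module):
        # Hypothetical two-stream fusion: each modality attends over the other.
        def __init__(self, audio_dim=40, video_dim=512, d_model=128, n_heads=4):
            super().__init__()
            self.audio_proj = nn.Linear(audio_dim, d_model)
            self.video_proj = nn.Linear(video_dim, d_model)
            self.a2v = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.v2a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.head = nn.Linear(2 * d_model, 2)  # per-frame valence, arousal

        def forward(self, audio, video):
            a = self.audio_proj(audio)    # (B, T, d_model)
            v = self.video_proj(video)    # (B, T, d_model)
            a_att, _ = self.a2v(a, v, v)  # audio queries attend to video
            v_att, _ = self.v2a(v, a, a)  # video queries attend to audio
            return self.head(torch.cat([a_att, v_att], dim=-1))  # (B, T, 2)

    model = CrossModalAttentionFusion()
    preds = model(torch.randn(2, 100, 40), torch.randn(2, 100, 512))
    print(preds.shape)  # torch.Size([2, 100, 2])

Swapping the two attention blocks for an nn.LSTM over the concatenated projections would give a recurrent baseline of the kind the paper compares against.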

Two-stage Temporal Modelling Framework for Video-based Depression Recognition using Graph Representation

no code implementations • 30 Nov 2021 • Jiaqi Xu, Siyang Song, Keerthy Kusumam, Hatice Gunes, Michel Valstar

The short-term depressive behaviour modelling stage first learns deep depression-related facial behavioural features at multiple short temporal scales, where a Depression Feature Enhancement (DFE) module is proposed to enhance the depression-related cues at all temporal scales and remove non-depression noise.

Learning Graph Representation of Person-specific Cognitive Processes from Audio-visual Behaviours for Automatic Personality Recognition

no code implementations • 26 Oct 2021 • Siyang Song, Zilong Shao, Shashank Jaiswal, Linlin Shen, Michel Valstar, Hatice Gunes

This approach builds on the following two findings from cognitive science: (i) human cognition partially determines expressed behaviour and is directly linked to true personality traits; and (ii) in dyadic interactions, individuals' nonverbal behaviours are influenced by their conversational partner's behaviours.

Neural Architecture Search

A Transfer Learning approach to Heatmap Regression for Action Unit intensity estimation

no code implementations • 14 Apr 2020 • Ioanna Ntinou, Enrique Sanchez, Adrian Bulat, Michel Valstar, Georgios Tzimiropoulos

Action Units (AUs) are geometrically-based atomic facial muscle movements known to produce appearance changes at specific facial locations.

Face Alignment, regression, +1

A recurrent cycle consistency loss for progressive face-to-face synthesis

1 code implementation • 14 Apr 2020 • Enrique Sanchez, Michel Valstar

To the best of our knowledge, we are the first to propose a loss to overcome the limitation of the cycle consistency loss, and the first to propose an "in-the-wild" landmark-guided synthesis approach.

Face Generation
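For reference, the standard cycle consistency term whose limitation this paper targets looks roughly as follows; this is a sketch of the baseline loss, not the proposed recurrent variant, and the generator G with its landmark-conditioning interface is an assumed signature:

    import torch.nn.functional as F

    def cycle_consistency_loss(G, x, src_landmarks, tgt_landmarks):
        # G(image, landmarks) is a hypothetical landmark-guided generator.
        # Translate to the target geometry, map back to the source geometry,
        # and penalise the reconstruction error.
        fake = G(x, tgt_landmarks)
        recon = G(fake, src_landmarks)
        return F.l1_loss(recon, x)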

EMOPAIN Challenge 2020: Multimodal Pain Evaluation from Facial and Bodily Expressions

no code implementations • 21 Jan 2020 • Joy O. Egede, Siyang Song, Temitayo A. Olugbade, Chongyang Wang, Amanda Williams, Hongying Meng, Min Aung, Nicholas D. Lane, Michel Valstar, Nadia Bianchi-Berthouze

The EmoPain 2020 Challenge is the first international competition aimed at creating a uniform platform for the comparison of machine learning and multimedia processing methods of automatic chronic pain assessment from human expressive behaviour, and also the identification of pain-related behaviours.

Emotion Recognition

AVEC 2019 Workshop and Challenge: State-of-Mind, Detecting Depression with AI, and Cross-Cultural Affect Recognition

no code implementations • 10 Jul 2019 • Fabien Ringeval, Björn Schuller, Michel Valstar, Nicholas Cummins, Roddy Cowie, Leili Tavabi, Maximilian Schmitt, Sina Alisamir, Shahin Amiriparian, Eva-Maria Messner, Siyang Song, Shuo Liu, Ziping Zhao, Adria Mallol-Ragolta, Zhao Ren, Mohammad Soleymani, Maja Pantic

The Audio/Visual Emotion Challenge and Workshop (AVEC 2019) "State-of-Mind, Detecting Depression with AI, and Cross-cultural Affect Recognition" is the ninth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audiovisual health and emotion analysis, with all participants competing strictly under the same conditions.

Emotion Recognition

Inferring Dynamic Representations of Facial Actions from a Still Image

no code implementations • 4 Apr 2019 • Siyang Song, Enrique Sánchez-Lozano, Linlin Shen, Alan Johnston, Michel Valstar

We present a novel approach to capture multiple scales of such temporal dynamics, with an application to facial Action Unit (AU) intensity estimation and dimensional affect estimation.

Triple consistency loss for pairing distributions in GAN-based face synthesis

1 code implementation • 8 Nov 2018 • Enrique Sanchez, Michel Valstar

To show this is effective, we incorporate the triple consistency loss into the training of a new landmark-guided face-to-face synthesis approach, where, contrary to previous works, the generated images can simultaneously undergo a large transformation in both expression and pose.

Attribute, Face Generation, +2
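A hedged sketch of a triple-consistency-style term: the output should agree whether the target landmarks are reached in one hop or through an intermediate pose. The function and argument names, and the choice of an L1 penalty, are illustrative assumptions:

    import torch.nn.functional as F

    def triple_consistency_loss(G, x, mid_landmarks, tgt_landmarks):
        # G(image, landmarks) is a hypothetical landmark-guided generator.
        direct = G(x, tgt_landmarks)                     # one-hop synthesis
        via_mid = G(G(x, mid_landmarks), tgt_landmarks)  # two-hop synthesis
        return F.l1_loss(via_mid, direct)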

Joint Action Unit localisation and intensity estimation through heatmap regression

1 code implementation • 9 May 2018 • Enrique Sanchez-Lozano, Georgios Tzimiropoulos, Michel Valstar

Contrary to previous works that try to learn an unsupervised representation of the Action Unit regions, we propose to directly and jointly estimate all AU intensities through heatmap regression, along with the location in the face where they cause visible changes.

regression
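The target encoding idea can be sketched as a 2D Gaussian placed at the facial location an AU affects, with its amplitude scaled by the intensity label; the sigma, map size and 0-5 intensity scale below are assumptions for illustration, not the paper's exact settings:

    import numpy as np

    def au_heatmap(size, centre, intensity, sigma=8.0, max_intensity=5.0):
        # Gaussian bump whose peak value encodes the AU intensity label.
        ys, xs = np.mgrid[0:size, 0:size]
        cx, cy = centre
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        return (intensity / max_intensity) * g

    hm = au_heatmap(64, centre=(30, 40), intensity=3)
    print(round(float(hm.max()), 2))  # 0.6 -> intensity 3 on a 0-5 scale

A network regressing such maps recovers both where the AU fires (the argmax location) and how strongly (the peak height).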

Fusing Deep Learned and Hand-Crafted Features of Appearance, Shape, and Dynamics for Automatic Pain Estimation

no code implementations • 17 Jan 2017 • Joy Egede, Michel Valstar, Brais Martinez

Automatic continuous-time, continuous-value assessment of a patient's pain from face video is highly sought after by the medical profession.

Time Series, Time Series Analysis

A Functional Regression approach to Facial Landmark Tracking

no code implementations • 7 Dec 2016 • Enrique Sánchez-Lozano, Georgios Tzimiropoulos, Brais Martinez, Fernando de la Torre, Michel Valstar

This paper presents a Functional Regression solution to the least squares problem, which we coin Continuous Regression, resulting in the first real-time incremental face tracker.

Face Detection, Incremental Learning, +2

Automatic Detection of ADHD and ASD from Expressive Behaviour in RGBD Data

no code implementations • 7 Dec 2016 • Shashank Jaiswal, Michel Valstar, Alinda Gillott, David Daley

In this work, we present a novel methodology to aid diagnostic predictions about the presence/absence of ADHD and ASD by automatic visual analysis of a person's behaviour.

A CNN Cascade for Landmark Guided Semantic Part Segmentation

no code implementations • 30 Sep 2016 • Aaron Jackson, Michel Valstar, Georgios Tzimiropoulos

This paper proposes a CNN cascade for semantic part segmentation guided by pose-specific information encoded in terms of a set of landmarks (or keypoints).

Pose Estimation, Segmentation

Cascaded Continuous Regression for Real-time Incremental Face Tracking

no code implementations • 3 Aug 2016 • Enrique Sánchez-Lozano, Brais Martinez, Georgios Tzimiropoulos, Michel Valstar

We then derive the incremental learning updates for CCR (iCCR) and show that it is an order of magnitude faster than standard incremental learning for cascaded regression, bringing the time required for the update from seconds down to a fraction of a second, thus enabling real-time tracking.

Face Alignment, Incremental Learning, +2
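The speed claim can be illustrated with a textbook recursive least-squares update, which folds a new sample into a linear regressor in O(d^2) time instead of refitting from scratch; this is a generic sketch of incremental regression, not the paper's iCCR derivation:

    import numpy as np

    class IncrementalLinearRegressor:
        def __init__(self, dim, reg=1e-3):
            self.P = np.eye(dim) / reg  # running inverse covariance
            self.w = np.zeros(dim)      # regression weights

        def update(self, x, y):
            # Sherman-Morrison rank-1 update: no matrix re-inversion needed.
            Px = self.P @ x
            k = Px / (1.0 + x @ Px)         # gain vector
            self.w += k * (y - x @ self.w)  # correct by the prediction error
            self.P -= np.outer(k, Px)

    model = IncrementalLinearRegressor(dim=4)
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=4)
    for _ in range(200):
        x = rng.normal(size=4)
        model.update(x, x @ w_true)
    print(np.allclose(model.w, w_true, atol=1e-2))  # True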

AVEC 2016 - Depression, Mood, and Emotion Recognition Workshop and Challenge

no code implementations • 5 May 2016 • Michel Valstar, Jonathan Gratch, Björn Schuller, Fabien Ringeval, Denis Lalanne, Mercedes Torres Torres, Stefan Scherer, Giota Stratou, Roddy Cowie, Maja Pantic

The Audio/Visual Emotion Challenge and Workshop (AVEC 2016) "Depression, Mood and Emotion" will be the sixth competition event aimed at comparison of multimedia processing and machine learning methods for automatic audio, visual and physiological depression and emotion analysis, with all participants competing under strictly the same conditions.

Emotion Recognition

TRIC-track: Tracking by Regression With Incrementally Learned Cascades

no code implementations • ICCV 2015 • Xiaomeng Wang, Michel Valstar, Brais Martinez, Muhammad Haris Khan, Tony Pridmore

This paper proposes a novel approach to part-based tracking that replaces local matching of an appearance model with direct prediction of the displacement between local image patches and part locations.

Incremental Learning, regression

Learning to Transfer: Transferring Latent Task Structures and Its Application to Person-Specific Facial Action Unit Detection

no code implementations • ICCV 2015 • Timur Almaev, Brais Martinez, Michel Valstar

We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), and little or no data regarding the target AU.

Action Unit Detection, Facial Action Unit Detection, +1
