Search Results for author: Jeffrey F. Cohn

Found 11 papers, 3 papers with code

Learning to Generate Context-Sensitive Backchannel Smiles for Embodied AI Agents with Applications in Mental Health Dialogues

1 code implementation • 13 Feb 2024 • Maneesh Bilalpur, Mert Inan, Dorsa Zeinali, Jeffrey F. Cohn, Malihe Alikhani

To improve the rapport-building capabilities of embodied agents, we annotated backchannel smiles in videos of intimate face-to-face conversations over topics such as mental health, illness, and relationships.

Neural Mixed Effects for Nonlinear Personalized Predictions

1 code implementation • 13 Jun 2023 • Torsten Wörtwein, Nicholas Allen, Lisa B. Sheeber, Randy P. Auerbach, Jeffrey F. Cohn, Louis-Philippe Morency

Empirically, we observe that NME improves performance across six unimodal and multimodal datasets, including a smartphone dataset to predict daily mood and a mother-adolescent dataset to predict affective state sequences, where half the mothers experience at least moderate symptoms of depression.

Personalized Federated Deep Learning for Pain Estimation From Face Images

1 code implementation • 12 Jan 2021 • Ognjen Rudovic, Nicolas Tobis, Sebastian Kaltwang, Björn Schuller, Daniel Rueckert, Jeffrey F. Cohn, Rosalind W. Picard

A potential approach to tackling this is Federated Learning (FL), which enables multiple parties to collaboratively learn a shared prediction model by using parameters of locally trained models while keeping raw training data locally.

Federated Learning
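The shared-model idea described above can be sketched as a minimal FedAvg-style loop (an illustration of the general Federated Learning scheme, not the paper's personalized method; the linear-regression clients and all function names here are hypothetical):

```python
# Minimal FedAvg-style sketch: clients train on local data, only model
# parameters leave the client, and the server averages them by data size.
# Illustrative only -- not the paper's personalized FL approach.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training (linear regression via gradient descent).
    The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data, rounds=10):
    """Server loop: broadcast global weights, collect locally trained
    weights, and aggregate them weighted by each client's data size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

# Three simulated clients drawing from the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = fed_avg(np.zeros(2), clients)
```

After a few rounds the averaged model recovers the shared structure even though no client's raw data was ever pooled.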

FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge

no code implementations • 14 Feb 2017 • Michel F. Valstar, Enrique Sánchez-Lozano, Jeffrey F. Cohn, László A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, Maja Pantic

The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Units occurrence and intensity under different camera views.

Benchmarking • Facial Action Unit Detection

Modeling Spatial and Temporal Cues for Multi-label Facial Action Unit Detection

no code implementations • 2 Aug 2016 • Wen-Sheng Chu, Fernando de la Torre, Jeffrey F. Cohn

To model temporal dependencies, Long Short-Term Memory (LSTM) networks are stacked on top of these representations, regardless of the lengths of input videos.

Action Unit Detection • Facial Action Unit Detection
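The stacking idea above can be illustrated with a tiny NumPy LSTM (a sketch with random weights, not the paper's model): each layer processes per-frame features recurrently, so sequences of any length work, and one layer's hidden-state sequence feeds the next.

```python
# Illustrative stacked-LSTM forward pass over variable-length frame
# features; weights are random and hypothetical, not a trained model.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(frames, Wx, Wh, b):
    """Run one LSTM layer over a sequence of frame feature vectors.
    Returns the hidden state for every frame, so the output can serve
    as the input sequence for the next stacked layer."""
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    hs = []
    for x in frames:
        z = Wx @ x + Wh @ h + b          # all four gates at once, shape (4H,)
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)       # update cell state
        h = o * np.tanh(c)               # update hidden state
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(0)
D, H = 8, 4  # per-frame feature dim, hidden size
params = [
    (rng.normal(scale=0.1, size=(4*H, D)), rng.normal(scale=0.1, size=(4*H, H)), np.zeros(4*H)),
    (rng.normal(scale=0.1, size=(4*H, H)), rng.normal(scale=0.1, size=(4*H, H)), np.zeros(4*H)),
]

def run(seq):
    for Wx, Wh, b in params:
        seq = lstm_forward(seq, Wx, Wh, b)
    return seq

# The same stacked network handles videos of different lengths.
h_long = run(rng.normal(size=(20, D)))
h_short = run(rng.normal(size=(7, D)))
```

Because the recurrence consumes one frame at a time, no padding or fixed video length is needed.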

Self-Adaptive Matrix Completion for Heart Rate Estimation From Face Videos Under Realistic Conditions

no code implementations • CVPR 2016 • Sergey Tulyakov, Xavier Alameda-Pineda, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn, Nicu Sebe

Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured on face videos and, surprisingly, be used to estimate the heart rate (HR).

Heart rate estimation • Matrix Completion
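The principle described above — subtle skin-color changes carry a pulse signal — can be sketched in a toy example (an illustration of the general remote-photoplethysmography idea, not the paper's self-adaptive matrix completion method; the simulated trace stands in for a per-frame skin-color average):

```python
# Toy heart-rate-from-video sketch: average skin-pixel intensity per frame
# yields a noisy periodic trace; the spectral peak in a plausible band
# gives the heart rate. The trace is simulated here for illustration.
import numpy as np

fps = 30.0
t = np.arange(0, 10, 1 / fps)       # 10 s of video at 30 fps
hr_hz = 72 / 60.0                   # simulate a 72 bpm pulse

# Stand-in for the per-frame mean skin color: weak pulse plus camera noise.
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * hr_hz * t) + rng.normal(scale=0.3, size=t.size)

# Remove the DC component, then pick the dominant frequency within a
# physiologically plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm).
trace = trace - trace.mean()
spec = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(trace.size, d=1 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spec[band])]
```

Restricting the search to a plausible band is what makes the weak, noisy signal usable; the paper's contribution lies in recovering that signal robustly under realistic conditions.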

Unsupervised Synchrony Discovery in Human Interaction

no code implementations • ICCV 2015 • Wen-Sheng Chu, Jiabei Zeng, Fernando de la Torre, Jeffrey F. Cohn, Daniel S. Messinger

We evaluate the effectiveness of our approach on multiple databases, including human actions in the CMU Mocap dataset, spontaneous facial behaviors in a group-formation task dataset, and a parent-infant interaction dataset.

Computational Efficiency
