no code implementations • 17 Apr 2024 • Leena Mathur, Paul Pu Liang, Louis-Philippe Morency
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal that involves creating agents that can sense, perceive, reason about, learn from, and respond to the affect, behavior, and cognition of other agents (human or artificial).
1 code implementation • 18 Oct 2023 • Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, Maarten Sap
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and evaluate their social intelligence.
1 code implementation • 23 May 2023 • Alex Wilf, Syeda Nahida Akter, Leena Mathur, Paul Pu Liang, Sheryl Mathew, Mengrou Shou, Eric Nyberg, Louis-Philippe Morency
The self-supervised objective of masking-and-predicting has led to promising performance gains on a variety of downstream tasks.
no code implementations • 18 May 2023 • Leena Mathur, Maja J Matarić, Louis-Philippe Morency
We find that this body of research has primarily focused on enabling machines to recognize and express affect and emotion.
no code implementations • 31 Jul 2022 • Leena Mathur, Ralph Adolphs, Maja J Matarić
In our multicultural world, affect-aware AI systems that support humans need the ability to perceive affect across cultural variations in emotion expression patterns.
no code implementations • 27 Aug 2021 • Zane Durante, Leena Mathur, Eric Ye, Sichong Zhao, Tejas Ramdas, Khalil Iskarous
To address this problem in the context of Ladin, our paper presents the first analysis of speech representations and machine learning models for classifying 32 phonemes of Ladin.
no code implementations • 17 Aug 2021 • Leena Mathur, Maja J Matarić
Our results motivate future work on unsupervised, affect-aware computational approaches for detecting deception and other social behaviors in the wild.
1 code implementation • 29 Jul 2021 • Leena Mathur, Micol Spitale, Hao Xi, Jieyun Li, Maja J Matarić
Our research informs and motivates future development of empathy perception models that can be leveraged by virtual and robotic agents during human-machine interactions.
no code implementations • 6 Feb 2021 • Leena Mathur, Maja J Matarić
Our subspace-alignment (SA) approach adapts audio-visual representations of deception in lab-controlled low-stakes scenarios to detect deception in real-world, high-stakes situations.
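Subspace alignment for domain adaptation is a standard linear technique: compute a PCA basis for each domain, then learn a linear map that aligns the source basis with the target basis. The sketch below is a minimal, generic illustration of that idea, not the paper's exact pipeline; the function name, the subspace dimension `d`, and the use of plain SVD-based PCA are all illustrative assumptions.

```python
import numpy as np

def subspace_alignment(Xs, Xt, d):
    """Align a source feature matrix Xs with a target domain Xt.

    Xs, Xt: (n_samples, n_features) arrays from source and target domains.
    d: dimensionality of the PCA subspaces (illustrative choice).
    Returns source data aligned to the target subspace, and target
    data projected onto its own subspace.
    """
    def pca_basis(X, d):
        # Top-d principal directions via SVD of the centered data;
        # rows of Vt are principal axes, so return them as columns.
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T  # shape: (n_features, d)

    Ps = pca_basis(Xs, d)  # source subspace basis
    Pt = pca_basis(Xt, d)  # target subspace basis

    # Alignment matrix mapping source coordinates toward the target basis.
    M = Ps.T @ Pt

    Xs_aligned = (Xs - Xs.mean(axis=0)) @ Ps @ M  # source, target-aligned
    Xt_proj = (Xt - Xt.mean(axis=0)) @ Pt         # target, own subspace
    return Xs_aligned, Xt_proj
```

A classifier trained on `Xs_aligned` (e.g., lab-collected features) can then be applied to `Xt_proj` (real-world features), since both now live in comparable `d`-dimensional coordinates.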
no code implementations • 31 Aug 2020 • Leena Mathur, Maja J. Matarić
This approach achieved a higher AUC than existing automated machine learning approaches that used interpretable visual, vocal, and verbal features (but not facial affect) to detect deception in this dataset.