no code implementations • 19 Dec 2023 • Payam Jome Yazdian, Eric Liu, Li Cheng, Angelica Lim
This paper proposes MotionScript, a motion-to-text conversion algorithm and natural language representation for human body motions.
no code implementations • 30 Oct 2023 • Yasaman Etesam, Ozge Nilay Yalcin, Chuxuan Zhang, Angelica Lim
Nevertheless, a gap remains in the zero-shot emotional theory of mind task compared to prior work trained on the EMOTIC dataset.
no code implementations • 22 Sep 2023 • Vera Yang, Archita Srivastava, Yasaman Etesam, Chuxuan Zhang, Angelica Lim
In this paper, we explore whether Large Language Models (LLMs) can support the contextual emotion estimation task, by first captioning images, then using an LLM for inference.
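The two-stage approach described above (caption the image, then use an LLM to infer emotion from the caption) can be sketched as follows. Both stages are hypothetical stubs, not the authors' implementation: a real pipeline would call an image-captioning model in `caption_image` and prompt an LLM in `infer_emotion`.

```python
# Hypothetical sketch of a caption-then-infer pipeline for contextual
# emotion estimation. Both stages are stubbed for illustration only.

def caption_image(image_path: str) -> str:
    """Stub captioner: a real system would run a vision-language model here."""
    # Placeholder caption standing in for model output.
    return "A person sits alone on a bench in the rain, head lowered."

def infer_emotion(caption: str) -> str:
    """Stub for the LLM step: map a caption to an emotion label.

    A real system would prompt an LLM with the caption and candidate
    emotions; a keyword lookup here just illustrates the interface.
    """
    cues = {"alone": "sadness", "smiling": "joy", "shouting": "anger"}
    for keyword, emotion in cues.items():
        if keyword in caption.lower():
            return emotion
    return "neutral"

def estimate_contextual_emotion(image_path: str) -> str:
    """Full pipeline: image -> caption -> emotion label."""
    return infer_emotion(caption_image(image_path))
```

With the stubbed caption, `estimate_contextual_emotion("scene.jpg")` returns `"sadness"`; swapping in real captioning and LLM calls preserves the same two-stage interface.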
no code implementations • 15 Aug 2022 • Saba Akhyani, Mehryar Abbasi Boroujeni, Mo Chen, Angelica Lim
Robots and artificial agents that interact with humans should be able to do so without bias or inequity, yet facial perception systems notoriously perform worse for some groups of people than for others.
no code implementations • 10 May 2022 • Paige Tuttosi, Emma Hughson, Akihiro Matsufuji, Angelica Lim
By designing robots to speak in a more social and ambient-appropriate manner we can improve perceived awareness and intelligence for these agents.
1 code implementation • 2 May 2022 • Mina Marmpena, Fernando Garcia, Angelica Lim, Nikolas Hemion, Thomas Wennekers
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration, since humans attribute, and perhaps subconsciously anticipate, such expressive cues when perceiving an agent as engaging, trustworthy, and socially present.
1 code implementation • 10 Dec 2021 • Roya Javadi, Angelica Lim
The portrayal of negative emotions such as anger can vary widely between cultures and contexts, depending on the acceptability of expressing full-blown emotions rather than suppression to maintain harmony.
Ranked #1 on Emotion Classification on MFA
1 code implementation • 29 Sep 2021 • Payam Jome Yazdian, Mo Chen, Angelica Lim
We propose a vector-quantized variational autoencoder structure as well as training techniques to learn a rigorous representation of gesture sequences.
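A minimal sketch of the vector-quantization step at the core of a VQ-VAE: each encoder output for a gesture frame is snapped to its nearest codebook entry, yielding a sequence of discrete tokens. The shapes and codebook size below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Snap each latent vector to its nearest codebook entry (L2 distance).

    latents:  (T, D) encoder outputs for T frames of a gesture sequence.
    codebook: (K, D) learned embedding table.
    Returns (quantized latents (T, D), discrete token indices (T,)).
    """
    # Pairwise squared distances between every latent and codebook entry.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # one discrete token per frame
    return codebook[indices], indices

# Illustrative sizes: 8 frames, 4-dim latents, 16-entry codebook.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))
latents = rng.normal(size=(8, 4))
quantized, tokens = vector_quantize(latents, codebook)
```

During training, a VQ-VAE copies gradients through this non-differentiable lookup (straight-through estimator) and adds codebook and commitment losses; the sketch shows only the forward quantization.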
no code implementations • 28 Oct 2020 • Zhitian Zhang, Jimin Rhim, Taher Ahmadi, Kefan Yang, Angelica Lim, Mo Chen
This article describes a dataset collected from a set of experiments involving human participants and a robot.
no code implementations • 30 Aug 2019 • Pablo Barros, Nikhil Churamani, Angelica Lim, Stefan Wermter
In this paper, we propose a novel dataset composed of dyadic interactions designed, collected and annotated with a focus on measuring the affective impact that eight different stories have on the listener.