no code implementations • EMNLP 2021 • Ishaan Grover, Matthew Huggins, Cynthia Breazeal, Hae Won Park
Recent state-of-the-art approaches in open-domain dialogue include training end-to-end deep-learning models to learn various conversational features, such as the emotional content of responses, symbolic transitions of dialogue contexts in a knowledge graph, and the personas of the agent and the user, among others.
1 code implementation • 31 Oct 2024 • Yubin Kim, Chanwoo Park, Hyewon Jeong, Cristina Grau-Vilchez, Yik Siu Chan, Xuhai Xu, Daniel McDuff, Hyeonhoon Lee, Cynthia Breazeal, Hae Won Park
Medical Decision-Making (MDM) is a multi-faceted process that requires clinicians to assess complex multi-modal patient data, often collaboratively.
1 code implementation • 27 May 2024 • Jocelyn Shen, Joel Mire, Hae Won Park, Cynthia Breazeal, Maarten Sap
We introduce a novel, theory-based taxonomy, HEART (Human Empathy and Narrative Taxonomy) that delineates elements of narrative style that can lead to empathy with the narrator of a story.
no code implementations • 24 May 2024 • Jocelyn Shen, Yubin Kim, Mohit Hulse, Wazeer Zulfikar, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park
Modeling empathy is a complex endeavor that is rooted in interpersonal and experiential dimensions of human interaction, and remains an open problem within AI.
1 code implementation • 22 Apr 2024 • Yubin Kim, Chanwoo Park, Hyewon Jeong, Yik Siu Chan, Xuhai Xu, Daniel McDuff, Hyeonhoon Lee, Marzyeh Ghassemi, Cynthia Breazeal, Hae Won Park
MDAgents achieved the best performance in seven out of ten benchmarks on tasks requiring an understanding of medical knowledge and multi-modal reasoning, showing a significant improvement of up to 4.2% (p < 0.05) compared to previous methods' best performances.
no code implementations • 17 Mar 2024 • Dong Won Lee, Hae Won Park, Yoon Kim, Cynthia Breazeal, Louis-Philippe Morency
We describe an approach for aligning an LLM-based dialogue agent based on global (i.e., dialogue-level) rewards, while also taking into account naturally-occurring multimodal signals.
1 code implementation • 12 Jan 2024 • Yubin Kim, Xuhai Xu, Daniel McDuff, Cynthia Breazeal, Hae Won Park
Notably, we observe that our context enhancement can yield up to 23.8% improvement in performance.
no code implementations • 23 May 2023 • Jocelyn Shen, Maarten Sap, Pedro Colon-Hernandez, Hae Won Park, Cynthia Breazeal
The most meaningful connections between people are often fostered through expression of shared vulnerability and emotional experiences in personal narratives.
no code implementations • 21 May 2023 • Yubin Kim, Dong Won Lee, Paul Pu Liang, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park
Accurately modeling affect dynamics, which refers to the changes and fluctuations in emotions and affective displays during human conversations, is crucial for understanding human interactions.
no code implementations • 19 Apr 2023 • Dong Won Lee, Yubin Kim, Rosalind Picard, Cynthia Breazeal, Hae Won Park
As we move closer to real-world AI systems, AI agents must be able to deal with multiparty (group) conversations.
no code implementations • 28 Dec 2022 • Yubin Kim, Huili Chen, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park
This work serves as a first step toward fully unlocking the potential of end-to-end video understanding models, pre-trained on large public datasets and augmented with data augmentation and visualization techniques, for affect recognition in multi-person human-robot interaction in the wild.
no code implementations • 10 Jun 2022 • Siyu Liu, Catherine Lu, Sharifa Alghowinem, Lea Gotoh, Cynthia Breazeal, Hae Won Park
The prevalence of suicide has been on the rise since the 20th century, causing severe emotional damage to individuals, families, and communities alike.
no code implementations • 20 Aug 2020 • Huili Chen, Yue Zhang, Felix Weninger, Rosalind Picard, Cynthia Breazeal, Hae Won Park
Automatic speech-based affect recognition of individuals in dyadic conversation is a challenging task, in part because of its heavy reliance on manual pre-processing.