Search Results for author: Hae Won Park

Found 13 papers, 4 papers with code

MRF-Chat: Improving Dialogue with Markov Random Fields

no code implementations EMNLP 2021 Ishaan Grover, Matthew Huggins, Cynthia Breazeal, Hae Won Park

Recent state-of-the-art approaches in open-domain dialogue include training end-to-end deep-learning models to learn various conversational features, such as the emotional content of responses, symbolic transitions of dialogue context in a knowledge graph, and the personas of the agent and the user, among others.

Retrieval

A Demonstration of Adaptive Collaboration of Large Language Models for Medical Decision-Making

1 code implementation 31 Oct 2024 Yubin Kim, Chanwoo Park, Hyewon Jeong, Cristina Grau-Vilchez, Yik Siu Chan, Xuhai Xu, Daniel McDuff, Hyeonhoon Lee, Cynthia Breazeal, Hae Won Park

Medical Decision-Making (MDM) is a multi-faceted process that requires clinicians to assess complex multi-modal patient data, often collaboratively.

Decision Making

HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs

1 code implementation 27 May 2024 Jocelyn Shen, Joel Mire, Hae Won Park, Cynthia Breazeal, Maarten Sap

We introduce a novel, theory-based taxonomy, HEART (Human Empathy and Narrative Taxonomy), that delineates elements of narrative style that can lead to empathy with the narrator of a story.

EmpathicStories++: A Multimodal Dataset for Empathy towards Personal Experiences

no code implementations 24 May 2024 Jocelyn Shen, Yubin Kim, Mohit Hulse, Wazeer Zulfikar, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park

Modeling empathy is a complex endeavor that is rooted in interpersonal and experiential dimensions of human interaction, and remains an open problem within AI.

AI Agent

MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making

1 code implementation 22 Apr 2024 Yubin Kim, Chanwoo Park, Hyewon Jeong, Yik Siu Chan, Xuhai Xu, Daniel McDuff, Hyeonhoon Lee, Marzyeh Ghassemi, Cynthia Breazeal, Hae Won Park

MDAgents achieved the best performance in seven out of ten benchmarks on tasks requiring an understanding of medical knowledge and multi-modal reasoning, showing a significant improvement of up to 4.2% (p < 0.05) compared to previous methods' best performances.

Decision Making Medical Diagnosis +1

Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback

no code implementations 17 Mar 2024 Dong Won Lee, Hae Won Park, Yoon Kim, Cynthia Breazeal, Louis-Philippe Morency

We describe an approach for aligning an LLM-based dialogue agent based on global (i.e., dialogue-level) rewards, while also taking into account naturally occurring multimodal signals.

Health-LLM: Large Language Models for Health Prediction via Wearable Sensor Data

1 code implementation 12 Jan 2024 Yubin Kim, Xuhai Xu, Daniel McDuff, Cynthia Breazeal, Hae Won Park

Notably, we observe that our context enhancement can yield up to a 23.8% improvement in performance.

Modeling Empathic Similarity in Personal Narratives

no code implementations 23 May 2023 Jocelyn Shen, Maarten Sap, Pedro Colon-Hernandez, Hae Won Park, Cynthia Breazeal

The most meaningful connections between people are often fostered through expression of shared vulnerability and emotional experiences in personal narratives.

Retrieval Semantic Similarity +1

HIINT: Historical, Intra- and Inter- personal Dynamics Modeling with Cross-person Memory Transformer

no code implementations 21 May 2023 Yubin Kim, Dong Won Lee, Paul Pu Liang, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park

Accurately modeling affect dynamics, i.e., the changes and fluctuations in emotions and affective displays during human conversations, is crucial for understanding human interactions.

Language Modelling Large Language Model

Multipar-T: Multiparty-Transformer for Capturing Contingent Behaviors in Group Conversations

no code implementations 19 Apr 2023 Dong Won Lee, Yubin Kim, Rosalind Picard, Cynthia Breazeal, Hae Won Park

As we move closer to real-world AI systems, AI agents must be able to deal with multiparty (group) conversations.

Joint Engagement Classification using Video Augmentation Techniques for Multi-person Human-robot Interaction

no code implementations 28 Dec 2022 Yubin Kim, Huili Chen, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park

This work serves as a first step toward fully unlocking the potential of end-to-end video understanding models, pre-trained on large public datasets and augmented with data augmentation and visualization techniques, for affect recognition in multi-person human-robot interaction in the wild.

Data Augmentation Face Swapping +1

Explainable AI for Suicide Risk Assessment Using Eye Activities and Head Gestures

no code implementations 10 Jun 2022 Siyu Liu, Catherine Lu, Sharifa Alghowinem, Lea Gotoh, Cynthia Breazeal, Hae Won Park

The prevalence of suicide has been on the rise since the 20th century, causing severe emotional damage to individuals, families, and communities alike.

feature selection

Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset

no code implementations 20 Aug 2020 Huili Chen, Yue Zhang, Felix Weninger, Rosalind Picard, Cynthia Breazeal, Hae Won Park

Automatic speech-based affect recognition of individuals in dyadic conversation is a challenging task, in part because of its heavy reliance on manual pre-processing.
