Search Results for author: Donghoon Shin

Found 7 papers, 3 papers with code

From Paper to Card: Transforming Design Implications with Generative AI

no code implementations • 12 Mar 2024 • Donghoon Shin, Lucy Lu Wang, Gary Hsieh

Communicating design implications is common within the HCI community when publishing academic papers, yet these papers are rarely read and used by designers.

AI-Assisted Causal Pathway Diagram for Human-Centered Design

1 code implementation • 12 Mar 2024 • Ruican Zhong, Donghoon Shin, Rosemary Meza, Predrag Klasnja, Lucas Colusso, Gary Hsieh

This paper explores the integration of causal pathway diagrams (CPD) into human-centered design (HCD), investigating how these diagrams can enhance the early stages of the design process.

PlanFitting: Tailoring Personalized Exercise Plans with Large Language Models

no code implementations • 22 Sep 2023 • Donghoon Shin, Gary Hsieh, Young-Ho Kim

A personally tailored exercise regimen is crucial to ensuring sufficient physical activity, yet challenging to create, as people have complex schedules and considerations, and creating plans often requires iteration with experts.

Exploring the Effects of AI-assisted Emotional Support Processes in Online Mental Health Community

no code implementations • 21 Feb 2022 • Donghoon Shin, Subeen Park, Esther Hehsun Kim, SooMin Kim, Jinwook Seo, Hwajung Hong

Social support in online mental health communities (OMHCs) is an effective and accessible way of managing mental wellbeing.

Call for Customized Conversation: Customized Conversation Grounding Persona and Knowledge

2 code implementations • 16 Dec 2021 • Yoonna Jang, Jungwoo Lim, Yuna Hur, Dongsuk Oh, Suhyune Son, Yeonsoo Lee, Donghoon Shin, Seungryong Kim, Heuiseok Lim

Humans usually have conversations by making use of prior knowledge about a topic and background information about the people they are talking to.

Characterizing Human Explanation Strategies to Inform the Design of Explainable AI for Building Damage Assessment

no code implementations • 4 Nov 2021 • Donghoon Shin, Sachin Grover, Kenneth Holstein, Adam Perer

Explainable AI (XAI) is a promising means of supporting human-AI collaboration for high-stakes visual detection tasks, such as damage detection from satellite imagery, as fully automated approaches are unlikely to be perfectly safe and reliable.

Explainable Artificial Intelligence (XAI)
