Search Results for author: Sungsoo Ray Hong

Found 5 papers, 3 papers with code

3DPFIX: Improving Remote Novices' 3D Printing Troubleshooting through Human-AI Collaboration

no code implementations • 29 Jan 2024 • Nahyun Kwon, Tong Sun, Yuyang Gao, Liang Zhao, Xu Wang, Jeeeun Kim, Sungsoo Ray Hong

While troubleshooting is an essential part of 3D printing, the process remains challenging for many remote novices, even with the help of well-developed online resources such as troubleshooting archives and community help.

Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations

1 code implementation • 8 Jul 2023 • Tong Steven Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Liang Zhao, Young-Ho Kim, Sungsoo Ray Hong

To mitigate the gap, we designed DeepFuse, the first interactive design that realizes a direct feedback loop between a user and a CNN for diagnosing and revising the CNN's vulnerabilities using local explanations.
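For context, a local explanation here is a per-input saliency map. Below is a minimal Grad-CAM-style sketch in PyTorch of how such a map can be computed for a single CNN prediction; the model, layer choice, and input are illustrative assumptions, not DeepFuse's actual implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical setup: a ResNet-18 and its last convolutional stage.
model = models.resnet18(weights=None).eval()  # load pretrained weights in practice
target_layer = model.layer4

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top-class score

# Channel weights = global-average-pooled gradients; the saliency map is the
# ReLU of the weighted activation sum, upsampled to the input resolution.
w = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * activations["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

In a DeepFuse-style loop, a user would inspect maps like this, flag cases where the model attends to the wrong regions, and that feedback would inform model revision.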

Explainable Artificial Intelligence (XAI)

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning

no code implementations • 7 Dec 2022 • Yuyang Gao, Siyi Gu, Junji Jiang, Sungsoo Ray Hong, Dazhou Yu, Liang Zhao

As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving conventional model accuracy metrics to infusing advanced human virtues such as fairness, accountability, transparency (FAccT), and unbiasedness.

Explainable Artificial Intelligence (XAI) +1

RES: A Robust Framework for Guiding Visual Explanation

1 code implementation • 27 Jun 2022 • Yuyang Gao, Tong Steven Sun, Guangji Bai, Siyi Gu, Sungsoo Ray Hong, Liang Zhao

Explanation techniques for modern Deep Neural Networks (DNNs) have progressed rapidly, with the main focus on "how to generate the explanations". More advanced research questions that examine the quality of the explanation itself (e.g., "whether the explanations are accurate") and that improve explanation quality (e.g., "how to adjust the model to generate more accurate explanations when explanations are inaccurate") remain relatively under-explored.
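As a loose sketch of the general idea behind such explanation-guided training (not RES's exact formulation, which is designed to be robust to noisy and incomplete annotations), the task loss is augmented with a term penalizing disagreement between the model's saliency map and a human annotation; the function name and weighting value below are hypothetical.

```python
import torch
import torch.nn.functional as F

def explanation_guided_loss(logits, labels, saliency, human_mask, lam=0.5):
    """Task loss plus a saliency-alignment penalty.

    saliency:   model explanation maps in [0, 1], shape (B, 1, H, W)
    human_mask: binary human attention annotations, same shape
    lam:        weight of the explanation term (hypothetical value)
    """
    task = F.cross_entropy(logits, labels)
    # Simplest alignment choice: mean squared error between the model's
    # explanation and the human annotation.
    align = F.mse_loss(saliency, human_mask)
    return task + lam * align

# Toy usage with random tensors in place of a real batch.
logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
saliency = torch.rand(4, 1, 7, 7)
human_mask = (torch.rand(4, 1, 7, 7) > 0.5).float()
loss = explanation_guided_loss(logits, labels, saliency, human_mask)
```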

Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs

1 code implementation • 23 Apr 2020 • Sungsoo Ray Hong, Jessica Hullman, Enrico Bertini

As the use of machine learning (ML) models in product development and data-driven decision-making has become pervasive across many domains, people's focus has increasingly shifted from building a well-performing model to understanding how their model works.

Decision Making
