Search Results for author: Yuyang Gao

Found 12 papers, 4 papers with code

DUE: Dynamic Uncertainty-Aware Explanation Supervision via 3D Imputation

no code implementations • 16 Mar 2024 • Qilong Zhao, Yifei Zhang, Mengdan Zhu, Siyi Gu, Yuyang Gao, Xiaofeng Yang, Liang Zhao

Explanation supervision aims to enhance deep learning models by integrating additional signals to guide the generation of model explanations, showcasing notable improvements in both the predictability and explainability of the model. (An illustrative sketch of this general idea appears after this entry.)

Imputation
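
The DUE entry above describes explanation supervision: adding a training signal that aligns the model's explanation (e.g., a saliency map) with human annotation masks. Below is a minimal sketch of that general idea in PyTorch; the input-gradient saliency, the `human_masks` tensor, and the `lambda_exp` weight are illustrative assumptions, not the authors' DUE method.

```python
import torch
import torch.nn.functional as F

def explanation_supervision_loss(model, images, labels, human_masks, lambda_exp=0.5):
    """Task loss plus a term aligning input-gradient saliency with human masks.

    A generic sketch of explanation supervision, not the DUE method itself.
    `human_masks` is assumed to hold per-pixel annotations in [0, 1], shape (B, H, W).
    """
    images = images.clone().requires_grad_(True)
    logits = model(images)
    task_loss = F.cross_entropy(logits, labels)

    # Simple saliency: gradient of the target-class score w.r.t. the input pixels.
    score = logits.gather(1, labels.view(-1, 1)).sum()
    grads, = torch.autograd.grad(score, images, create_graph=True)
    saliency = grads.abs().mean(dim=1)  # (B, H, W)

    # Normalize each saliency map to [0, 1] before comparing with the annotation.
    flat = saliency.flatten(1)
    saliency = (flat / (flat.max(dim=1, keepdim=True).values + 1e-8)).view_as(saliency)

    explanation_loss = F.mse_loss(saliency, human_masks)
    return task_loss + lambda_exp * explanation_loss
```

In training, this loss would simply replace the plain cross-entropy term; `create_graph=True` keeps the saliency differentiable so the alignment term can be backpropagated.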

3DPFIX: Improving Remote Novices' 3D Printing Troubleshooting through Human-AI Collaboration

no code implementations • 29 Jan 2024 • Nahyun Kwon, Tong Sun, Yuyang Gao, Liang Zhao, Xu Wang, Jeeeun Kim, Sungsoo Ray Hong

While troubleshooting is an essential part of 3D printing, the process remains challenging for many remote novices even with the help of well-developed online resources, such as troubleshooting archives and community help forums.

A Comprehensive Empirical Study of Bugs in Open-Source Federated Learning Frameworks

no code implementations • 9 Aug 2023 • Weijie Shao, Yuyang Gao, Fu Song, Sen Chen, Lingling Fan, JingZhu He

Federated learning (FL) is a distributed machine learning (ML) paradigm that allows multiple clients to collaboratively train shared models without compromising the privacy of their data.

Federated Learning

Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations

1 code implementation • 8 Jul 2023 • Tong Steven Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Liang Zhao, Young-Ho Kim, Sungsoo Ray Hong

To mitigate this gap, we designed DeepFuse, the first interactive design that realizes a direct feedback loop between a user and a CNN for diagnosing and revising the CNN's vulnerabilities using local explanations.

Explainable Artificial Intelligence (XAI)

MAGI: Multi-Annotated Explanation-Guided Learning

no code implementations • ICCV 2023 • Yifei Zhang, Siyi Gu, Yuyang Gao, Bo Pan, Xiaofeng Yang, Liang Zhao

This technique aims to improve the predictability of the model by incorporating human understanding of the prediction process into the training phase.

Variational Inference

Saliency-Augmented Memory Completion for Continual Learning

1 code implementation • 26 Dec 2022 • Guangji Bai, Chen Ling, Yuyang Gao, Liang Zhao

Specifically, we propose to store in episodic memory only the part of each image most important to the task, via saliency-map extraction and memory encoding. (A generic sketch of this idea appears after this entry.)

Bilevel Optimization Continual Learning +1
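
The entry above keeps only the task-relevant part of each image in episodic memory. The sketch below illustrates that general idea, compressing a replay sample down to its most salient pixels; the top-k selection and the hypothetical `keep_frac` parameter are illustrative assumptions, not the paper's actual memory encoding or completion mechanism.

```python
import torch

def compress_for_memory(image, saliency, keep_frac=0.1):
    """Keep only the most salient pixels of an image for episodic memory.

    A generic sketch of saliency-based memory compression (not the paper's
    exact encoder): pixels outside the top `keep_frac` fraction by saliency
    are dropped, and only the kept values plus a boolean mask are stored.
    """
    c, h, w = image.shape
    flat_sal = saliency.flatten()                      # (H*W,)
    k = max(1, int(keep_frac * flat_sal.numel()))
    topk_idx = flat_sal.topk(k).indices                # most salient pixel positions

    mask = torch.zeros_like(flat_sal, dtype=torch.bool)
    mask[topk_idx] = True
    mask = mask.view(h, w)

    values = image[:, mask]                            # (C, k) kept pixel values
    return {"values": values, "mask": mask}

def reconstruct_from_memory(entry, image_shape):
    """Rebuild a sparse image from a stored memory entry for later replay."""
    c, h, w = image_shape
    image = torch.zeros(c, h, w)
    image[:, entry["mask"]] = entry["values"]
    return image
```

Storing roughly 10% of the pixels per sample trades replay fidelity for memory budget, which is the core idea behind saliency-guided episodic memory.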

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning

no code implementations • 7 Dec 2022 • Yuyang Gao, Siyi Gu, Junji Jiang, Sungsoo Ray Hong, Dazhou Yu, Liang Zhao

As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving a conventional model accuracy metric to infusing advanced human virtues such as fairness, accountability, transparency (FAccT), and unbiasedness.

Explainable artificial intelligence Explainable Artificial Intelligence (XAI) +1

RES: A Robust Framework for Guiding Visual Explanation

1 code implementation • 27 Jun 2022 • Yuyang Gao, Tong Steven Sun, Guangji Bai, Siyi Gu, Sungsoo Ray Hong, Liang Zhao

Despite the fast progress of explanation techniques for modern Deep Neural Networks (DNNs), where the main focus has been on how to generate explanations, more advanced research questions that examine the quality of the explanation itself (e.g., whether the explanations are accurate) and improve that quality (e.g., how to adjust the model to generate more accurate explanations when they are inaccurate) remain relatively under-explored.

Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment

1 code implementation • 6 Feb 2022 • Yuyang Gao, Tong Sun, Liang Zhao, Sungsoo Hong

We propose a novel framework of Interactive Attention Alignment (IAA) that aims at realizing human-steerable Deep Neural Networks (DNNs).

Gender Classification

Schematic Memory Persistence and Transience for Efficient and Robust Continual Learning

no code implementations • 5 May 2021 • Yuyang Gao, Giorgio A. Ascoli, Liang Zhao

However, since forgetting is inevitable given bounded memory and unbounded task loads, 'how to reasonably forget' is a problem continual learning must address in order to reduce the performance gap between AIs and humans, in terms of 1) memory efficiency, 2) generalizability, and 3) robustness when dealing with noisy data.

Continual Learning

DynGraph2Seq: Dynamic-Graph-to-Sequence Interpretable Learning for Health Stage Prediction in Online Health Forums

no code implementations • 22 Aug 2019 • Yuyang Gao, Lingfei Wu, Houman Homayoun, Liang Zhao

In this paper, we first formulate the transition of user activities as a dynamic graph with multi-attributed nodes, then formalize the health stage inference task as a dynamic graph-to-sequence learning problem, and hence propose a novel dynamic graph-to-sequence neural network architecture (DynGraph2Seq) to address all of these challenges. (A toy sketch of this formulation appears after this entry.)

Graph-to-Sequence
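
The DynGraph2Seq entry above casts health-stage inference as learning from a sequence of graph snapshots to a sequence of stages. The code below is a toy encoder-decoder sketch of that general dynamic-graph-to-sequence formulation; the single neighbor-averaging step, the GRU decoder, and the layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ToyDynGraph2Seq(nn.Module):
    """Toy dynamic-graph-to-sequence model (an illustrative sketch, not DynGraph2Seq):
    each graph snapshot is encoded by one round of neighbor averaging and mean
    pooling, and the snapshot embeddings are decoded into per-step stage logits.
    """

    def __init__(self, node_dim, hidden_dim, num_stages):
        super().__init__()
        self.node_proj = nn.Linear(node_dim, hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.stage_head = nn.Linear(hidden_dim, num_stages)

    def encode_snapshot(self, node_feats, adj):
        # One step of neighbor averaging as a crude stand-in for a GNN layer.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = adj @ node_feats / deg
        h = torch.relu(self.node_proj(agg))
        return h.mean(dim=0)                     # graph-level embedding

    def forward(self, snapshots):
        # snapshots: list of (node_feats [N_t, node_dim], adj [N_t, N_t]) over time.
        embs = torch.stack([self.encode_snapshot(x, a) for x, a in snapshots])
        out, _ = self.rnn(embs.unsqueeze(0))     # (1, T, hidden_dim)
        return self.stage_head(out.squeeze(0))   # (T, num_stages) stage logits
```

Feeding one snapshot per forum-activity window and supervising the per-step logits with stage labels would recover a basic graph-to-sequence training setup.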
