Search Results for author: Siyi Gu

Found 7 papers, 2 papers with code

Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization

no code implementations • 1 Jul 2024 • Siyi Gu, Minkai Xu, Alexander Powers, Weili Nie, Tomas Geffner, Karsten Kreis, Jure Leskovec, Arash Vahdat, Stefano Ermon

AliDiff shifts the target-conditioned chemical distribution towards regions with higher binding affinity and structural rationality, specified by user-defined reward functions, via the preference optimization approach.
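No code is listed for this paper, so the exact objective is not shown here. As a rough illustration of the generic DPO-style preference loss that preference optimization approaches of this kind build on, the sketch below computes the loss from log-likelihoods of a preferred (higher-reward) and a dispreferred sample under the model and a frozen reference; the function name and `beta` parameter are illustrative, not the paper's API.

```python
import math

def dpo_preference_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Generic DPO-style preference loss (illustrative sketch, not AliDiff's
    exact objective): pushes the model toward the preferred sample relative
    to a frozen reference model.

    logp_w / logp_l: model log-likelihoods of preferred / dispreferred samples
    ref_logp_w / ref_logp_l: the same under the frozen reference model
    beta: temperature controlling deviation from the reference
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): small when the model prefers the winner
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With equal likelihoods the loss is log 2; it decreases as the model assigns relatively higher likelihood to the preferred sample than the reference does.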


DUE: Dynamic Uncertainty-Aware Explanation Supervision via 3D Imputation

no code implementations • 16 Mar 2024 • Qilong Zhao, Yifei Zhang, Mengdan Zhu, Siyi Gu, Yuyang Gao, Xiaofeng Yang, Liang Zhao

Explanation supervision aims to enhance deep learning models by integrating additional signals to guide the generation of model explanations, showcasing notable improvements in both the predictability and explainability of the model.
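The general recipe behind explanation supervision is to add an auxiliary term that penalizes disagreement between the model's explanation (e.g., an attention map) and a human-provided one. The sketch below is a minimal, hypothetical illustration of that idea — not the DUE method itself, whose uncertainty-aware 3D formulation is not reproduced here; the function name and `lam` weight are assumptions.

```python
def explanation_supervised_loss(task_loss, model_attn, human_attn, lam=0.5):
    """Illustrative explanation-supervision objective (not the DUE method):
    combine the ordinary task loss with a penalty on the gap between the
    model's attention map and a human annotation.

    task_loss: scalar prediction loss (e.g., cross-entropy)
    model_attn / human_attn: flattened attention maps of equal length
    lam: weight of the explanation-alignment term
    """
    # mean absolute difference between model and human attention maps
    exp_loss = sum(abs(m - h) for m, h in zip(model_attn, human_attn)) / len(model_attn)
    return task_loss + lam * exp_loss
```

When the model's attention matches the annotation, the objective reduces to the plain task loss; otherwise the extra term steers training toward human-aligned explanations.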


XAI Benchmark for Visual Explanation

no code implementations • 12 Oct 2023 • Yifei Zhang, Siyi Gu, James Song, Bo Pan, Guangji Bai, Liang Zhao

Our proposed benchmarks facilitate a fair evaluation and comparison of visual explanation methods.

Tasks: Decision Making, Explainable Artificial Intelligence, +2

Visual Attention Prompted Prediction and Learning

1 code implementation • 12 Oct 2023 • Yifei Zhang, Siyi Gu, Bo Pan, Guangji Bai, Meikang Qiu, Xiaofeng Yang, Liang Zhao

However, in many real-world situations, it is often desirable to prompt the model with visual attention without retraining it.

Tasks: Decision Making

MAGI: Multi-Annotated Explanation-Guided Learning

no code implementations • ICCV 2023 • Yifei Zhang, Siyi Gu, Yuyang Gao, Bo Pan, Xiaofeng Yang, Liang Zhao

This technique aims to improve the predictability of the model by incorporating human understanding of the prediction process into the training phase.

Tasks: Variational Inference

Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning

no code implementations • 7 Dec 2022 • Yuyang Gao, Siyi Gu, Junji Jiang, Sungsoo Ray Hong, Dazhou Yu, Liang Zhao

As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving a conventional model accuracy metric to infusing advanced human virtues such as fairness, accountability, transparency (FAccT), and unbiasedness.

Tasks: Explainable Artificial Intelligence (XAI), +1

RES: A Robust Framework for Guiding Visual Explanation

1 code implementation • 27 Jun 2022 • Yuyang Gao, Tong Steven Sun, Guangji Bai, Siyi Gu, Sungsoo Ray Hong, Liang Zhao

Despite the fast progress of explanation techniques for modern Deep Neural Networks (DNNs), which focus mainly on "how to generate the explanations", more advanced research questions that examine the quality of the explanation itself (e.g., "whether the explanations are accurate") and improve that quality (e.g., "how to adjust the model to generate more accurate explanations when explanations are inaccurate") remain relatively under-explored.
