no code implementations • 10 Apr 2024 • Jessica Y. Bo, Pan Hao, Brian Y. Lim
Focusing on linear factor explanations (factors $\times$ values = outcome), we introduce Incremental XAI, which automatically partitions explanations for general and atypical instances into Base + Incremental factors to help users read and remember more faithful explanations.
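To make the linear factor formulation concrete, here is a minimal sketch of a Base + Incremental explanation; the factor names, weights, and atypicality flag are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a Base + Incremental linear factor explanation
# (factors x values = outcome). Factor names, weights, and the atypicality
# flag are illustrative assumptions, not the paper's actual parameters.

BASE_FACTORS = {"size_sqft": 120.0, "num_rooms": 8000.0}          # general factors
INCREMENTAL_FACTORS = {"has_pool": 15000.0, "age_years": -900.0}  # added for atypical instances

def explain(instance, atypical):
    """Outcome = sum(weight * value); atypical instances add incremental factors."""
    factors = dict(BASE_FACTORS)
    if atypical:
        factors.update(INCREMENTAL_FACTORS)
    contributions = {name: w * instance.get(name, 0.0) for name, w in factors.items()}
    return contributions, sum(contributions.values())

contribs, outcome = explain({"size_sqft": 90, "num_rooms": 3, "has_pool": 1}, atypical=True)
for name, c in contribs.items():
    print(f"{name}: {c:+,.0f}")
print(f"predicted outcome: {outcome:,.0f}")
```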
no code implementations • 16 Mar 2023 • Hitoshi Matsuyama, Nobuo Kawaguchi, Brian Y. Lim
Indeed, rather than scoring subjectively, sports judges account for their decisions by applying a consistent set of criteria, a rubric, to multiple actions in each performance sequence.
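As a rough illustration of rubric-based scoring over an action sequence (the criterion names and weights below are hypothetical, not taken from the paper):

```python
# Hypothetical rubric: each criterion scores every action in a performance
# sequence, and the final score aggregates weighted criterion scores.
# Criterion names and weights are illustrative, not from the paper.

RUBRIC = {
    "body_alignment": 0.40,
    "landing_stability": 0.35,
    "timing": 0.25,
}

def score_performance(actions):
    """actions: list of dicts mapping criterion -> score in [0, 1]."""
    per_action = [
        sum(RUBRIC[c] * a[c] for c in RUBRIC)  # weighted rubric score per action
        for a in actions
    ]
    return sum(per_action) / len(per_action)   # average over the sequence

sequence = [
    {"body_alignment": 0.9, "landing_stability": 0.7, "timing": 0.8},
    {"body_alignment": 0.6, "landing_stability": 0.8, "timing": 0.9},
]
print(f"rubric score: {score_performance(sequence):.2f}")
```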
no code implementations • 19 Feb 2023 • Yunlong Wang, Shuyuan Shen, Brian Y. Lim
Using explanations from the proxy model, we curated a rubric for adjusting text prompts to optimize image generation for precise emotion expression.
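A hypothetical sketch of how proxy-model attributions and a rubric could drive prompt adjustment; the attribution values and the substitution table are invented for illustration:

```python
# Hypothetical sketch: attribution scores from a proxy emotion classifier
# flag weak prompt terms, and a rubric maps them to stronger alternatives.
# The attribution values and substitution table are invented for illustration.

ATTRIBUTION = {"happy": 0.15, "sunlit": 0.60, "meadow": 0.45}  # proxy-model explanation
RUBRIC = {"happy": "joyful, beaming"}  # replace low-attribution emotion terms

def adjust_prompt(tokens, threshold=0.3):
    return ", ".join(
        RUBRIC.get(t, t) if ATTRIBUTION.get(t, 1.0) < threshold else t
        for t in tokens
    )

print(adjust_prompt(["happy", "sunlit", "meadow"]))
# -> "joyful, beaming, sunlit, meadow"
```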
no code implementations • 2 Feb 2023 • Brian Y. Lim, Joseph P. Cahaly, Chester Y. F. Sng, Adam Chew
Many visualizations have been developed for explainable AI (XAI), but users often must reason further to interpret them.
no code implementations • 30 Jan 2022 • Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions.
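A minimal PyTorch sketch of such a multi-task setup, assuming a shared backbone with heads for class prediction, explanation (CAM) regression, and bias level regression; layer sizes and loss weights are illustrative, and the paper's model is multi-input while this sketch simplifies to one input:

```python
# Minimal PyTorch sketch in the spirit of Debiased-CAM: a shared backbone
# with three heads -- class prediction, an auxiliary head regressing the
# (unbiased) CAM explanation, and a head predicting the input's bias level.
# Sizes and losses are illustrative; the paper's model is multi-input.
import torch
import torch.nn as nn

class DebiasedCAMSketch(nn.Module):
    def __init__(self, num_classes=10, cam_size=7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(cam_size),
        )
        feat = 16 * cam_size * cam_size
        self.class_head = nn.Linear(feat, num_classes)        # primary task
        self.cam_head = nn.Linear(feat, cam_size * cam_size)  # auxiliary: predict unbiased CAM
        self.bias_head = nn.Linear(feat, 1)                   # auxiliary: predict bias level

    def forward(self, x):
        z = self.backbone(x).flatten(1)
        return self.class_head(z), self.cam_head(z), self.bias_head(z)

model = DebiasedCAMSketch()
x = torch.randn(4, 3, 64, 64)
logits, cam_pred, bias_pred = model(x)
# Joint loss: classification + explanation fidelity + bias level regression
loss = (nn.functional.cross_entropy(logits, torch.randint(0, 10, (4,)))
        + nn.functional.mse_loss(cam_pred, torch.rand(4, 49))
        + nn.functional.mse_loss(bias_pred, torch.rand(4, 1)))
loss.backward()
```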
no code implementations • 28 Dec 2021 • Wencan Zhang, Brian Y. Lim
Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations.
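A structural sketch of the three explanation types as interfaces; the internals below are stand-ins for illustration, not the paper's actual models:

```python
# Structural sketch of RexNet-style relatable explanations. Each function
# shows the intended interface; the internals are stand-ins, not the
# paper's actual models.
import numpy as np

def contrastive_saliency(saliency_pred, saliency_contrast):
    """Highlight where evidence for the prediction exceeds the contrast class."""
    return np.clip(saliency_pred - saliency_contrast, 0.0, None)

def counterfactual_synthetic(x, direction, step=0.1):
    """Synthesize a minimally edited input nudged toward the contrast class."""
    return x + step * direction

def contrastive_cues(cue_pred, cue_contrast, names):
    """Rank human-interpretable cues by how much they differ between classes."""
    diffs = cue_pred - cue_contrast
    return sorted(zip(names, diffs), key=lambda t: -abs(t[1]))

x = np.random.rand(16)
x_cf = counterfactual_synthetic(x, direction=-x)  # nudge toward a neutral input
print(contrastive_cues(np.array([0.8, 0.2, 0.5]),
                       np.array([0.3, 0.4, 0.5]),
                       ["pitch", "loudness", "tempo"]))
```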
no code implementations • 21 Sep 2021 • Yunlong Wang, Priyadarshini Venkatesh, Brian Y. Lim
We propose Interpretable Directed Diversity to automatically predict ideation quality and diversity scores, and provide AI explanations (Attribution, Contrastive Attribution, and Counterfactual Suggestions) as feedback on why ideations were scored (low) and how to achieve higher scores.
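A toy sketch of diversity scoring with a counterfactual-style suggestion, using a character-bigram embedding as a stand-in for the paper's actual scoring models:

```python
# Toy sketch: diversity of a new ideation is its minimum embedding distance
# to prior ideations, and a counterfactual-style suggestion points at the
# nearest prior ideation to move away from. The character-bigram embedding
# is a stand-in, not the paper's model.
from collections import Counter
import math

def embed(text):
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def distance(a, b):
    keys = set(a) | set(b)
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

def diversity_score(candidate, prior):
    d, nearest = min((distance(embed(candidate), embed(p)), p) for p in prior)
    return d, f"Too similar to: '{nearest}'; try differing from it."

prior = ["remind users to drink water", "send hydration reminders"]
score, suggestion = diversity_score("reminders to hydrate", prior)
print(score, suggestion)
```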
no code implementations • 21 Sep 2021 • Yunlong Wang, Jiaying Liu, Homin Park, Jordan Schultz-McArdle, Stephanie Rosenthal, Judy Kay, Brian Y. Lim
Finally, we created interfaces to present salient information and conducted a formative user study to gain insights into how SalienTrack could be integrated into an interface for reflection.
no code implementations • ICCV 2021 • Xuejun Zhao, Wencan Zhang, Xiaokui Xiao, Brian Y. Lim
We study this risk for image-based model inversion attacks and identify several attack architectures with increasing performance for reconstructing private image data from model explanations.
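A minimal sketch of an explanation-based inversion attack, assuming the attacker trains a decoder on (explanation, image) pairs; the architecture below is illustrative, not one of the paper's attack models:

```python
# Minimal PyTorch sketch of an explanation-based model inversion attack:
# a decoder is trained to reconstruct the input image from its leaked
# saliency-map explanation. Architecture and sizes are illustrative.
import torch
import torch.nn as nn

class InversionDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 64x64
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),                  # 3-channel image
        )

    def forward(self, saliency):
        return self.net(saliency)

decoder = InversionDecoder()
saliency = torch.rand(8, 1, 32, 32)  # leaked explanation maps
target = torch.rand(8, 3, 64, 64)    # private images (training pairs the attacker holds)
recon = decoder(saliency)
loss = nn.functional.mse_loss(recon, target)
loss.backward()
```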
no code implementations • 23 Jan 2021 • Danding Wang, Wencan Zhang, Brian Y. Lim
Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference.
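For the simplest case, gradient-times-input attribution on a linear model recovers weight times value exactly; a minimal sketch:

```python
# Gradient-x-input feature attribution on a linear model, where each
# feature's attribution is exactly weight * value -- the simplest case of
# measuring how influential each input feature value is for the output.
import torch

weights = torch.tensor([2.0, -1.0, 0.5])
x = torch.tensor([1.0, 3.0, 4.0], requires_grad=True)
output = (weights * x).sum()
output.backward()
attribution = x.grad * x  # gradient x input
print(attribution)        # tensor([ 2., -3.,  2.])
```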
no code implementations • 21 Nov 2019 • Guang Jiang, Mengzhen Shi, Ying Su, Pengcheng An, Brian Y. Lim, Yunlong Wang
Addressing students by name helps a teacher start building rapport with them and thus facilitates their classroom participation.