Search Results for author: Brian Y. Lim

Found 12 papers, 0 papers with code

Incremental XAI: Memorable Understanding of AI with Incremental Explanations

no code implementations • 10 Apr 2024 • Jessica Y. Bo, Pan Hao, Brian Y. Lim

Focusing on linear factor explanations (factors $\times$ values = outcome), we introduce Incremental XAI to automatically partition explanations for general and atypical instances by providing Base + Incremental factors to help users read and remember more faithful explanations.
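
The Base + Incremental idea can be pictured with a plain linear factor explanation: base weights cover typical instances, and an incremental correction is surfaced only for atypical ones. A minimal numpy sketch with made-up factors and weights (nothing here is from the paper):

```python
import numpy as np

# Linear factor explanation: outcome = sum(factor weight * feature value).
# Incremental XAI partitions the weights into Base (memorized once, for
# typical instances) and Incremental (shown only for atypical instances).
factors = ["sqft", "bedrooms", "age"]          # hypothetical factors
base_w = np.array([150.0, 10000.0, -500.0])    # base weights (made up)
incr_w = np.array([20.0, -2000.0, 300.0])      # atypical-instance correction

def explain(x, atypical=False):
    w = base_w + (incr_w if atypical else 0.0)
    contributions = w * x                      # per-factor contribution
    return contributions.sum(), dict(zip(factors, contributions))

outcome, contrib = explain(np.array([1200.0, 3.0, 15.0]))
print(outcome, contrib)
```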

IRIS: Interpretable Rubric-Informed Segmentation for Action Quality Assessment

no code implementations • 16 Mar 2023 • Hitoshi Matsuyama, Nobuo Kawaguchi, Brian Y. Lim

Indeed, to account for their decisions, instead of scoring subjectively, sports judges use a consistent set of criteria, a rubric, across multiple actions in each performance sequence.

Action Quality Assessment
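
As a rough illustration of rubric-informed scoring (the segments and criteria below are invented, not learned by IRIS), each segmented action is judged against per-action rubric criteria and the interpretable subscores are summed:

```python
# Hypothetical rubric-informed scoring: each segmented action is judged
# against its rubric criteria, and the final score is the sum of the
# per-action subscores. Actions, criteria, and weights are illustrative.
rubric = {
    "takeoff": {"height": 1.0, "posture": 0.5},
    "rotation": {"tightness": 1.5, "count": 2.0},
    "entry": {"splash": 1.0},
}

def score(segments):
    """segments: {action_name: {criterion: achieved fraction in [0, 1]}}"""
    total, breakdown = 0.0, {}
    for action, criteria in segments.items():
        s = sum(rubric[action][c] * v for c, v in criteria.items())
        breakdown[action] = s          # interpretable per-action subscore
        total += s
    return total, breakdown

total, parts = score({
    "takeoff": {"height": 0.9, "posture": 1.0},
    "rotation": {"tightness": 0.7, "count": 1.0},
    "entry": {"splash": 0.6},
})
print(total, parts)
```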

RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions

no code implementations • 19 Feb 2023 • Yunlong Wang, Shuyuan Shen, Brian Y. Lim

Using model explanations from the proxy model, we curated a rubric to adjust text prompts and optimize image generation for precise emotion expression.

Image Generation
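
One way to picture the explanation-guided editing loop, using placeholder attribution scores instead of a real proxy model, is to keep or drop prompt keywords by their attributed contribution to the target emotion:

```python
# Toy explanation-guided prompt editing: keep keywords whose (hypothetical)
# attribution toward the target emotion exceeds a threshold, mimicking a
# rubric that refines prompts for precise emotion expression.
def refine_prompt(keywords, attribution, target_emotion, threshold=0.1):
    """attribution(word, emotion) -> float, e.g. from a proxy classifier."""
    kept = [w for w in keywords if attribution(w, target_emotion) >= threshold]
    return " ".join(kept)

# Stand-in attribution scores (a real system would query a proxy model).
scores = {("gloomy", "sadness"): 0.8, ("forest", "sadness"): 0.2,
          ("neon", "sadness"): -0.3, ("rain", "sadness"): 0.6}
attr = lambda w, e: scores.get((w, e), 0.0)

print(refine_prompt(["gloomy", "forest", "neon", "rain"], attr, "sadness"))
# -> "gloomy forest rain"
```

A real pipeline would recompute attributions from the proxy model after each edit rather than reading a fixed table.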

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning

no code implementations • 30 Jan 2022 • Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim

We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions.
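
A hedged sketch of such a multi-input, multi-task objective; the layers, loss weights, and target-CAM source below are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Sketch of a multi-task objective in the spirit of Debiased-CAM: the model
# predicts the label, the bias level, and a debiased CAM, with the auxiliary
# losses regularizing the explanation. All shapes/weights are illustrative.
class MultiTaskHead(nn.Module):
    def __init__(self, feat_dim=64, n_classes=10, cam_hw=7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.cls = nn.Linear(feat_dim, n_classes)        # primary task
        self.bias = nn.Linear(feat_dim, 1)               # aux: bias level
        self.cam = nn.Linear(feat_dim, cam_hw * cam_hw)  # aux: debiased CAM

    def forward(self, x):
        f = self.backbone(x)
        return self.cls(f), self.bias(f).squeeze(-1), self.cam(f)

model = MultiTaskHead()
x = torch.randn(8, 3, 32, 32)      # biased (e.g., blurred) inputs
y = torch.randint(0, 10, (8,))
bias_level = torch.rand(8)         # e.g., normalized blur level
target_cam = torch.randn(8, 49)    # CAM from an unbiased reference model

logits, bias_pred, cam_pred = model(x)
loss = (nn.functional.cross_entropy(logits, y)
        + 0.5 * nn.functional.mse_loss(bias_pred, bias_level)
        + 0.5 * nn.functional.mse_loss(cam_pred, target_cam))
loss.backward()
```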

Towards Relatable Explainable AI with the Perceptual Process

no code implementations • 28 Dec 2021 • Wencan Zhang, Brian Y. Lim

Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations.

counterfactual • Emotion Recognition • +1
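
Contrastive Saliency can be illustrated in a simplified, assumed form as the difference between saliency maps for the predicted class and a contrast class; for the toy linear classifier below, gradient-times-input saliency is exact:

```python
import numpy as np

# Simplified contrastive saliency: explain "why class p rather than class q"
# as the difference between the two classes' saliency maps. For a linear
# model, gradient-times-input saliency is just weights times input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))    # toy linear classifier: 3 classes, 8 features
x = rng.normal(size=8)

def saliency(cls):
    return W[cls] * x          # gradient-times-input for a linear logit

p, q = 0, 1                    # predicted class vs. contrast class
contrastive = saliency(p) - saliency(q)
print(np.round(contrastive, 2))  # positive entries favor p over q
```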

Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation

no code implementations • 21 Sep 2021 • Yunlong Wang, Priyadarshini Venkatesh, Brian Y. Lim

We propose Interpretable Directed Diversity to automatically predict ideation quality and diversity scores, and to provide AI explanations (Attribution, Contrastive Attribution, and Counterfactual Suggestions) as feedback on why ideations were scored (low) and how to get higher scores.

counterfactual
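
As a stand-in for the diversity score (an assumption for illustration, not the paper's metric), one could take the mean pairwise cosine distance between ideation embeddings, with attributions then explaining which phrases raise or lower it:

```python
import numpy as np

# Stand-in diversity scoring: mean pairwise cosine distance between
# ideation embeddings. Embeddings here are random placeholders; a real
# system would use a sentence encoder and attach attribution explanations.
def diversity(embs):
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = embs @ embs.T
    iu = np.triu_indices(len(embs), k=1)
    return float(np.mean(1.0 - sims[iu]))  # higher = more diverse

ideas = np.random.default_rng(1).normal(size=(5, 16))  # 5 ideations
print(round(diversity(ideas), 3))
```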

SalienTrack: providing salient information for semi-automated self-tracking feedback with model explanations

no code implementations • 21 Sep 2021 • Yunlong Wang, Jiaying Liu, Homin Park, Jordan Schultz-McArdle, Stephanie Rosenthal, Judy Kay, Brian Y. Lim

Finally, we created interfaces to present salient information and conducted a formative user study to gain insights into how SalienTrack could be integrated into an interface for reflection.

Nutrition

Exploiting Explanations for Model Inversion Attacks

no code implementations • ICCV 2021 • Xuejun Zhao, Wencan Zhang, Xiaokui Xiao, Brian Y. Lim

We study this risk for image-based model inversion attacks and identify several attack architectures with increasing performance for reconstructing private image data from model explanations.

Explainable artificial intelligence • Explainable Artificial Intelligence (XAI)
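
The attack setting can be sketched as a decoder that consumes the released prediction vector and explanation map and attempts to reconstruct the private input; the decoder below is an illustrative assumption, not one of the paper's attack architectures:

```python
import torch
import torch.nn as nn

# Sketch of an explanation-aware model inversion attack: a decoder takes the
# target model's prediction vector plus its saliency explanation and tries
# to reconstruct the private input image. Shapes and layers are illustrative.
class InversionDecoder(nn.Module):
    def __init__(self, n_classes=10, cam_hw=7, out_hw=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_classes + cam_hw * cam_hw, 256), nn.ReLU(),
            nn.Linear(256, out_hw * out_hw), nn.Sigmoid())
        self.out_hw = out_hw

    def forward(self, pred, cam):
        z = torch.cat([pred, cam.flatten(1)], dim=1)
        return self.net(z).view(-1, 1, self.out_hw, self.out_hw)

decoder = InversionDecoder()
pred = torch.softmax(torch.randn(4, 10), dim=1)  # released confidences
cam = torch.rand(4, 7, 7)                        # released explanation maps
recon = decoder(pred, cam)                       # attempted reconstruction
print(recon.shape)                               # torch.Size([4, 1, 32, 32])
```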

Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations

no code implementations • 23 Jan 2021 • Danding Wang, Wencan Zhang, Brian Y. Lim

Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference.

BIG-bench Machine Learning • Interpretable Machine Learning
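
A minimal occlusion-style example of feature attribution, illustrating the general idea rather than the paper's uncertainty-aware treatment: perturb each feature toward a baseline and record the change in the model output:

```python
import numpy as np

# Minimal occlusion-style feature attribution: the influence of each input
# feature is the drop in the model output when that feature is replaced by
# a baseline value. Model and data are toy placeholders.
def attribute(model, x, baseline):
    ref = model(x)
    scores = np.empty_like(x)
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]           # occlude one feature
        scores[i] = ref - model(x_pert)   # influence of feature i
    return scores

w = np.array([2.0, -1.0, 0.5])
model = lambda x: float(w @ x)            # toy linear model
x = np.array([1.0, 3.0, -2.0])
print(attribute(model, x, baseline=np.zeros(3)))  # -> [ 2. -3. -1.]
```

For this linear toy model each attribution is exactly weight times input, which makes the output easy to verify by hand.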

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning

no code implementations • 10 Dec 2020 • Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim

We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions.

BIG-bench Machine Learning • Self-Supervised Learning

NaMemo: Enhancing Lecturers' Interpersonal Competence of Remembering Students' Names

no code implementations • 21 Nov 2019 • Guang Jiang, Mengzhen Shi, Ying Su, Pengcheng An, Brian Y. Lim, Yunlong Wang

Addressing students by their names helps a teacher to start building rapport with students and thus facilitates their classroom participation.

Face Recognition
