Search Results for author: Wencan Zhang

Found 6 papers, 0 papers with code

i-Rebalance: Personalized Vehicle Repositioning for Supply Demand Balance

no code implementations • 9 Jan 2024 Haoyang Chen, Peiyan Sun, Qiyuan Song, Wanyuan Wang, Weiwei Wu, Wencan Zhang, Guanyu Gao, Yan Lyu

To jointly optimize supply-demand balance and driver preference satisfaction, i-Rebalance employs a sequential reposition strategy with dual DRL agents: a Grid Agent that determines the reposition order of idle vehicles, and a Vehicle Agent that provides personalized recommendations to each vehicle in that order.
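Based only on the abstract, the sequential dual-agent loop might look like the minimal Python sketch below. In the paper both agents are learned with deep RL; here their policies are stubbed with heuristics, and all class, field, and function names (Grid, Vehicle, order_idle_vehicles, recommend) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Grid:
    gid: int
    demand: int
    supply: int

@dataclass
class Vehicle:
    vid: int
    idle_time: int
    preference: dict = field(default_factory=dict)  # driver's taste per grid

def order_idle_vehicles(vehicles):
    # Stand-in for the Grid Agent: decide the reposition order of idle vehicles.
    return sorted(vehicles, key=lambda v: v.idle_time, reverse=True)

def recommend(vehicle, grids):
    # Stand-in for the Vehicle Agent: trade off a grid's supply-demand gap
    # against this driver's personal preference for it.
    return max(grids, key=lambda g: (g.demand - g.supply) + vehicle.preference.get(g.gid, 0))

def reposition_step(vehicles, grids):
    # Sequential strategy: each move updates supply before the next vehicle acts,
    # so later recommendations see the effect of earlier ones.
    for v in order_idle_vehicles(vehicles):
        target = recommend(v, grids)
        target.supply += 1
        print(f"vehicle {v.vid} -> grid {target.gid}")

grids = [Grid(0, demand=5, supply=1), Grid(1, demand=2, supply=3)]
vehicles = [Vehicle(0, idle_time=10, preference={1: 2.0}), Vehicle(1, idle_time=4)]
reposition_step(vehicles, grids)
```

The sequential update is the point of the sketch: because supply is adjusted inside the loop, the ordering chosen by the first agent changes what the second agent recommends.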

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning

no code implementations • 30 Jan 2022 Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim

We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions.
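A hedged sketch of the multi-task structure the abstract implies: a shared backbone with a primary classification head plus auxiliary heads that predict the explanation map and the bias level. The layer sizes, head designs, and loss combination below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DebiasedCAMNet(nn.Module):
    def __init__(self, num_classes: int, num_bias_levels: int):
        super().__init__()
        self.backbone = nn.Sequential(            # placeholder feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((7, 7)),
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)     # primary task
        self.cam_head = nn.Conv2d(16, 1, 1)                      # auxiliary: explanation map
        self.bias_head = nn.Linear(16 * 7 * 7, num_bias_levels)  # auxiliary: bias level

    def forward(self, x):
        feats = self.backbone(x)
        flat = feats.flatten(1)
        return self.classifier(flat), self.cam_head(feats), self.bias_head(flat)

model = DebiasedCAMNet(num_classes=10, num_bias_levels=4)
logits, cam, bias = model(torch.randn(2, 3, 64, 64))
# Training would combine the task loss with the auxiliary losses, e.g.
# loss = ce(logits, y) + mse(cam, unbiased_cam) + ce(bias, bias_level)
print(logits.shape, cam.shape, bias.shape)
```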

Towards Relatable Explainable AI with the Perceptual Process

no code implementations • 28 Dec 2021 Wencan Zhang, Brian Y. Lim

Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations.

Tasks: Counterfactual, Emotion Recognition (+1 more)
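As a rough illustration of the "Contrastive Saliency" component named in the abstract: a common way to attribute a prediction contrastively (why class p rather than class q) is to take the gradient of the logit difference. The toy model below is an assumption; the actual RexNet components are not reproduced here.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier

def contrastive_saliency(x, target: int, contrast: int):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Attribute the evidence for `target` over the rival `contrast` class.
    (logits[0, target] - logits[0, contrast]).backward()
    return x.grad.abs().squeeze(0)

x = torch.randn(1, 1, 28, 28)
sal = contrastive_saliency(x, target=3, contrast=8)
print(sal.shape)  # per-pixel contrastive attribution map
```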

Exploiting Explanations for Model Inversion Attacks

no code implementations • ICCV 2021 Xuejun Zhao, Wencan Zhang, Xiaokui Xiao, Brian Y. Lim

We study this risk for image-based model inversion attacks and identify several attack architectures of increasing performance for reconstructing private image data from model explanations.

Tasks: Explainable Artificial Intelligence (XAI)
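A hedged sketch of the studied threat: an attacker trains a reconstruction network that fuses the target model's prediction vector with its released explanation map to recover the private input. The architecture below is illustrative only, not one of the paper's attack architectures, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class InversionAttack(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Embed the prediction vector into the same 2D grid as the explanation.
        self.embed_pred = nn.Linear(num_classes, 16 * 16)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2, 16, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),  # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, pred, saliency):
        # pred: (B, num_classes); saliency: (B, 1, 16, 16) explanation map.
        p = self.embed_pred(pred).view(-1, 1, 16, 16)
        return self.decoder(torch.cat([p, saliency], dim=1))

attack = InversionAttack(num_classes=10)
recon = attack(torch.softmax(torch.randn(4, 10), dim=1), torch.rand(4, 1, 16, 16))
print(recon.shape)  # (4, 3, 64, 64) reconstructed images
```

The design choice worth noting is the fusion step: the explanation map supplies spatial detail that the prediction vector alone lacks, which is exactly why releasing explanations can increase inversion risk.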

Show or Suppress? Managing Input Uncertainty in Machine Learning Model Explanations

no code implementations • 23 Jan 2021 Danding Wang, Wencan Zhang, Brian Y. Lim

Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference.

Tasks: BIG-bench Machine Learning, Interpretable Machine Learning
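The abstract's first sentence describes feature attribution in general; one common concrete instance is gradient-x-input, sketched below. This illustrates the base technique only, not the paper's uncertainty-aware handling of it, and the toy model is an assumption.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)  # toy inference model
x = torch.tensor([[0.5, -1.2, 3.0, 0.1]], requires_grad=True)
model(x).sum().backward()
attribution = (x.grad * x.detach()).squeeze(0)  # gradient x input per feature
print(attribution)  # one influence score per measured input feature value
```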

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning

no code implementations • 10 Dec 2020 Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim

We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions.

Tasks: BIG-bench Machine Learning, Self-Supervised Learning
