Search Results for author: Caleb Chen Cao

Found 12 papers, 5 papers with code

Towards Fine-Grained Explainability for Heterogeneous Graph Neural Network

1 code implementation · 23 Dec 2023 · Tong Li, Jiale Deng, Yanyan Shen, Luyu Qiu, Yongxiang Huang, Caleb Chen Cao

Heterogeneous graph neural networks (HGNs) are prominent approaches to node classification tasks on heterogeneous graphs.

Node Classification

Model Debiasing via Gradient-based Explanation on Representation

no code implementations · 20 May 2023 · Jindi Zhang, Luning Wang, Dan Su, Yongxiang Huang, Caleb Chen Cao, Lei Chen

Machine learning systems produce biased results towards certain demographic groups, known as the fairness problem.

Disentanglement Fairness

Towards Efficient Visual Simplification of Computational Graphs in Deep Neural Networks

no code implementations · 21 Dec 2022 · Rusheng Pan, Zhiyong Wang, Yating Wei, Han Gao, Gongchang Ou, Caleb Chen Cao, Jingli Xu, Tong Xu, Wei Chen

A computational graph in a deep neural network (DNN) denotes a specific data flow diagram (DFD) composed of many tensors and operators.
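To make the snippet's terminology concrete, here is a minimal sketch (not from the paper) of a DNN computational graph as a data flow diagram: operator nodes connected by tensor-producing edges, traversed in topological order as a graph visualizer would. All names (`Op`, `topo`, the toy layer) are illustrative assumptions.

```python
class Op:
    """An operator node; its inputs are other Ops whose output tensors it consumes."""
    def __init__(self, name, *inputs):
        self.name = name
        self.inputs = list(inputs)

# Build a tiny graph for one dense layer: x @ W + b, then ReLU.
x = Op("input_x")
w = Op("weight_W")
b = Op("bias_b")
matmul = Op("matmul", x, w)
add = Op("add", matmul, b)
relu = Op("relu", add)

def topo(op, seen=None, order=None):
    """Return the operators of the DFD in topological (execution) order."""
    seen = seen if seen is not None else set()
    order = order if order is not None else []
    for inp in op.inputs:
        if inp not in seen:
            topo(inp, seen, order)
    seen.add(op)
    order.append(op)
    return order

print([op.name for op in topo(relu)])
# → ['input_x', 'weight_W', 'matmul', 'bias_b', 'add', 'relu']
```

Real frameworks represent graphs the same way at much larger scale, which is what makes visual simplification necessary.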

ViT-CX: Causal Explanation of Vision Transformers

1 code implementation · 6 Nov 2022 · Weiyan Xie, Xiao-Hui Li, Caleb Chen Cao, Nevin L. Zhang

Despite the popularity of Vision Transformers (ViTs) and eXplainable AI (XAI), only a few explanation methods have been designed specifically for ViTs thus far.

Explainable Artificial Intelligence (XAI)

Example Perplexity

1 code implementation · 16 Mar 2022 · Nevin L. Zhang, Weiyan Xie, Zhi Lin, Guanfang Dong, Xiao-Hui Li, Caleb Chen Cao, Yunpeng Wang

Some examples are easier for humans to classify than others.

TDLS: A Top-Down Layer Searching Algorithm for Generating Counterfactual Visual Explanation

no code implementations · 8 Aug 2021 · Cong Wang, Haocheng Han, Caleb Chen Cao

Explainability of AI, along with the fairness of algorithmic decisions and the transparency of decision models, is becoming increasingly important.

Counterfactual Explanation +2

Resisting Out-of-Distribution Data Problem in Perturbation of XAI

no code implementations · 27 Jul 2021 · Luyu Qiu, Yi Yang, Caleb Chen Cao, Jing Liu, Yueyuan Zheng, Hilary Hei Ting Ngai, Janet Hsiao, Lei Chen

Our solution also resolves a fundamental problem with the faithfulness indicator, a commonly used evaluation metric for XAI algorithms that appears to be sensitive to the OoD issue.

Explainable Artificial Intelligence (XAI)

Quantitative Evaluations on Saliency Methods: An Experimental Study

no code implementations · 31 Dec 2020 · Xiao-Hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen

It has long been debated that eXplainable AI (XAI) is an important topic, but it lacks rigorous definitions and fair metrics.

Explainable Artificial Intelligence (XAI)
