1 code implementation • 23 Dec 2023 • Tong Li, Jiale Deng, Yanyan Shen, Luyu Qiu, Yongxiang Huang, Caleb Chen Cao
Heterogeneous graph neural networks (HGNs) are prominent approaches to node classification tasks on heterogeneous graphs.
1 code implementation • 10 Jun 2023 • Weiyan Xie, Xiao-Hui Li, Zhi Lin, Leonard K. M. Poon, Caleb Chen Cao, Nevin L. Zhang
The need to explain the output of a deep neural network classifier is now widely recognized.
no code implementations • 20 May 2023 • Jindi Zhang, Luning Wang, Dan Su, Yongxiang Huang, Caleb Chen Cao, Lei Chen
Machine learning systems can produce results biased against certain demographic groups, a phenomenon known as the fairness problem.
1 code implementation • 13 May 2023 • Han Gao, Kaican Li, Weiyan Xie, Zhi Lin, Yongxiang Huang, Luning Wang, Caleb Chen Cao, Nevin L. Zhang
In this paper, we consider a third, lesser-known setting where a training domain is endowed with a collection of pairs of examples that share the same semantic information.
no code implementations • 21 Dec 2022 • Rusheng Pan, Zhiyong Wang, Yating Wei, Han Gao, Gongchang Ou, Caleb Chen Cao, Jingli Xu, Tong Xu, Wei Chen
A computational graph in a deep neural network (DNN) denotes a specific data flow diagram (DFD) composed of many tensors and operators.
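The data-flow view described above can be sketched as a small graph whose nodes are tensors and operators. This is a minimal illustration of the concept, not the paper's actual representation; all class and tensor names here are hypothetical.

```python
# Minimal sketch of a DNN computational graph as a data-flow diagram (DFD):
# tensor nodes carry data, operator nodes consume and produce tensors.
# Names are illustrative, not taken from the paper.

class Tensor:
    def __init__(self, name, shape):
        self.name = name
        self.shape = shape

class Operator:
    def __init__(self, name, inputs, outputs):
        self.name = name          # e.g. "MatMul", "ReLU"
        self.inputs = inputs      # list of Tensor consumed
        self.outputs = outputs    # list of Tensor produced

# A two-layer MLP expressed as a tiny data-flow diagram
x  = Tensor("x",  (1, 784))
w1 = Tensor("w1", (784, 128))
h  = Tensor("h",  (1, 128))
w2 = Tensor("w2", (128, 10))
y  = Tensor("y",  (1, 10))

# Operators listed in topological (data-dependency) order
graph = [
    Operator("MatMul", [x, w1], [h]),
    Operator("ReLU",   [h],     [h]),
    Operator("MatMul", [h, w2], [y]),
]

op_names = [op.name for op in graph]
```

Visualization tools like the one this entry describes typically lay out exactly this kind of tensor/operator dependency structure.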
1 code implementation • 6 Nov 2022 • Weiyan Xie, Xiao-Hui Li, Caleb Chen Cao, Nevin L. Zhang
Despite the popularity of Vision Transformers (ViTs) and eXplainable AI (XAI), only a few explanation methods have been designed specifically for ViTs thus far.
2 code implementations • International Conference on Data Engineering 2022 • Shendi Wang, Haoyang Li, Caleb Chen Cao, Xiao-Hui Li, Ng Ngai Fai, Jianxin Liu, Xun Xue, Hu Song, Jinyu Li, Guangye Gu, Lei Chen
Recently, neural network-based models have been widely used in recommender systems (RS).
1 code implementation • 16 Mar 2022 • Nevin L. Zhang, Weiyan Xie, Zhi Lin, Guanfang Dong, Xiao-Hui Li, Caleb Chen Cao, Yunpeng Wang
Some examples are easier for humans to classify than others.
no code implementations • Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021 • Xiao-Hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Caleb Chen Cao, Lei Chen
It has long been debated that eXplainable AI (XAI) is an important technology for model and data exploration, validation, and debugging.
no code implementations • 8 Aug 2021 • Cong Wang, Haocheng Han, Caleb Chen Cao
Explainability of AI, along with the fairness of algorithmic decisions and the transparency of decision models, is becoming increasingly important.
no code implementations • 27 Jul 2021 • Luyu Qiu, Yi Yang, Caleb Chen Cao, Jing Liu, Yueyuan Zheng, Hilary Hei Ting Ngai, Janet Hsiao, Lei Chen
In addition, our solution resolves a fundamental problem with the faithfulness indicator, a commonly used evaluation metric for XAI algorithms that appears to be sensitive to the OoD issue.
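Faithfulness indicators of this kind typically mask the features an explanation marks as important and measure the resulting drop in the model's score; the OoD sensitivity arises because the masked inputs fall outside the training distribution. Below is a generic deletion-style sketch of such a metric, not the paper's specific indicator; all names are hypothetical.

```python
def deletion_faithfulness(model, x, importance, k, baseline=0.0):
    """Drop in the model's score after masking the k most important features.

    Replacing features with a constant baseline pushes the input off the
    data manifold -- this is the OoD sensitivity discussed above.
    """
    # Indices of the k features the explanation ranks highest
    top_k = sorted(range(len(x)), key=lambda i: importance[i], reverse=True)[:k]
    x_masked = list(x)
    for i in top_k:
        x_masked[i] = baseline          # mask with an (out-of-distribution) baseline
    return model(x) - model(x_masked)   # larger drop = more "faithful" explanation

# Toy linear model: score is a weighted sum of features
weights = [0.5, 0.1, 0.9, 0.0]
model = lambda x: sum(w * v for w, v in zip(weights, x))

x = [1.0, 1.0, 1.0, 1.0]
importance = weights  # for a linear model, the weights are a perfect explanation
drop = deletion_faithfulness(model, x, importance, k=2)  # masks features 2 and 0
```

In this toy case, masking the two highest-weight features reduces the score from 1.5 to 0.1, so the measured drop is 1.4.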
no code implementations • 31 Dec 2020 • Xiao-Hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen
It has long been debated that eXplainable AI (XAI) is an important topic, but it lacks rigorous definitions and fair metrics.