1 code implementation • 23 Apr 2022 • Xiang Wang, Yingxin Wu, An Zhang, Fuli Feng, Xiangnan He, Tat-Seng Chua
Such a reward accounts for the dependency between the newly added edge and the previously added edges, thus reflecting whether they work together as a coalition to pursue better explanations.
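To make the coalition idea concrete, here is a minimal sketch of such a reward: the gain of an edge is measured conditionally on the edges already selected, so edges that only help in combination still get credit. `predict_proba` is an assumed stand-in for the trained GNN's confidence on the target class; none of these names come from the paper's codebase.

```python
# Hypothetical sketch: coalition-aware reward for sequential edge selection.
# `predict_proba(edges)` is assumed to return the GNN's confidence on the
# target class when only `edges` of the input graph are kept.

def coalition_reward(predict_proba, selected_edges: set, new_edge) -> float:
    """Marginal gain of `new_edge` conditioned on previously added edges."""
    before = predict_proba(frozenset(selected_edges))
    after = predict_proba(frozenset(selected_edges | {new_edge}))
    return after - before  # positive when the new edge cooperates with the coalition
```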
1 code implementation • ICLR 2022 • Ying-Xin Wu, Xiang Wang, An Zhang, Xiangnan He, Tat-Seng Chua
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features -- a rationale -- that guides the model prediction.
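As a rough illustration (not the paper's architecture), a rationale can be read off as the top-k edges under a learned importance mask; `edge_logits` is an assumed output of some edge-scoring module:

```python
import torch

# Illustrative sketch only: score edges with a learned mask and keep the
# top-k as the rationale subgraph.

def extract_rationale(edge_logits: torch.Tensor, k: int) -> torch.Tensor:
    """Return the indices of the k highest-scoring edges (the rationale)."""
    probs = torch.sigmoid(edge_logits)   # per-edge importance in [0, 1]
    return probs.topk(k).indices         # small subset meant to guide the prediction
```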
no code implementations • 21 Jan 2022 • Ying-Xin Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua
In this work, we propose Deconfounded Subgraph Evaluation (DSE), which assesses the causal effect of an explanatory subgraph on the model prediction.
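A back-of-the-envelope sketch of the intuition, not the paper's exact estimator: rather than scoring a subgraph in isolation, which exposes the model to out-of-distribution inputs, average its predictions over sampled surrounding contexts. Both `model` and the helper `sample_context` are assumptions for illustration.

```python
# Rough illustration of deconfounded evaluation: marginalize over plausible
# contexts instead of feeding the bare subgraph to the model.
# `model` maps an edge set to a prediction score; `sample_context` is an
# assumed generator of plausible complements.

def deconfounded_score(model, subgraph: set, sample_context, n_samples: int = 20) -> float:
    total = 0.0
    for _ in range(n_samples):
        context = sample_context(subgraph)   # plausible surrounding structure
        total += model(subgraph | context)   # score the completed graph
    return total / n_samples
```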
1 code implementation • NeurIPS 2021 • Xiang Wang, Yingxin Wu, An Zhang, Xiangnan He, Tat-Seng Chua
A performant paradigm for multi-grained explainability has so far been lacking, and is thus the focus of our work.
no code implementations • 12 Apr 2021 • An Zhang, Xiang Wang, Chengfang Fang, Jie Shi, Tat-Seng Chua, Zehua Chen
Gradient-based attribution methods can aid in the understanding of convolutional neural networks (CNNs).
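For reference, a minimal saliency-map computation in PyTorch, the kind of gradient-based attribution the paper studies (a generic baseline, not the paper's own method):

```python
import torch

# Plain gradient saliency: attribute the class score to input pixels via
# the gradient magnitude. A standard baseline, shown only for illustration.

def saliency_map(model, image: torch.Tensor, target_class: int) -> torch.Tensor:
    image = image.clone().requires_grad_(True)    # image: (C, H, W)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()                              # d(score) / d(pixel)
    return image.grad.abs().amax(dim=0)           # (H, W) per-pixel attribution
```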
no code implementations • 1 Jan 2021 • Xiang Wang, Yingxin Wu, An Zhang, Xiangnan He, Tat-Seng Chua
In this work, we focus on causal interpretability in GNNs and propose a method, Causal Screening, from a cause-effect perspective.
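In the spirit of the title (the details here are ours, not the paper's), screening can be pictured as a greedy loop that repeatedly keeps the candidate edge with the largest measured effect on the model output, conditioned on what has been kept so far:

```python
# Hedged sketch of greedy causal screening. `effect(kept, e)` is an assumed
# helper measuring how much adding edge `e` to the current explanation changes
# the model's output; it stands in for the paper's actual causal-effect metric.

def causal_screening(candidate_edges: set, effect, budget: int) -> set:
    explanation = set()
    for _ in range(budget):
        remaining = candidate_edges - explanation
        if not remaining:
            break
        best = max(remaining, key=lambda e: effect(explanation, e))
        explanation.add(best)
    return explanation
```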
2 code implementations • 3 Jul 2020 • Xiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, Tat-Seng Chua
Such a uniform approach to modeling user interests easily results in suboptimal representations, failing to capture diverse relationships or disentangle user intents.