no code implementations • 23 Sep 2023 • Xinhao Zheng, Huiqi Deng, Bo Fan, Quanshi Zhang
This paper aims to develop a new attribution method that explains the conflict between individual variables' attributions and their coalition's attribution from an entirely new perspective.
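The conflict between per-variable attributions and a coalition's attribution can be illustrated with exact Shapley values on a toy set function (an illustrative sketch, not this paper's method): merging two variables into a single "coalition player" changes the credit they jointly receive.

```python
from itertools import combinations
from math import factorial

def shapley(f, players):
    """Exact Shapley values of a set function f over the given players."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # Weight of a coalition of size r in the Shapley formula.
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += w * (f(S | {i}) - f(S))
        phi[i] = total
    return phi

# Toy game: the output fires only if variable 2 is present
# together with at least one of variables 0 and 1.
f = lambda S: 1.0 if 2 in S and (0 in S or 1 in S) else 0.0

phi = shapley(f, [0, 1, 2])          # phi[0] = phi[1] = 1/6

def g(S):
    """Same game, with variables 0 and 1 merged into one player 'A'."""
    T = set()
    if "A" in S:
        T |= {0, 1}
    if 2 in S:
        T.add(2)
    return f(frozenset(T))

phi_merged = shapley(g, ["A", 2])    # phi_merged['A'] = 1/2
```

Here the two variables receive 1/6 + 1/6 = 1/3 individually, but 1/2 when attributed as one coalition — the kind of inconsistency the paper sets out to explain.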
no code implementations • 17 Sep 2023 • Zirui He, Huiqi Deng, Haiyan Zhao, Ninghao Liu, Mengnan Du
Recent research has shown that large language models rely on spurious correlations in the data for natural language understanding (NLU) tasks.
Natural Language Understanding • Out-of-Distribution Generalization
no code implementations • 2 Sep 2023 • Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du
For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge.
no code implementations • 2 Mar 2023 • Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Ziwei Yang, Zheyang Li, Quanshi Zhang
Various attribution methods have been developed to explain deep neural networks (DNNs) by inferring the attribution/importance/contribution score of each input variable to the final output.
1 code implementation • 25 Feb 2023 • Qihan Ren, Huiqi Deng, Yunuo Chen, Siyu Lou, Quanshi Zhang
In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts are less likely to be encoded by the BNN.
no code implementations • 25 Feb 2023 • Huilin Zhou, Hao Zhang, Huiqi Deng, Dongrui Liu, Wen Shen, Shih-Han Chan, Quanshi Zhang
Therefore, in this paper, we investigate the generalization power of each interactive concept, and we use the generalization power of these concepts to explain the generalization power of the entire DNN.
no code implementations • 2 Dec 2021 • Dongrui Liu, Shaobo Wang, Jie Ren, Kangrui Wang, Sheng Yin, Huiqi Deng, Quanshi Zhang
In this paper, we focus on a typical two-phase phenomenon in the learning of multi-layer perceptrons (MLPs), and we aim to explain the reason for the decrease of feature diversity in the first phase.
1 code implementation • ICLR 2022 • Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang
This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs.
1 code implementation • CVPR 2023 • Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang
This paper aims to illustrate the concept-emerging phenomenon in a trained DNN.
no code implementations • 28 May 2021 • Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Xia Hu
However, the attribution problem itself has not been well defined, and there is no unified guideline for the contribution-assignment process.
no code implementations • 14 Apr 2021 • Huiqi Deng, Na Zou, Weifu Chen, Guocan Feng, Mengnan Du, Xia Hu
The basic idea is to learn a source signal by back-propagation such that the mutual information between the input and the output is preserved, as much as possible, in the mutual information between the input and the source signal.
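The back-propagation idea can be sketched with a toy NumPy example (an assumption-laden illustration, not the paper's actual objective): for a linear model, learn an input-space signal by gradient descent so that it reproduces the model's output while staying small, which concentrates the signal on the dimensions the model actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "network": only the first two input dimensions affect the output.
W = np.array([[1.0, -2.0, 0.0, 0.0]])
f = lambda x: W @ x

x = rng.normal(size=4)   # the input to explain
s = np.zeros(4)          # source signal, learned by gradient descent
lam = 0.1                # pressure to keep the signal small

for _ in range(500):
    # Surrogate objective: keep f(s) close to f(x) (preserve the
    # output-relevant content of x) while penalizing ||s||^2.
    grad = 2 * W.T @ (f(s) - f(x)) + 2 * lam * s
    s -= 0.05 * grad

# The learned signal is exactly zero on the two dimensions the model ignores.
```

The penalty plays the role of the information constraint in the sketch: without it, `s = x` would trivially preserve everything.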
no code implementations • 21 Aug 2020 • Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Xia Hu
Attribution methods have been developed to understand the decision-making process of machine learning models, especially deep neural networks, by assigning importance scores to individual features.
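One of the simplest such attribution methods, gradient × input, can be shown in a few lines (a generic illustration of importance scores, not the specific method of this paper): for a linear model the scores decompose the output exactly.

```python
import numpy as np

# Hypothetical linear model: weights and bias are made up for illustration.
w = np.array([0.5, -1.5, 2.0])
b = 0.25
model = lambda x: w @ x + b

x = np.array([1.0, 2.0, -1.0])
grad = w                  # d(model)/dx is constant for a linear model
attributions = grad * x   # per-feature importance scores

# For a linear model, the scores sum to model(x) - b,
# i.e. they fully account for the non-bias part of the output.
```

For deep networks the same recipe uses the back-propagated gradient at `x`, and the exact-decomposition property no longer holds, which is one motivation for the more refined methods surveyed here.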