no code implementations • 4 May 2022 • Jie Ren, Mingjie Li, Meng Zhou, Shih-Han Chan, Quanshi Zhang
Based on the proposed metrics, we analyze two typical phenomena in how the transformation complexity changes during training, and explore the ceiling of a DNN's complexity.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, Huawei Shen, Hui Zhang, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan Yao, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, Liwei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks has become a popular paradigm.
no code implementations • 2 Dec 2021 • Dongrui Liu, Shaobo Wang, Jie Ren, Kangrui Wang, Sheng Yin, Quanshi Zhang
We explain such a two-phase phenomenon in terms of the learning dynamics of the MLP.
1 code implementation • NeurIPS 2021 • Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang
This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.
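For reference, a sketch of the multi-order interaction this line of work relies on (notation reconstructed from the papers' setup, where $f(S)$ denotes the network output when only the variables in $S \subseteq N$ are kept and the rest are masked to baseline values):

$$
I^{(m)}(i,j)=\mathbb{E}_{S\subseteq N\setminus\{i,j\},\,|S|=m}\big[\Delta f(i,j,S)\big],\qquad
\Delta f(i,j,S)=f(S\cup\{i,j\})-f(S\cup\{i\})-f(S\cup\{j\})+f(S).
$$

The order $m$ is the size of the context $S$: low-order interactions capture local collaborations among a few variables, while high-order interactions capture more global collaborations.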
no code implementations • 11 Nov 2021 • Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang
This paper proposes a hierarchical and symbolic And-Or graph (AOG) to objectively explain the internal logic encoded by a well-trained deep model for inference.
no code implementations • ICLR 2022 • Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang
This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs.
1 code implementation • 5 Nov 2021 • Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang
This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.
no code implementations • NeurIPS 2021 • Mingjie Li, Shaobo Wang, Quanshi Zhang
This paper proposes a method to visualize the discrimination power of intermediate-layer visual patterns encoded by a DNN.
no code implementations • NeurIPS 2021 • Wen Shen, Qihan Ren, Dongrui Liu, Quanshi Zhang
In this paper, we evaluate the quality of knowledge representations encoded in deep neural networks (DNNs) for 3D point cloud processing.
no code implementations • 29 Sep 2021 • Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Xin Jin, Quanshi Zhang
This paper proposes a hypothesis to analyze the underlying reason for the cognitive difficulty of an image from two perspectives: a cognitive image usually makes a DNN strongly activated by cognitive concepts, and discarding massive non-cognitive concepts may also help the DNN focus on cognitive concepts.
no code implementations • 29 Sep 2021 • Lu Chen, Renjie Chen, Hang Guo, Yuan Luo, Quanshi Zhang, Yisen Wang
Adversarial examples have attracted significant attention over the years, yet they remain insufficiently understood, especially when analyzing their performance in combination with adversarial training.
no code implementations • 29 Sep 2021 • Jie Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang
In the computation of Shapley values, people usually set an input variable to its baseline value to represent the absence of this variable.
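As a minimal sketch of this convention (not the paper's own code; the toy model `f` and the baseline are placeholders), a Monte Carlo Shapley estimator that represents the absence of a variable by resetting it to its baseline value:

```python
import numpy as np

def shapley_values(f, x, baseline, n_samples=200, seed=0):
    """Monte Carlo Shapley values; a variable's absence is modeled by
    replacing it with its baseline value."""
    rng = np.random.default_rng(seed)
    n = len(x)
    phi = np.zeros(n)
    for _ in range(n_samples):
        z = baseline.astype(float).copy()
        prev = f(z)
        for i in rng.permutation(n):
            z[i] = x[i]               # variable i enters the coalition
            cur = f(z)
            phi[i] += cur - prev      # marginal contribution of i
            prev = cur
    return phi / n_samples

# Toy usage: the estimates sum to f(x) - f(baseline).
f = lambda v: float(v[0] * v[1] + v[2])
print(shapley_values(f, np.array([1.0, 2.0, 3.0]), np.zeros(3)))
```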
1 code implementation • 22 Sep 2021 • Yuxiang Wu, Shang Wu, Xin Wang, Chengtian Lang, Quanshi Zhang, Quan Wen, Tianqi Xu
Second, 2-dimensional neuronal regions are fused into 3-dimensional neuron entities.
no code implementations • ICCV 2021 • Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, Quanshi Zhang
This paper aims to explain adversarial attacks in terms of how adversarial perturbations contribute to the attacking task.
no code implementations • 31 Jul 2021 • Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Quanshi Zhang
This paper proposes a hypothesis for aesthetic appreciation: aesthetic images make a neural network strengthen salient concepts and discard inessential concepts.
no code implementations • 16 Jul 2021 • Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang
This workshop takes a special interest in theoretical foundations, limitations, and new application trends in the scope of XAI.
1 code implementation • 9 Jul 2021 • Wen Shen, Zhihua Wei, Shikun Huang, BinBin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang
A reasonable definition of semantic interpretability presents the core challenge in explainable AI.
no code implementations • 21 Jun 2021 • Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang
In this paper, we rethink how a DNN encodes visual concepts of different complexities from a new perspective, i.e., the game-theoretic multi-order interactions between pixels in an image.
no code implementations • 22 May 2021 • Jie Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang
In this paper, we revisit the feature representation of a deep model from the perspective of game theory, and use the multi-variate interaction patterns of input variables to formulate the no-signal state of an input variable.
1 code implementation • 12 Mar 2021 • Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang
This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.
no code implementations • ICLR 2021 • Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang
Experimental results on various DNNs and datasets have shown that the interaction loss can effectively improve the utility of dropout and boost the performance of DNNs.
no code implementations • 1 Jan 2021 • Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang
This paper aims to define, visualize, and analyze the feature complexity that is learned by a DNN.
no code implementations • ICLR 2021 • Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang
We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.
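A rough way to operationalize this finding (a reconstruction, not necessarily the paper's exact formulation; $\lambda$, the interaction term $I_{ij}$, and the norm constraint are assumptions) is to penalize the interactions inside the perturbation during the attack, which should then increase transferability:

$$
\max_{\delta}\;\ell\big(h(x+\delta),\,y\big)\;-\;\lambda\sum_{i\neq j} I_{ij}(\delta)
\qquad \text{s.t.}\;\|\delta\|_p\le\epsilon.
$$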
no code implementations • 1 Jan 2021 • Jie Ren, Mingjie Li, Meng Zhou, Shih-Han Chan, Zexu Liu, Quanshi Zhang
Based on the proposed metrics, we analyze two typical phenomena in how the transformation complexity changes during training, and explore the ceiling of a DNN's complexity.
no code implementations • 28 Oct 2020 • Hao Zhang, Xu Cheng, Yiting Chen, Quanshi Zhang
In this study, we define interaction components of different orders between two input variables based on game theory.
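In this framework, related papers in this series state that the overall pairwise interaction decomposes into an average of order-specific components (the identity below is reproduced from that line of work and should be checked against this paper's exact notation):

$$
I(i,j)=\frac{1}{n-1}\sum_{m=0}^{n-2} I^{(m)}(i,j),
$$

where $I^{(m)}(i,j)$ is the interaction measured under contexts of size $m$.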
no code implementations • 10 Oct 2020 • Hao Zhang, Yichen Xie, Longjie Zheng, Die Zhang, Quanshi Zhang
In this paper, we define and quantify the significance of interactions among multiple input variables of the DNN.
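One standard formalization of multi-variable interaction that this line of work builds on is the Shapley interaction index of Grabisch and Roubens (stated here for reference; the paper's own definition may differ in detail). For a coalition $S\subseteq N$,

$$
I(S)=\sum_{T\subseteq N\setminus S}\frac{(n-|T|-|S|)!\,|T|!}{(n-|S|+1)!}\sum_{L\subseteq S}(-1)^{|S|-|L|}\,f(L\cup T).
$$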
1 code implementation • 8 Oct 2020 • Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang
We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.
no code implementations • 24 Sep 2020 • Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang
This paper aims to understand and improve the utility of the dropout operation from the perspective of game-theoretic interactions.
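A minimal sketch of the pairwise game-theoretic interaction underlying this perspective (illustrative only: the model `f`, the baseline, and the i.i.d. context sampling are simplifying assumptions):

```python
import numpy as np

def pairwise_interaction(f, x, baseline, i, j, n_contexts=100, seed=0):
    """Sampled estimate of E_S[f(S+ij) - f(S+i) - f(S+j) + f(S)],
    the game-theoretic interaction between variables i and j."""
    rng = np.random.default_rng(seed)
    others = [k for k in range(len(x)) if k not in (i, j)]
    total = 0.0
    for _ in range(n_contexts):
        s = baseline.astype(float).copy()
        for k in others:              # sample a random context S
            if rng.random() < 0.5:
                s[k] = x[k]
        s_i, s_j, s_ij = s.copy(), s.copy(), s.copy()
        s_i[i], s_j[j] = x[i], x[j]
        s_ij[i], s_ij[j] = x[i], x[j]
        total += f(s_ij) - f(s_i) - f(s_j) + f(s)
    return total / n_contexts
```

In the authors' account, dropout suppresses such interactions as a side effect, and the interaction loss proposed in the companion paper controls their strength more explicitly than dropout does.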
no code implementations • 11 Sep 2020 • Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang
Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but rather improves robustness, and that both weight inheritance from the lottery ticket and adversarial training improve model robustness in network pruning.
1 code implementation • 29 Jun 2020 • Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang
This paper aims to define, quantify, and analyze the feature complexity that is learned by a DNN.
no code implementations • 29 Jun 2020 • Die Zhang, Huilin Zhou, Hao Zhang, Xiaoyi Bao, Da Huo, Ruizhao Chen, Xu Cheng, Mengyue Wu, Quanshi Zhang
This paper proposes a method to disentangle and quantify interactions among words that are encoded inside a DNN for natural language processing.
no code implementations • 21 Jun 2020 • Hao Zhang, Yiting Chen, Haotian Ma, Xu Cheng, Qihan Ren, Liyao Xiang, Jie Shi, Quanshi Zhang
Compared to the traditional neural network, the RENN uses d-ary vectors/tensors as features, in which each element is a d-ary number.
no code implementations • 18 Mar 2020 • Hao Zhang, Yi-Ting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang
We propose a method to revise the neural network to construct the quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.
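The construction in the paper is more involved, but the underlying algebra is quaternion rotation: a feature rotated by a secret unit quaternion cannot be recovered without that quaternion. A self-contained sketch of that algebra (illustrative only; not the paper's QNN code):

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, q):
    """Rotate the pure quaternion v by the unit quaternion q: q v q*."""
    return hamilton(hamilton(q, v), q * np.array([1.0, -1.0, -1.0, -1.0]))

theta = np.pi / 3                                   # secret rotation angle
q = np.array([np.cos(theta/2), np.sin(theta/2), 0.0, 0.0])
v = np.array([0.0, 1.0, 2.0, 3.0])                  # a 3D feature as a pure quaternion
print(rotate(v, q))                                 # what an eavesdropper would see
```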
no code implementations • CVPR 2020 • Xu Cheng, Zhefan Rao, Yilan Chen, Quanshi Zhang
In contrast, in the scenario of learning from raw data, the DNN learns visual concepts sequentially.
no code implementations • 18 Dec 2019 • Shuang Zhang, Liyao Xiang, CongCong Li, YiXuan Wang, Quanshi Zhang, Wei Wang, Bo Li
Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market.
no code implementations • 20 Nov 2019 • Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang
This paper proposes a set of criteria to evaluate the objectivity of explanation methods for neural networks, which is crucial for the development of explainable AI but also presents significant challenges.
1 code implementation • CVPR 2021 • Wen Shen, Zhihua Wei, Shikun Huang, BinBin Zhang, Panyue Chen, Ping Zhao, Quanshi Zhang
In this paper, we diagnose deep neural networks for 3D point cloud processing to explore the utility of different intermediate-layer network architectures.
1 code implementation • ECCV 2020 • Wen Shen, BinBin Zhang, Shikun Huang, Zhihua Wei, Quanshi Zhang
This paper proposes a set of rules to revise various neural networks for 3D point cloud processing into rotation-equivariant quaternion neural networks (REQNNs).
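Rotation equivariance here means that rotating the input point cloud rotates the intermediate features correspondingly, rather than scrambling them; formally, for a layer (or network) $\Phi$ and any 3D rotation $R$:

$$
\Phi(R\cdot X)=R\cdot\Phi(X).
$$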
no code implementations • ICLR 2020 • Ruofan Liang, Tianlin Li, Longfei Li, Jing Wang, Quanshi Zhang
As a generic tool, our method can be broadly used for different applications.
no code implementations • 10 Jun 2019 • Haotian Ma, Yinqing Zhang, Fan Zhou, Quanshi Zhang
This paper presents a method to explain how input information is discarded through intermediate layers of a neural network during the forward propagation, in order to quantify and diagnose knowledge representations of pre-trained deep neural networks.
1 code implementation • ICLR 2020 • Liyao Xiang, Haotian Ma, Hao Zhang, Yifan Zhang, Jie Ren, Quanshi Zhang
Previous studies have found that an adversary attacker can often infer unintended input information from intermediate-layer features.
no code implementations • 25 Jan 2019 • Quanshi Zhang, Lixin Fan, Bolei Zhou
This is the Proceedings of the AAAI 2019 Workshop on Network Interpretability for Deep Learning.
no code implementations • 21 Jan 2019 • Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu
This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.
no code implementations • 21 Jan 2019 • Quanshi Zhang, Yu Yang, Ying Nian Wu
This paper presents an unsupervised method to learn a neural network, namely an explainer, to interpret a pre-trained convolutional neural network (CNN), i.e., the explainer uses interpretable visual concepts to explain features in middle conv-layers of a CNN.
no code implementations • 8 Jan 2019 • Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang
In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network.
no code implementations • 8 Jan 2019 • Quanshi Zhang, Xin Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu
This paper proposes a generic method to learn interpretable convolutional filters in a deep convolutional neural network (CNN) for object classification, where each interpretable filter encodes features of a specific object part.
no code implementations • 18 Dec 2018 • Quanshi Zhang, Xin Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu
This paper introduces a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside conv-layers of a pre-trained CNN.
no code implementations • 18 Dec 2018 • Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
The AOG associates each object part with certain neural units in feature maps of conv-layers.
no code implementations • ICCV 2019 • Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang
This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.
no code implementations • 18 May 2018 • Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu
Given feature maps of a certain conv-layer of the CNN, the explainer performs like an auto-encoder, which first disentangles the feature maps into object-part features and then inverts object-part features back to features of higher conv-layers of the CNN.
no code implementations • 26 Apr 2018 • Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu
This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.
1 code implementation • 2 Feb 2018 • Quanshi Zhang, Song-Chun Zhu
This paper reviews recent studies in understanding neural-network representations and learning neural networks with interpretable/disentangled middle-layer representations.
no code implementations • CVPR 2019 • Quanshi Zhang, Yu Yang, Haotian Ma, Ying Nian Wu
We propose to learn a decision tree, which clarifies the specific reason for each prediction made by the CNN at the semantic level.
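As a loose illustration of the surrogate-tree idea (a generic sketch, not the paper's method; the features and labels below are synthetic placeholders), one can fit a shallow tree that mimics a CNN's predictions from part-level features and read off the decision rules:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))    # stand-in for object-part activations
cnn_pred = (features[:, 0] + features[:, 3] > 0).astype(int)  # stand-in CNN labels

tree = DecisionTreeClassifier(max_depth=3).fit(features, cnn_pred)
print(export_text(tree, feature_names=[f"part_{i}" for i in range(16)]))
```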
no code implementations • 29 Oct 2017 • Quanshi Zhang, Wenguan Wang, Song-Chun Zhu
We aim to discover representation flaws caused by potential dataset bias.
2 code implementations • CVPR 2018 • Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
Instead, the interpretable CNN automatically assigns an object part to each filter in a high conv-layer during the learning process.
Ranked #1 on single category classification on ILSVRC Part.
no code implementations • 13 Aug 2017 • Quanshi Zhang, Ying Nian Wu, Hao Zhang, Song-Chun Zhu
The loss is defined for nodes in all layers of the AOG, including the generative loss (measuring the likelihood of the images) and the discriminative loss (measuring the fitness to human answers).
no code implementations • 13 Aug 2017 • Quanshi Zhang, Xuan Song, Ryosuke Shibasaki
In this study, we formulate the concept of "mining maximal-size frequent subgraphs" in the challenging domain of visual data (images and videos).
no code implementations • 5 Aug 2017 • Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu
Considering that each filter in a conv-layer of a pre-trained CNN usually represents a mixture of object parts, we propose a simple yet efficient method to automatically disentangle different part patterns from each filter and construct an explanatory graph.
no code implementations • 5 Aug 2017 • Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Redmonds, Ying Nian Wu, Song-Chun Zhu
In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals.
no code implementations • CVPR 2017 • Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
We use an active human-computer communication to incrementally grow such an AOG on the pre-trained CNN as follows.
no code implementations • 14 Nov 2016 • Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding.
no code implementations • ICCV 2015 • Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
This paper reformulates the theory of graph mining on the technical basis of graph matching, and extends its scope of applications to computer vision.
no code implementations • CVPR 2014 • Quanshi Zhang, Xuan Song, Xiaowei Shao, Huijing Zhao, Ryosuke Shibasaki
3D reconstruction from a single image is a classical problem in computer vision.
no code implementations • CVPR 2014 • Quanshi Zhang, Xuan Song, Xiaowei Shao, Huijing Zhao, Ryosuke Shibasaki
Graph matching and graph mining are two typical areas in artificial intelligence.
no code implementations • CVPR 2013 • Quanshi Zhang, Xuan Song, Xiaowei Shao, Ryosuke Shibasaki, Huijing Zhao
This paper designs a graphical model that uses object edges to represent object structures, and aims to incrementally learn this category model from one labeled object and a number of casually captured scenes.