Search Results for author: Quanshi Zhang

Found 65 papers, 12 papers with code

Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs

no code implementations 4 May 2022 Jie Ren, Mingjie Li, Meng Zhou, Shih-Han Chan, Quanshi Zhang

Based on the proposed metrics, we analyze two typical phenomena in how the transformation complexity changes during training, and explore the ceiling of a DNN's complexity.

Adversarial Robustness, Disentanglement

Trap of Feature Diversity in the Learning of MLPs

no code implementations 2 Dec 2021 Dongrui Liu, Shaobo Wang, Jie Ren, Kangrui Wang, Sheng Yin, Quanshi Zhang

We explain such a two-phase phenomenon in terms of the learning dynamics of the MLP.

Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness

1 code implementation NeurIPS 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness
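
As a rough illustration of the multi-order interaction view (not the paper's implementation; the function names and the Monte-Carlo estimator below are illustrative), the order-m interaction between variables i and j averages the interaction effect Δf(i, j, S) = f(S∪{i, j}) − f(S∪{i}) − f(S∪{j}) + f(S) over contexts S of m other variables, with absent variables set to a baseline:

```python
import random

def multi_order_interaction(f, x, baseline, i, j, order, n_samples=100):
    """Monte-Carlo estimate of the order-m interaction between input
    variables i and j of model f. Variables absent from a coalition are
    set to their baseline values."""
    n = len(x)
    others = [k for k in range(n) if k not in (i, j)]

    def f_on(S):
        # evaluate f with only the variables in S present
        return f([x[k] if k in S else baseline[k] for k in range(n)])

    total = 0.0
    for _ in range(n_samples):
        S = set(random.sample(others, order))  # random context of size m
        total += f_on(S | {i, j}) - f_on(S | {i}) - f_on(S | {j}) + f_on(S)
    return total / n_samples
```

For a purely additive model the estimate is zero at every order, while a multiplicative term such as x_i * x_j produces a nonzero interaction, which is the intuition behind the taxonomy of attacks and defenses described above.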

Towards Axiomatic, Hierarchical, and Symbolic Explanation for Deep Models

no code implementations 11 Nov 2021 Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang

This paper proposes a hierarchical and symbolic And-Or graph (AOG) to objectively explain the internal logic encoded by a well-trained deep model for inference.

Discovering and Explaining the Representation Bottleneck of DNNs

no code implementations ICLR 2022 Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang

This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs.

A Unified Game-Theoretic Interpretation of Adversarial Robustness

1 code implementation 5 Nov 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Visualizing the Emergence of Intermediate Visual Patterns in DNNs

no code implementations NeurIPS 2021 Mingjie Li, Shaobo Wang, Quanshi Zhang

This paper proposes a method to visualize the discrimination power of intermediate-layer visual patterns encoded by a DNN.

Knowledge Distillation

Interpreting Representation Quality of DNNs for 3D Point Cloud Processing

no code implementations NeurIPS 2021 Wen Shen, Qihan Ren, Dongrui Liu, Quanshi Zhang

In this paper, we evaluate the quality of knowledge representations encoded in deep neural networks (DNNs) for 3D point cloud processing.

Translation

A HYPOTHESIS FOR THE COGNITIVE DIFFICULTY OF IMAGES

no code implementations 29 Sep 2021 Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Xin Jin, Quanshi Zhang

This paper proposes a hypothesis to analyze the underlying reason for the cognitive difficulty of an image from two perspectives, i.e., a cognitive image usually makes a DNN strongly activated by cognitive concepts, and discarding massive non-cognitive concepts may also help the DNN focus on cognitive concepts.

Dissecting Local Properties of Adversarial Examples

no code implementations 29 Sep 2021 Lu Chen, Renjie Chen, Hang Guo, Yuan Luo, Quanshi Zhang, Yisen Wang

Adversarial examples have attracted significant attention over the years, yet a sufficient understanding of them is still lacking, especially when analyzing their behavior in combination with adversarial training.

Adversarial Robustness

Towards a Game-Theoretic View of Baseline Values in the Shapley Value

no code implementations 29 Sep 2021 Jie Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang

In the computation of Shapley values, people usually set an input variable to its baseline value to represent the absence of this variable.
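
The baseline convention described above can be made concrete with a small sketch. This is not the paper's method, just the classical Shapley value computed exactly over all coalitions, with variables absent from a coalition replaced by their baseline values; the function and variable names are illustrative.

```python
import itertools
import math

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f over len(x) input variables.
    A variable absent from a coalition is set to its baseline value."""
    n = len(x)

    def mask(S):
        # present variables keep their value, absent ones take the baseline
        return [x[k] if k in S else baseline[k] for k in range(n)]

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                S = set(S)
                # standard Shapley coalition weight |S|! (n-|S|-1)! / n!
                w = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
                phi[i] += w * (f(mask(S | {i})) - f(mask(S)))
    return phi
```

For a linear model f(x) = Σ w_i x_i this yields φ_i = w_i (x_i − b_i), which makes explicit how the attribution depends on the chosen baseline b, the issue the paper examines.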

Interpreting Attributions and Interactions of Adversarial Attacks

no code implementations ICCV 2021 Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, Quanshi Zhang

This paper aims to explain adversarial attacks in terms of how adversarial perturbations contribute to the attacking task.

A Hypothesis for the Aesthetic Appreciation in Neural Networks

no code implementations 31 Jul 2021 Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Quanshi Zhang

This paper proposes a hypothesis for the aesthetic appreciation that aesthetic images make a neural network strengthen salient concepts and discard inessential concepts.

Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI

no code implementations 16 Jul 2021 Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang

This workshop takes a special interest in theoretic foundations, limitations, and new application trends within the scope of XAI.

Interpretable Compositional Convolutional Neural Networks

1 code implementation 9 Jul 2021 Wen Shen, Zhihua Wei, Shikun Huang, BinBin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang

The reasonable definition of semantic interpretability presents the core challenge in explainable AI.

A Game-Theoretic Taxonomy of Visual Concepts in DNNs

no code implementations 21 Jun 2021 Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang

In this paper, we rethink how a DNN encodes visual concepts of different complexities from a new perspective, i.e., the game-theoretic multi-order interactions between pixels in an image.

Learning Baseline Values for Shapley Values

no code implementations 22 May 2021 Jie Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang

In this paper, we revisit the feature representation of a deep model from the perspective of game theory, and use the multi-variate interaction patterns of input variables to define the no-signal state of an input variable.

A Unified Game-Theoretic Interpretation of Adversarial Robustness

1 code implementation 12 Mar 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Towards Understanding and Improving Dropout in Game Theory

no code implementations ICLR 2021 Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang

Experimental results on various DNNs and datasets have shown that the interaction loss can effectively improve the utility of dropout and boost the performance of DNNs.

Towards A Unified Understanding and Improving of Adversarial Transferability

no code implementations ICLR 2021 Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.

Understanding, Analyzing, and Optimizing the Complexity of Deep Models

no code implementations 1 Jan 2021 Jie Ren, Mingjie Li, Meng Zhou, Shih-Han Chan, Zexu Liu, Quanshi Zhang

Based on the proposed metrics, we analyze two typical phenomena in how the transformation complexity changes during training, and explore the ceiling of a DNN's complexity.

Disentanglement

Technical Note: Game-Theoretic Interactions of Different Orders

no code implementations 28 Oct 2020 Hao Zhang, Xu Cheng, Yiting Chen, Quanshi Zhang

In this study, we define interaction components of different orders between two input variables based on game theory.

Interpreting Multivariate Shapley Interactions in DNNs

no code implementations 10 Oct 2020 Hao Zhang, Yichen Xie, Longjie Zheng, Die Zhang, Quanshi Zhang

In this paper, we define and quantify the significance of interactions among multiple input variables of the DNN.

A Unified Approach to Interpreting and Boosting Adversarial Transferability

1 code implementation 8 Oct 2020 Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.

Interpreting and Boosting Dropout from a Game-Theoretic View

no code implementations 24 Sep 2020 Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang

This paper aims to understand and improve the utility of the dropout operation from the perspective of game-theoretic interactions.

Achieving Adversarial Robustness via Sparsity

no code implementations 11 Sep 2020 Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang

Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but instead improves robustness, and that both weight inheritance from the lottery ticket and adversarial training improve model robustness in network pruning.

Adversarial Robustness, Network Pruning
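
The sparsity the abstract refers to comes from pruning. As a generic illustration (not the paper's adversarial pruning procedure; the function name is hypothetical), magnitude-based pruning keeps the largest-magnitude weights and zeros the rest:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    magnitudes, keeping the rest unchanged."""
    n_keep = len(weights) - int(len(weights) * sparsity)
    # indices of the n_keep largest-magnitude weights
    keep_idx = set(sorted(range(len(weights)), key=lambda k: -abs(weights[k]))[:n_keep])
    return [w if k in keep_idx else 0.0 for k, w in enumerate(weights)]
```

In lottery-ticket-style experiments, the surviving weights are then reset to (inherited from) their early-training values before retraining, which is the "weight inheritance" the abstract mentions.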

Interpreting and Disentangling Feature Components of Various Complexity from DNNs

1 code implementation 29 Jun 2020 Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang

This paper aims to define, quantify, and analyze the feature complexity that is learned by a DNN.

Knowledge Distillation

Building Interpretable Interaction Trees for Deep NLP Models

no code implementations 29 Jun 2020 Die Zhang, Huilin Zhou, Hao Zhang, Xiaoyi Bao, Da Huo, Ruizhao Chen, Xu Cheng, Mengyue Wu, Quanshi Zhang

This paper proposes a method to disentangle and quantify interactions among words that are encoded inside a DNN for natural language processing.

Rotation-Equivariant Neural Networks for Privacy Protection

no code implementations 21 Jun 2020 Hao Zhang, Yiting Chen, Haotian Ma, Xu Cheng, Qihan Ren, Liyao Xiang, Jie Shi, Quanshi Zhang

Compared to the traditional neural network, the RENN uses d-ary vectors/tensors as features, in which each element is a d-ary number.

Deep Quaternion Features for Privacy Protection

no code implementations 18 Mar 2020 Hao Zhang, Yi-Ting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang

We propose a method to revise the neural network to construct the quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.

Learning to Prevent Leakage: Privacy-Preserving Inference in the Mobile Cloud

no code implementations 18 Dec 2019 Shuang Zhang, Liyao Xiang, CongCong Li, YiXuan Wang, Quanshi Zhang, Wei Wang, Bo Li

Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market.

Neural Architecture Search, Privacy Preserving Deep Learning

Towards a Unified Evaluation of Explanation Methods without Ground Truth

no code implementations 20 Nov 2019 Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang

This paper proposes a set of criteria to evaluate the objectiveness of explanation methods for neural networks; such evaluation is crucial for the development of explainable AI but also presents significant challenges.

Verifiability and Predictability: Interpreting Utilities of Network Architectures for Point Cloud Processing

1 code implementation CVPR 2021 Wen Shen, Zhihua Wei, Shikun Huang, BinBin Zhang, Panyue Chen, Ping Zhao, Quanshi Zhang

In this paper, we diagnose deep neural networks for 3D point cloud processing to explore utilities of different intermediate-layer network architectures.

Adversarial Robustness

3D-Rotation-Equivariant Quaternion Neural Networks

1 code implementation ECCV 2020 Wen Shen, BinBin Zhang, Shikun Huang, Zhihua Wei, Quanshi Zhang

This paper proposes a set of rules to revise various neural networks for 3D point cloud processing to rotation-equivariant quaternion neural networks (REQNNs).

Quantifying Layerwise Information Discarding of Neural Networks

no code implementations 10 Jun 2019 Haotian Ma, Yinqing Zhang, Fan Zhou, Quanshi Zhang

This paper presents a method to explain how input information is discarded through intermediate layers of a neural network during the forward propagation, in order to quantify and diagnose knowledge representations of pre-trained deep neural networks.

Interpretable Complex-Valued Neural Networks for Privacy Protection

1 code implementation ICLR 2020 Liyao Xiang, Haotian Ma, Hao Zhang, Yifan Zhang, Jie Ren, Quanshi Zhang

Previous studies have found that an adversary can often infer unintended input information from intermediate-layer features.

Proceedings of AAAI 2019 Workshop on Network Interpretability for Deep Learning

no code implementations 25 Jan 2019 Quanshi Zhang, Lixin Fan, Bolei Zhou

This is the proceedings of the AAAI 2019 Workshop on Network Interpretability for Deep Learning.

Network Transplanting (extended abstract)

no code implementations 21 Jan 2019 Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu

This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.

Unsupervised Learning of Neural Networks to Explain Neural Networks (extended abstract)

no code implementations 21 Jan 2019 Quanshi Zhang, Yu Yang, Ying Nian Wu

This paper presents an unsupervised method to learn a neural network, namely an explainer, to interpret a pre-trained convolutional neural network (CNN), i.e., the explainer uses interpretable visual concepts to explain features in middle conv-layers of a CNN.

Knowledge Distillation

Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks

no code implementations 8 Jan 2019 Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang

In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network.

Interpretable CNNs for Object Classification

no code implementations 8 Jan 2019 Quanshi Zhang, Xin Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu

This paper proposes a generic method to learn interpretable convolutional filters in a deep convolutional neural network (CNN) for object classification, where each interpretable filter encodes features of a specific object part.

Classification, General Classification

Explanatory Graphs for CNNs

no code implementations 18 Dec 2018 Quanshi Zhang, Xin Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu

This paper introduces a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside conv-layers of a pre-trained CNN.

Explaining Neural Networks Semantically and Quantitatively

no code implementations ICCV 2019 Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang

This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.

Unsupervised Learning of Neural Networks to Explain Neural Networks

no code implementations 18 May 2018 Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu

Given feature maps of a certain conv-layer of the CNN, the explainer performs like an auto-encoder, which first disentangles the feature maps into object-part features and then inverts object-part features back to features of higher conv-layers of the CNN.

Disentanglement

Network Transplanting

no code implementations 26 Apr 2018 Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu

This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.

Visual Interpretability for Deep Learning: a Survey

1 code implementation 2 Feb 2018 Quanshi Zhang, Song-Chun Zhu

This paper reviews recent studies in understanding neural-network representations and learning neural networks with interpretable/disentangled middle-layer representations.

Explainable artificial intelligence

Interpreting CNNs via Decision Trees

no code implementations CVPR 2019 Quanshi Zhang, Yu Yang, Haotian Ma, Ying Nian Wu

We propose to learn a decision tree, which clarifies the specific reason for each prediction made by the CNN at the semantic level.

Examining CNN Representations with respect to Dataset Bias

no code implementations 29 Oct 2017 Quanshi Zhang, Wenguan Wang, Song-Chun Zhu

We aim to discover representation flaws caused by potential dataset bias.

Interpretable Convolutional Neural Networks

2 code implementations CVPR 2018 Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu

Instead, the interpretable CNN automatically assigns an object part to each filter in a high conv-layer during the learning process.

Single Category Classification

Mining Deep And-Or Object Structures via Cost-Sensitive Question-Answer-Based Active Annotations

no code implementations 13 Aug 2017 Quanshi Zhang, Ying Nian Wu, Hao Zhang, Song-Chun Zhu

The loss is defined for nodes in all layers of the AOG, including the generative loss (measuring the likelihood of the images) and the discriminative loss (measuring the fit to human answers).

Question Answering

Visual Graph Mining

no code implementations 13 Aug 2017 Quanshi Zhang, Xuan Song, Ryosuke Shibasaki

In this study, we formulate the concept of "mining maximal-size frequent subgraphs" in the challenging domain of visual data (images and videos).

Graph Mining

Interpreting CNN Knowledge via an Explanatory Graph

no code implementations 5 Aug 2017 Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu

Considering that each filter in a conv-layer of a pre-trained CNN usually represents a mixture of object parts, we propose a simple yet efficient method to automatically disentangle different part patterns from each filter and construct an explanatory graph.

Interactively Transferring CNN Patterns for Part Localization

no code implementations 5 Aug 2017 Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Redmonds, Ying Nian Wu, Song-Chun Zhu

In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals.

Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning

no code implementations 14 Nov 2016 Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu

This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding.

Mining And-Or Graphs for Graph Matching and Object Discovery

no code implementations ICCV 2015 Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu

This paper reformulates the theory of graph mining on the technical basis of graph matching, and extends its scope of applications to computer vision.

Graph Matching, Graph Mining +1

Category Modeling from Just a Single Labeling: Use Depth Information to Guide the Learning of 2D Models

no code implementations CVPR 2013 Quanshi Zhang, Xuan Song, Xiaowei Shao, Ryosuke Shibasaki, Huijing Zhao

We design a graphical model that uses object edges to represent object structures, and this paper aims to incrementally learn this category model from one labeled object and a number of casually captured scenes.

Object Detection
