Search Results for author: Quanshi Zhang

Found 88 papers, 24 papers with code

Disentangling Regional Primitives for Image Generation

no code implementations 6 Oct 2024 Zhengting Chen, Lei Cheng, Lianghui Ding, Quanshi Zhang

We find that a feature component encoded by the neural network can be represented as an OR relationship between the demands for generating different image regions.

Image Generation

Layerwise Change of Knowledge in Neural Networks

no code implementations 13 Sep 2024 Xu Cheng, Lei Cheng, Zhaoran Peng, Yang Xu, Tian Han, Quanshi Zhang

This paper aims to explain how a deep neural network (DNN) gradually extracts new knowledge and forgets noisy features through layers in forward propagation.

Towards the Dynamics of a DNN Learning Symbolic Interactions

no code implementations 27 Jul 2024 Qihan Ren, Junpeng Zhang, Yang Xu, Yue Xin, Dongrui Liu, Quanshi Zhang

This study proves the two-phase dynamics of a deep neural network (DNN) learning interactions.

Quantifying In-Context Reasoning Effects and Memorization Effects in LLMs

no code implementations 20 May 2024 Siyu Lou, Yuntian Chen, Xiaodan Liang, Liang Lin, Quanshi Zhang

In this study, we propose an axiomatic system to define and quantify the precise memorization and in-context reasoning effects used by the large language model (LLM) for language generation.

Disentanglement Language Modelling +3

Identifying Semantic Induction Heads to Understand In-Context Learning

no code implementations 20 Feb 2024 Jie Ren, Qipeng Guo, Hang Yan, Dongrui Liu, Quanshi Zhang, Xipeng Qiu, Dahua Lin

Although large language models (LLMs) have demonstrated remarkable performance, the lack of transparency in their inference logic raises concerns about their trustworthiness.

In-Context Learning Knowledge Graphs

Defining and Extracting Generalizable Interaction Primitives from DNNs

1 code implementation 29 Jan 2024 Lu Chen, Siyu Lou, Benhao Huang, Quanshi Zhang

Faithfully summarizing the knowledge encoded by a deep neural network (DNN) into a few symbolic primitive patterns without losing much information represents a core challenge in explainable AI.

Explaining How a Neural Network Play the Go Game and Let People Learn

no code implementations 15 Oct 2023 Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, Quanshi Zhang

The AI model has surpassed human players in the game of Go, and it is widely believed that the AI model has encoded new knowledge about the Go game beyond that of human players.

Game of Go

Towards Attributions of Input Variables in a Coalition

no code implementations 23 Sep 2023 Xinhao Zheng, Huiqi Deng, Bo Fan, Quanshi Zhang

This paper aims to develop a new attribution method to explain the conflict between individual variables' attributions and their coalition's attribution from an entirely new perspective.

Where We Have Arrived in Proving the Emergence of Sparse Symbolic Concepts in AI Models

1 code implementation 3 May 2023 Qihan Ren, Jiayang Gao, Wen Shen, Quanshi Zhang

These conditions are quite common, and we prove that under these conditions, the DNN will only encode a relatively small number of sparse interactions between input variables.

Technical Note: Defining and Quantifying AND-OR Interactions for Faithful and Concise Explanation of DNNs

1 code implementation 26 Apr 2023 Mingjie Li, Quanshi Zhang

For faithfulness, we prove the uniqueness of the AND (OR) interaction in quantifying the effect of the AND (OR) relationship between input variables.
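For context, the AND interaction used in this line of work is the Harsanyi dividend of the network output; a sketch of the standard form follows (the note's exact axioms and its OR counterpart are not reproduced here):

```latex
% AND interaction of a coalition S of input variables, where v(T) is the
% network output on the sample with only the variables in T left unmasked:
I_{\mathrm{and}}(S) = \sum_{T \subseteq S} (-1)^{|S|-|T|}\, v(T)
```

The OR interaction is defined dually, on complementarily masked states.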

HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation

1 code implementation 4 Apr 2023 Lu Chen, Siyu Lou, Keyan Zhang, Jin Huang, Quanshi Zhang

The HarsanyiNet is designed on the theoretical foundation that the Shapley value can be reformulated as the redistribution of Harsanyi interactions encoded by the network.
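As a concrete illustration of that theoretical foundation, here is a minimal sketch (not the HarsanyiNet code) verifying on a toy set function that the Shapley value equals an even redistribution of Harsanyi interactions, i.e., phi_i is the sum of I(S)/|S| over all coalitions S containing i:

```python
# Minimal sketch (not the authors' code): verify that the Shapley value of
# player i equals the sum of Harsanyi dividends I(S) over coalitions S
# containing i, each split evenly among the coalition's members.
from itertools import combinations
from math import factorial

n = 4
players = tuple(range(n))

def v(S):
    """Toy value function; any set function with v(empty set) = 0 works."""
    S = frozenset(S)
    return len(S) ** 2 + (10 if {0, 1} <= S else 0)

def subsets(pool):
    for r in range(len(pool) + 1):
        yield from combinations(pool, r)

# Harsanyi dividends via the Moebius transform:
# I(S) = sum over T subseteq S of (-1)^(|S|-|T|) * v(T)
dividend = {S: sum((-1) ** (len(S) - len(T)) * v(T) for T in subsets(S))
            for S in subsets(players)}

for i in players:
    phi_harsanyi = sum(I / len(S) for S, I in dividend.items() if i in S)
    others = [j for j in players if j != i]   # classic Shapley formula for comparison
    phi_direct = sum(factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                     * (v(set(S) | {i}) - v(S)) for S in subsets(others))
    assert abs(phi_harsanyi - phi_direct) < 1e-9
    print(f"phi_{i} = {phi_harsanyi:.3f}")
```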

Can the Inference Logic of Large Language Models be Disentangled into Symbolic Concepts?

no code implementations 3 Apr 2023 Wen Shen, Lei Cheng, Yuxiao Yang, Mingjie Li, Quanshi Zhang

In this paper, we explain the inference logic of large language models (LLMs) as a set of symbolic concepts.

Sentence

Understanding and Unifying Fourteen Attribution Methods with Taylor Interactions

no code implementations 2 Mar 2023 Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Ziwei Yang, Zheyang Li, Quanshi Zhang

Various attribution methods have been developed to explain deep neural networks (DNNs) by inferring the attribution/importance/contribution score of each input variable to the final output.

Explaining Generalization Power of a DNN Using Interactive Concepts

no code implementations 25 Feb 2023 Huilin Zhou, Hao Zhang, Huiqi Deng, Dongrui Liu, Wen Shen, Shih-Han Chan, Quanshi Zhang

Although there is no universally accepted definition of the concepts encoded by a DNN, the sparsity of interactions in a DNN has been proved, i.e., the output score of a DNN can be well explained by a small number of interactions between input variables.

Does a Neural Network Really Encode Symbolic Concepts?

1 code implementation 25 Feb 2023 Mingjie Li, Quanshi Zhang

Recently, a series of studies have tried to extract interactions between input variables modeled by a DNN and define such interactions as concepts encoded by the DNN.

Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts

1 code implementation 25 Feb 2023 Qihan Ren, Huiqi Deng, Yunuo Chen, Siyu Lou, Quanshi Zhang

In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts are less likely to be encoded by the BNN.

Defects of Convolutional Decoder Networks in Frequency Representation

no code implementations 17 Oct 2022 Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, Quanshi Zhang

In this paper, we prove the representation defects of a cascaded convolutional decoder network, considering the capacity of representing different frequency components of an input sample.

Decoder

Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification

no code implementations 18 Aug 2022 Quanshi Zhang, Xu Cheng, Yilan Chen, Zhefan Rao

This paper provides a new perspective to explain the success of knowledge distillation, i.e., quantifying knowledge points encoded in intermediate layers of a DNN for classification, based on information theory.

3D Point Cloud Classification Classification +6

Proving Common Mechanisms Shared by Twelve Methods of Boosting Adversarial Transferability

no code implementations 24 Jul 2022 Quanshi Zhang, Xin Wang, Jie Ren, Xu Cheng, Shuyun Lin, Yisen Wang, Xiangming Zhu

This paper summarizes the common mechanism shared by twelve previous transferability-boosting methods in a unified view, i.e., these methods all reduce game-theoretic interactions between regional adversarial perturbations.

Batch Normalization Is Blind to the First and Second Derivatives of the Loss

no code implementations 30 May 2022 Zhanpeng Zhou, Wen Shen, Huixin Chen, Ling Tang, Quanshi Zhang

In this paper, we prove the effects of the BN operation on the back-propagation of the first and second derivatives of the loss.

Why Adversarial Training of ReLU Networks Is Difficult?

no code implementations 30 May 2022 Xu Cheng, Hao Zhang, Yue Xin, Wen Shen, Jie Ren, Quanshi Zhang

We also prove that adversarial training tends to strengthen the influence of unconfident input samples with large gradient norms in an exponential manner.

RASAT: Integrating Relational Structures into Pretrained Seq2Seq Model for Text-to-SQL

1 code implementation 14 May 2022 Jiexing Qi, Jingyao Tang, Ziwei He, Xiangpeng Wan, Yu Cheng, Chenghu Zhou, Xinbing Wang, Quanshi Zhang, Zhouhan Lin

Our model can incorporate almost all types of existing relations in the literature, and in addition, we propose introducing co-reference relations for the multi-turn scenario.

Dialogue State Tracking Text-To-SQL
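For readers unfamiliar with the underlying mechanism, below is a minimal numpy sketch of the relation-aware self-attention that RASAT-style models build on (following Shaw et al., 2018); all names and shapes are illustrative, not the RASAT implementation:

```python
# Sketch of relation-aware self-attention: a learned embedding of the
# relation between tokens i and j is added to the key (and value) of the
# pair before computing attention. Illustrative only.
import numpy as np

def relation_aware_attention(x, rel_ids, Wq, Wk, Wv, rel_k, rel_v):
    """x: (T, d) token features; rel_ids: (T, T) relation id per token pair;
    rel_k, rel_v: (num_relations, d) relation embeddings."""
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    rk, rv = rel_k[rel_ids], rel_v[rel_ids]                  # (T, T, d)
    logits = (q[:, None, :] * (k[None, :, :] + rk)).sum(-1) / np.sqrt(d)
    attn = np.exp(logits - logits.max(-1, keepdims=True))    # row-wise softmax
    attn /= attn.sum(-1, keepdims=True)
    return attn @ v + (attn[:, :, None] * rv).sum(1)         # (T, d)

rng = np.random.default_rng(0)
T, d, R = 5, 8, 3
out = relation_aware_attention(
    rng.normal(size=(T, d)), rng.integers(0, R, size=(T, T)),
    *(rng.normal(size=(d, d)) for _ in range(3)),
    rng.normal(size=(R, d)), rng.normal(size=(R, d)))
print(out.shape)  # (5, 8)
```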

Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs

1 code implementation 4 May 2022 Jie Ren, Mingjie Li, Meng Zhou, Shih-Han Chan, Quanshi Zhang

Based on the proposed metrics, we analyze two typical phenomena of the change of the transformation complexity during the training process, and explore the ceiling of a DNN's complexity.

Adversarial Robustness Disentanglement

Trap of Feature Diversity in the Learning of MLPs

no code implementations 2 Dec 2021 Dongrui Liu, Shaobo Wang, Jie Ren, Kangrui Wang, Sheng Yin, Huiqi Deng, Quanshi Zhang

In this paper, we focus on a typical two-phase phenomenon in the learning of multi-layer perceptrons (MLPs), and we aim to explain the reason for the decrease of feature diversity in the first phase.

Diversity

Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness

1 code implementation NeurIPS 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness
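The multi-order interaction at the heart of this view is defined in the authors' related papers roughly as follows (sketch; N is the set of input variables and v the network output, treated as a set function over the unmasked variables):

```latex
\Delta v(i,j,S) = v(S\cup\{i,j\}) - v(S\cup\{i\}) - v(S\cup\{j\}) + v(S),
\qquad
I^{(m)}(i,j) = \mathbb{E}_{S\subseteq N\setminus\{i,j\},\,|S|=m}\big[\Delta v(i,j,S)\big]
```

Roughly, low-order interactions capture cooperations between i and j under small contexts, while high-order interactions capture cooperations that rely on most of the input being present.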

Discovering and Explaining the Representation Bottleneck of DNNs

2 code implementations ICLR 2022 Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang

This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs.

Visualizing the Emergence of Intermediate Visual Patterns in DNNs

no code implementations NeurIPS 2021 Mingjie Li, Shaobo Wang, Quanshi Zhang

This paper proposes a method to visualize the discrimination power of intermediate-layer visual patterns encoded by a DNN.

Knowledge Distillation

Interpreting Representation Quality of DNNs for 3D Point Cloud Processing

no code implementations NeurIPS 2021 Wen Shen, Qihan Ren, Dongrui Liu, Quanshi Zhang

In this paper, we evaluate the quality of knowledge representations encoded in deep neural networks (DNNs) for 3D point cloud processing.

Translation

A Unified Game-Theoretic Interpretation of Adversarial Robustness

1 code implementation 5 Nov 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

A Hypothesis for the Cognitive Difficulty of Images

no code implementations 29 Sep 2021 Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Xin Jin, Quanshi Zhang

This paper proposes a hypothesis to analyze the underlying reason for the cognitive difficulty of an image from two perspectives, i.e., a cognitive image usually makes a DNN strongly activated by cognitive concepts; discarding massive non-cognitive concepts may also help the DNN focus on cognitive concepts.

Dissecting Local Properties of Adversarial Examples

no code implementations 29 Sep 2021 Lu Chen, Renjie Chen, Hang Guo, Yuan Luo, Quanshi Zhang, Yisen Wang

Adversarial examples have attracted significant attention over the years, yet a sufficient understanding is still lacking, especially when analyzing their performance in combination with adversarial training.

Adversarial Robustness

Towards a Game-Theoretic View of Baseline Values in the Shapley Value

no code implementations 29 Sep 2021 Jie Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang

In the computation of Shapley values, people usually set an input variable to its baseline value to represent the absence of this variable.
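To make the role of baseline values concrete, here is a generic permutation-sampling Shapley estimator (common practice, not the paper's proposed method), in which "absent" variables are set to their baseline values:

```python
# Generic permutation-sampling Shapley estimator, sketched to show where
# baseline values enter: "absent" variables are held at their baselines.
import numpy as np

def shapley_estimate(f, x, baseline, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    phi = np.zeros(n)
    for _ in range(n_samples):
        order = rng.permutation(n)
        z = baseline.copy()            # start with every variable "absent"
        prev = f(z)
        for i in order:
            z[i] = x[i]                # variable i joins the coalition
            cur = f(z)
            phi[i] += cur - prev       # marginal contribution of i
            prev = cur
    return phi / n_samples

f = lambda z: z[0] * z[1] + 2.0 * z[2]             # toy model
print(shapley_estimate(f, np.array([1.0, 2.0, 3.0]), baseline=np.zeros(3)))
# converges to [1.0, 1.0, 6.0] for this toy model at baseline 0
```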

Interpreting Attributions and Interactions of Adversarial Attacks

no code implementations ICCV 2021 Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, Quanshi Zhang

This paper aims to explain adversarial attacks in terms of how adversarial perturbations contribute to the attacking task.

A Hypothesis for the Aesthetic Appreciation in Neural Networks

no code implementations 31 Jul 2021 Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Quanshi Zhang

This paper proposes a hypothesis for the aesthetic appreciation that aesthetic images make a neural network strengthen salient concepts and discard inessential concepts.

Interpretable Compositional Convolutional Neural Networks

1 code implementation 9 Jul 2021 Wen Shen, Zhihua Wei, Shikun Huang, BinBin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang

Reasonably defining semantic interpretability presents a core challenge in explainable AI.

A Game-Theoretic Taxonomy of Visual Concepts in DNNs

no code implementations 21 Jun 2021 Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang

In this paper, we rethink how a DNN encodes visual concepts of different complexities from a new perspective, i.e., the game-theoretic multi-order interactions between pixels in an image.

Can We Faithfully Represent Masked States to Compute Shapley Values on a DNN?

1 code implementation 22 May 2021 Jie Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang

Masking some input variables of a deep neural network (DNN) and computing output changes on the masked input sample represent a typical way to compute attributions of input variables in the sample.
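As a reference point, a minimal sketch of the masking practice the paper examines: occlusion-style attribution that sets one variable at a time to a mask value and records the output change (the paper's actual contribution, faithfully representing masked states, is not reproduced here):

```python
# Occlusion-style attribution sketch: "remove" one variable at a time by
# setting it to a mask value and record the drop in the model output.
import numpy as np

def occlusion_attribution(f, x, mask_value=0.0):
    base_out = f(x)
    attr = np.empty_like(x)
    for i in range(x.shape[0]):
        z = x.copy()
        z[i] = mask_value              # masked state for variable i
        attr[i] = base_out - f(z)      # output change caused by masking i
    return attr

f = lambda z: np.tanh(z).sum()         # toy model
print(occlusion_attribution(f, np.array([0.5, -1.0, 2.0])))
```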

A Unified Game-Theoretic Interpretation of Adversarial Robustness

1 code implementation 12 Mar 2021 Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.

Adversarial Robustness

Towards A Unified Understanding and Improving of Adversarial Transferability

no code implementations ICLR 2021 Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.

Towards Understanding and Improving Dropout in Game Theory

no code implementations ICLR 2021 Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang

Experimental results on various DNNs and datasets have shown that the interaction loss can effectively improve the utility of dropout and boost the performance of DNNs.

Understanding, Analyzing, and Optimizing the Complexity of Deep Models

no code implementations 1 Jan 2021 Jie Ren, Mingjie Li, Meng Zhou, Shih-Han Chan, Zexu Liu, Quanshi Zhang

Based on the proposed metrics, we analyze two typical phenomena of the change of the transformation complexity during the training process, and explore the ceiling of a DNN’s complexity.

Disentanglement

Technical Note: Game-Theoretic Interactions of Different Orders

no code implementations 28 Oct 2020 Hao Zhang, Xu Cheng, Yiting Chen, Quanshi Zhang

In this study, we define interaction components of different orders between two input variables based on game theory.

Interpreting Multivariate Shapley Interactions in DNNs

no code implementations 10 Oct 2020 Hao Zhang, Yichen Xie, Longjie Zheng, Die Zhang, Quanshi Zhang

In this paper, we define and quantify the significance of interactions among multiple input variables of the DNN.

A Unified Approach to Interpreting and Boosting Adversarial Transferability

1 code implementation 8 Oct 2020 Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang

We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.

Interpreting and Boosting Dropout from a Game-Theoretic View

no code implementations 24 Sep 2020 Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang

This paper aims to understand and improve the utility of the dropout operation from the perspective of game-theoretic interactions.

Achieving Adversarial Robustness via Sparsity

no code implementations 11 Sep 2020 Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang

Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but rather improves robustness, where both weight inheritance from the lottery ticket and adversarial training improve model robustness in network pruning.

Adversarial Robustness Network Pruning
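A compact sketch of the kind of pipeline studied here, combining magnitude pruning with PGD adversarial training; the architecture, sparsity level, and hyperparameters are illustrative, not the paper's experimental setup:

```python
# Sketch: magnitude pruning + PGD adversarial training (illustrative only).
import torch, torch.nn as nn, torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=7):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

def magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights; return masks to re-apply."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                    # weight matrices only
            k = int(sparsity * p.numel())
            thresh = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > thresh).float()
            p.data *= masks[name]
    return masks

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
masks = magnitude_prune(model)

x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)  # train on adversarial examples
opt.zero_grad(); loss.backward(); opt.step()
for name, p in model.named_parameters():                   # keep pruned weights at zero
    if name in masks:
        p.data *= masks[name]
print(f"adversarial loss: {loss.item():.3f}")
```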

Building Interpretable Interaction Trees for Deep NLP Models

no code implementations 29 Jun 2020 Die Zhang, Huilin Zhou, Hao Zhang, Xiaoyi Bao, Da Huo, Ruizhao Chen, Xu Cheng, Mengyue Wu, Quanshi Zhang

This paper proposes a method to disentangle and quantify interactions among words that are encoded inside a DNN for natural language processing.

Sentence

Interpreting and Disentangling Feature Components of Various Complexity from DNNs

1 code implementation 29 Jun 2020 Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang

This paper aims to define, quantify, and analyze the feature complexity that is learned by a DNN.

Knowledge Distillation

Rotation-Equivariant Neural Networks for Privacy Protection

no code implementations 21 Jun 2020 Hao Zhang, Yiting Chen, Haotian Ma, Xu Cheng, Qihan Ren, Liyao Xiang, Jie Shi, Quanshi Zhang

Compared to the traditional neural network, the RENN uses d-ary vectors/tensors as features, in which each element is a d-ary number.

Attribute

Deep Quaternion Features for Privacy Protection

no code implementations 18 Mar 2020 Hao Zhang, Yi-Ting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang

We propose a method to revise a neural network into a quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.

Privacy Preserving
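As background for the construction, a minimal quaternion sketch of the idea: encode a feature as a pure quaternion and rotate it by a secret unit quaternion acting as a key, so the original feature is recoverable only with the key. This is illustrative numpy, not the paper's network revision:

```python
# Sketch: hide a feature by quaternion rotation with a secret unit
# quaternion "key"; only the key holder can undo the rotation.
import numpy as np

def qmul(a, b):                         # Hamilton product of quaternions
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

rng = np.random.default_rng(0)
key = rng.normal(size=4); key /= np.linalg.norm(key)   # secret unit quaternion
conj = key * np.array([1, -1, -1, -1])                 # its inverse

feat = np.array([0.0, *rng.normal(size=3)])            # pure-quaternion feature
hidden = qmul(qmul(key, feat), conj)                   # what leaves the device
recovered = qmul(qmul(conj, hidden), key)              # key holder undoes it
print(np.allclose(recovered, feat))                    # True
```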

Learning to Prevent Leakage: Privacy-Preserving Inference in the Mobile Cloud

no code implementations 18 Dec 2019 Shuang Zhang, Liyao Xiang, CongCong Li, YiXuan Wang, Quanshi Zhang, Wei Wang, Bo Li

Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market.

Neural Architecture Search Privacy Preserving +1

Towards a Unified Evaluation of Explanation Methods without Ground Truth

no code implementations 20 Nov 2019 Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang

This paper proposes a set of criteria to evaluate the objectiveness of explanation methods for neural networks, which is crucial for the development of explainable AI but also presents significant challenges.

Verifiability and Predictability: Interpreting Utilities of Network Architectures for Point Cloud Processing

1 code implementation CVPR 2021 Wen Shen, Zhihua Wei, Shikun Huang, BinBin Zhang, Panyue Chen, Ping Zhao, Quanshi Zhang

In this paper, we diagnose deep neural networks for 3D point cloud processing to explore utilities of different intermediate-layer network architectures.

Adversarial Robustness

3D-Rotation-Equivariant Quaternion Neural Networks

1 code implementation ECCV 2020 Wen Shen, BinBin Zhang, Shikun Huang, Zhihua Wei, Quanshi Zhang

This paper proposes a set of rules to revise various neural networks for 3D point cloud processing to rotation-equivariant quaternion neural networks (REQNNs).

Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding

1 code implementation 10 Jun 2019 Haotian Ma, Hao Zhang, Fan Zhou, Yinqing Zhang, Quanshi Zhang

We define two types of entropy-based metrics, i.e., (1) the discarding of pixel-wise information used in the forward propagation, and (2) the uncertainty of the input reconstruction, to measure input information contained by a specific layer from two perspectives.

Fairness

Interpretable Complex-Valued Neural Networks for Privacy Protection

1 code implementation ICLR 2020 Liyao Xiang, Haotian Ma, Hao Zhang, Yifan Zhang, Jie Ren, Quanshi Zhang

Previous studies have found that an adversary attacker can often infer unintended input information from intermediate-layer features.

Proceedings of AAAI 2019 Workshop on Network Interpretability for Deep Learning

no code implementations 25 Jan 2019 Quanshi Zhang, Lixin Fan, Bolei Zhou

This is the Proceedings of the AAAI 2019 Workshop on Network Interpretability for Deep Learning.

Deep Learning

Network Transplanting (extended abstract)

no code implementations 21 Jan 2019 Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu

This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.

Unsupervised Learning of Neural Networks to Explain Neural Networks (extended abstract)

no code implementations 21 Jan 2019 Quanshi Zhang, Yu Yang, Ying Nian Wu

This paper presents an unsupervised method to learn a neural network, namely an explainer, to interpret a pre-trained convolutional neural network (CNN), i.e., the explainer uses interpretable visual concepts to explain features in middle conv-layers of a CNN.

Knowledge Distillation Object

Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks

no code implementations 8 Jan 2019 Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang

In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network.

Interpretable CNNs for Object Classification

no code implementations 8 Jan 2019 Quanshi Zhang, Xin Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu

This paper proposes a generic method to learn interpretable convolutional filters in a deep convolutional neural network (CNN) for object classification, where each interpretable filter encodes features of a specific object part.

Classification General Classification +1

Explanatory Graphs for CNNs

no code implementations 18 Dec 2018 Quanshi Zhang, Xin Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu

This paper introduces a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside conv-layers of a pre-trained CNN.

Object

Explaining Neural Networks Semantically and Quantitatively

no code implementations ICCV 2019 Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang

This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.

Unsupervised Learning of Neural Networks to Explain Neural Networks

no code implementations 18 May 2018 Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu

Given feature maps of a certain conv-layer of the CNN, the explainer performs like an auto-encoder, which first disentangles the feature maps into object-part features and then inverts object-part features back to features of higher conv-layers of the CNN.

Disentanglement Object

Network Transplanting

no code implementations 26 Apr 2018 Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu

This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.

Visual Interpretability for Deep Learning: a Survey

1 code implementation 2 Feb 2018 Quanshi Zhang, Song-Chun Zhu

This paper reviews recent studies in understanding neural-network representations and learning neural networks with interpretable/disentangled middle-layer representations.

Deep Learning Explainable artificial intelligence +1

Interpreting CNNs via Decision Trees

no code implementations CVPR 2019 Quanshi Zhang, Yu Yang, Haotian Ma, Ying Nian Wu

We propose to learn a decision tree, which clarifies the specific reason for each prediction made by the CNN at the semantic level.

Object

Examining CNN Representations with respect to Dataset Bias

no code implementations 29 Oct 2017 Quanshi Zhang, Wenguan Wang, Song-Chun Zhu

We aim to discover representation flaws caused by potential dataset bias.

Attribute

Interpretable Convolutional Neural Networks

2 code implementations CVPR 2018 Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu

Instead, the interpretable CNN automatically assigns an object part to each filter in a high conv-layer during the learning process.

Object single category classification

Mining Deep And-Or Object Structures via Cost-Sensitive Question-Answer-Based Active Annotations

no code implementations 13 Aug 2017 Quanshi Zhang, Ying Nian Wu, Hao Zhang, Song-Chun Zhu

The loss is defined for nodes in all layers of the AOG, including the generative loss (measuring the likelihood of the images) and the discriminative loss (measuring the fitness to human answers).

Question Answering

Visual Graph Mining

no code implementations 13 Aug 2017 Quanshi Zhang, Xuan Song, Ryosuke Shibasaki

In this study, we formulate the concept of "mining maximal-size frequent subgraphs" in the challenging domain of visual data (images and videos).

Graph Mining

Interpreting CNN Knowledge via an Explanatory Graph

no code implementations 5 Aug 2017 Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu

Considering that each filter in a conv-layer of a pre-trained CNN usually represents a mixture of object parts, we propose a simple yet efficient method to automatically disentangle different part patterns from each filter and construct an explanatory graph.

Object

Interactively Transferring CNN Patterns for Part Localization

no code implementations 5 Aug 2017 Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Redmonds, Ying Nian Wu, Song-Chun Zhu

In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals.

Mining Object Parts from CNNs via Active Question-Answering

no code implementations CVPR 2017 Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu

We use active human-computer communication to incrementally grow such an AOG on the pre-trained CNN.

Active Learning Object +1

Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning

no code implementations 14 Nov 2016 Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu

This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding.

Mining And-Or Graphs for Graph Matching and Object Discovery

no code implementations ICCV 2015 Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu

This paper reformulates the theory of graph mining on the technical basis of graph matching, and extends its scope of applications to computer vision.

Graph Matching Graph Mining +1

Category Modeling from Just a Single Labeling: Use Depth Information to Guide the Learning of 2D Models

no code implementations CVPR 2013 Quanshi Zhang, Xuan Song, Xiaowei Shao, Ryosuke Shibasaki, Huijing Zhao

We design a graphical model that uses object edges to represent object structures, and this paper aims to incrementally learn this category model from one labeled object and a number of casually captured scenes.

Object object-detection +1
