no code implementations • 6 Oct 2024 • Zhengting Chen, Lei Cheng, Lianghui Ding, Quanshi Zhang
We find that the feature component can be represented as an OR relationship among the demands for generating different image regions, a relationship encoded by the neural network.
no code implementations • 13 Sep 2024 • Xu Cheng, Lei Cheng, Zhaoran Peng, Yang Xu, Tian Han, Quanshi Zhang
This paper aims to explain how a deep neural network (DNN) gradually extracts new knowledge and forgets noisy features through layers in forward propagation.
no code implementations • 27 Jul 2024 • Qihan Ren, Junpeng Zhang, Yang Xu, Yue Xin, Dongrui Liu, Quanshi Zhang
This study proves the two-phase dynamics of a deep neural network (DNN) learning interactions.
no code implementations • 20 May 2024 • Siyu Lou, Yuntian Chen, Xiaodan Liang, Liang Lin, Quanshi Zhang
In this study, we propose an axiomatic system to define and quantify the precise memorization and in-context reasoning effects used by the large language model (LLM) for language generation.
no code implementations • 16 May 2024 • Junpeng Zhang, Qing Li, Liang Lin, Quanshi Zhang
This paper investigates the dynamics of a deep neural network (DNN) learning interactions.
no code implementations • 20 Feb 2024 • Jie Ren, Qipeng Guo, Hang Yan, Dongrui Liu, Quanshi Zhang, Xipeng Qiu, Dahua Lin
Although large language models (LLMs) have demonstrated remarkable performance, the lack of transparency in their inference logic raises concerns about their trustworthiness.
1 code implementation • 29 Jan 2024 • Lu Chen, Siyu Lou, Benhao Huang, Quanshi Zhang
Faithfully summarizing the knowledge encoded by a deep neural network (DNN) into a few symbolic primitive patterns without losing much information represents a core challenge in explainable AI.
no code implementations • 15 Oct 2023 • Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, Quanshi Zhang
The AI model has surpassed human players in the game of Go, and it is widely believed that the AI model has encoded new knowledge about the Go game beyond that of human players.
no code implementations • 23 Sep 2023 • Xinhao Zheng, Huiqi Deng, Bo Fan, Quanshi Zhang
This paper aims to develop a new attribution method to explain the conflict between individual variables' attributions and their coalition's attribution from an entirely new perspective.
1 code implementation • 3 May 2023 • Qihan Ren, Jiayang Gao, Wen Shen, Quanshi Zhang
These conditions are quite common, and we prove that under these conditions, the DNN will only encode a relatively small number of sparse interactions between input variables.
1 code implementation • 26 Apr 2023 • Mingjie Li, Quanshi Zhang
For faithfulness, we prove the uniqueness of the AND (OR) interaction in quantifying the effect of the AND (OR) relationship between input variables.
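For reference, a minimal sketch of the definitions typically used in this line of work, assuming the model output on a masked sample is decomposed as $v(x_T)=v_{\text{and}}(x_T)+v_{\text{or}}(x_T)$ over a variable set $N$ (notation may differ from the paper):

$$I_{\text{and}}(S)=\sum_{T\subseteq S}(-1)^{|S|-|T|}\,v_{\text{and}}(x_T),\qquad I_{\text{or}}(S)=-\sum_{T\subseteq S}(-1)^{|S|-|T|}\,v_{\text{or}}(x_{N\setminus T})\quad(S\neq\emptyset),$$

so that $I_{\text{and}}(S)$ measures the numerical effect of the AND relationship among the variables in $S$, and $I_{\text{or}}(S)$ the effect of their OR relationship.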
1 code implementation • 4 Apr 2023 • Lu Chen, Siyu Lou, Keyan Zhang, Jin Huang, Quanshi Zhang
The HarsanyiNet is designed on the theoretical foundation that the Shapley value can be reformulated as the redistribution of Harsanyi interactions encoded by the network.
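This reformulation is the classical Harsanyi (1963) result: with $v(x_T)$ the network output on a sample masked outside $T\subseteq N$, the Harsanyi interaction (dividend) and its uniform redistribution give exactly the Shapley value $\phi(i)$:

$$I(S)=\sum_{T\subseteq S}(-1)^{|S|-|T|}\,v(x_T),\qquad \phi(i)=\sum_{S\subseteq N:\,i\in S}\frac{I(S)}{|S|}.$$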
no code implementations • 3 Apr 2023 • Wen Shen, Lei Cheng, Yuxiao Yang, Mingjie Li, Quanshi Zhang
In this paper, we explain the inference logic of large language models (LLMs) as a set of symbolic concepts.
no code implementations • 2 Mar 2023 • Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Ziwei Yang, Zheyang Li, Quanshi Zhang
Various attribution methods have been developed to explain deep neural networks (DNNs) by inferring the attribution/importance/contribution score of each input variable to the final output.
no code implementations • 25 Feb 2023 • Huilin Zhou, Hao Zhang, Huiqi Deng, Dongrui Liu, Wen Shen, Shih-Han Chan, Quanshi Zhang
Although there is no universally accepted definition of the concepts encoded by a DNN, the sparsity of interactions in a DNN has been proved, i.e., the output score of a DNN can be well explained by a small number of interactions between input variables.
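Concretely, the Harsanyi interactions exactly decompose the output, and the proven sparsity means that only a small salient set $\Omega$ of interactions carries non-negligible effect:

$$v(x_N)=\sum_{S\subseteq N}I(S)\;\approx\;\sum_{S\in\Omega}I(S),\qquad |\Omega|\ll 2^{|N|}.$$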
1 code implementation • 25 Feb 2023 • Mingjie Li, Quanshi Zhang
Recently, a series of studies have tried to extract interactions between input variables modeled by a DNN and define such interactions as concepts encoded by the DNN.
1 code implementation • 25 Feb 2023 • Qihan Ren, Huiqi Deng, Yunuo Chen, Siyu Lou, Quanshi Zhang
In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts are less likely to be encoded by the BNN.
no code implementations • 17 Oct 2022 • Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, Quanshi Zhang
In this paper, we prove the representation defects of a cascaded convolutional decoder network, considering the capacity of representing different frequency components of an input sample.
no code implementations • 18 Aug 2022 • Quanshi Zhang, Xu Cheng, Yilan Chen, Zhefan Rao
This paper provides a new perspective to explain the success of knowledge distillation, i.e., quantifying knowledge points encoded in intermediate layers of a DNN for classification, based on information theory.
no code implementations • 24 Jul 2022 • Quanshi Zhang, Xin Wang, Jie Ren, Xu Cheng, Shuyun Lin, Yisen Wang, Xiangming Zhu
This paper summarizes the common mechanism shared by twelve previous transferability-boosting methods in a unified view, i.e., these methods all reduce game-theoretic interactions between regional adversarial perturbations.
no code implementations • 30 May 2022 • Zhanpeng Zhou, Wen Shen, Huixin Chen, Ling Tang, Quanshi Zhang
In this paper, we prove the effects of the BN operation on the back-propagation of the first and second derivatives of the loss.
no code implementations • 30 May 2022 • Xu Cheng, Hao Zhang, Yue Xin, Wen Shen, Jie Ren, Quanshi Zhang
We also prove that adversarial training tends to strengthen the influence of unconfident input samples with large gradient norms in an exponential manner.
1 code implementation • 14 May 2022 • Jiexing Qi, Jingyao Tang, Ziwei He, Xiangpeng Wan, Yu Cheng, Chenghu Zhou, Xinbing Wang, Quanshi Zhang, Zhouhan Lin
Our model can incorporate almost all types of existing relations in the literature, and in addition, we propose introducing co-reference relations for the multi-turn scenario.
Ranked #1 on Dialogue State Tracking on CoSQL
1 code implementation • 4 May 2022 • Jie Ren, Mingjie Li, Meng Zhou, Shih-Han Chan, Quanshi Zhang
Based on the proposed metrics, we analyze two typical phenomena of the change of the transformation complexity during the training process, and explore the ceiling of a DNN's complexity.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks has become a popular paradigm.
no code implementations • 2 Dec 2021 • Dongrui Liu, Shaobo Wang, Jie Ren, Kangrui Wang, Sheng Yin, Huiqi Deng, Quanshi Zhang
In this paper, we focus on a typical two-phase phenomenon in the learning of multi-layer perceptrons (MLPs), and we aim to explain the reason for the decrease of feature diversity in the first phase.
1 code implementation • NeurIPS 2021 • Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang
This paper provides a unified view to explain different adversarial attacks and defense methods, i.e., the view of multi-order interactions between input variables of DNNs.
2 code implementations • ICLR 2022 • Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang
This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs.
1 code implementation • CVPR 2023 • Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang
This paper aims to illustrate the concept-emerging phenomenon in a trained DNN.
no code implementations • NeurIPS 2021 • Mingjie Li, Shaobo Wang, Quanshi Zhang
This paper proposes a method to visualize the discrimination power of intermediate-layer visual patterns encoded by a DNN.
no code implementations • NeurIPS 2021 • Wen Shen, Qihan Ren, Dongrui Liu, Quanshi Zhang
In this paper, we evaluate the quality of knowledge representations encoded in deep neural networks (DNNs) for 3D point cloud processing.
no code implementations • 29 Sep 2021 • Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Xin Jin, Quanshi Zhang
This paper proposes a hypothesis to analyze the underlying reason for the cognitive difficulty of an image from two perspectives, i.e., a cognitive image usually makes a DNN strongly activated by cognitive concepts, and discarding massive non-cognitive concepts may also help the DNN focus on cognitive concepts.
no code implementations • 29 Sep 2021 • Lu Chen, Renjie Chen, Hang Guo, Yuan Luo, Quanshi Zhang, Yisen Wang
Adversarial examples have attracted significant attention over the years, yet a sufficient understanding of them is still lacking, especially when analyzing their performance in combination with adversarial training.
no code implementations • 29 Sep 2021 • Jie Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang
In the computation of Shapley values, people usually set an input variable to its baseline value to represent the absence of this variable.
1 code implementation • 22 Sep 2021 • Yuxiang Wu, Shang Wu, Xin Wang, Chengtian Lang, Quanshi Zhang, Quan Wen, Tianqi Xu
Second, 2-dimensional neuronal regions are fused into 3-dimensional neuron entities.
no code implementations • ICCV 2021 • Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, Quanshi Zhang
This paper aims to explain adversarial attacks in terms of how adversarial perturbations contribute to the attacking task.
no code implementations • 31 Jul 2021 • Xu Cheng, Xin Wang, Haotian Xue, Zhengyang Liang, Quanshi Zhang
This paper proposes a hypothesis for the aesthetic appreciation that aesthetic images make a neural network strengthen salient concepts and discard inessential concepts.
no code implementations • 16 Jul 2021 • Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang
This workshop takes special interest in theoretical foundations, limitations, and new application trends within the scope of XAI.
1 code implementation • 9 Jul 2021 • Wen Shen, Zhihua Wei, Shikun Huang, BinBin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang
The reasonable definition of semantic interpretability presents the core challenge in explainable AI.
no code implementations • 21 Jun 2021 • Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang
In this paper, we rethink how a DNN encodes visual concepts of different complexities from a new perspective, i.e., the game-theoretic multi-order interactions between pixels in an image.
1 code implementation • 22 May 2021 • Jie Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang
Masking some input variables of a deep neural network (DNN) and computing output changes on the masked input sample represent a typical way to compute attributions of input variables in the sample.
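A minimal sketch of this masking-and-differencing scheme in Python, assuming a generic scalar scoring function and per-variable baseline values (the names `model_score` and `baseline` are illustrative, not the paper's code):

```python
import numpy as np

def occlusion_attributions(x, baseline, model_score):
    """Attribute model_score(x) to each input variable by masking.

    x, baseline : 1-D arrays of equal length; baseline[i] represents the
                  "absence" of variable i, as in Shapley-style masking.
    model_score : callable mapping an input array to a scalar output.
    """
    full = model_score(x)
    attributions = np.empty(len(x))
    for i in range(len(x)):
        masked = x.copy()
        masked[i] = baseline[i]                        # set variable i to its baseline
        attributions[i] = full - model_score(masked)   # output change = attribution
    return attributions

# Toy usage: on a linear model with a zero baseline, each attribution
# recovers the variable's own contribution w[i] * x[i].
w = np.array([1.0, -2.0, 0.5])
score = lambda z: float(w @ z)
print(occlusion_attributions(np.array([3.0, 1.0, 2.0]), np.zeros(3), score))  # [ 3. -2.  1.]
```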
no code implementations • ICLR 2021 • Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang
We discover and prove the negative correlation between the adversarial transferability and the interaction inside adversarial perturbations.
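Schematically, and only as a hedged reading of this correlation ($\lambda$ and the pairwise interaction $I_{ij}(\delta)$ are assumed notation, not the paper's exact objective), the finding suggests attacks of the form

$$\max_{\delta}\;L(x+\delta)\;-\;\lambda\,\mathbb{E}_{i,j}\big[I_{ij}(\delta)\big],$$

which prefer perturbations with weaker internal interactions and should therefore, per the negative correlation, transfer better.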
no code implementations • ICLR 2021 • Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang
Experimental results on various DNNs and datasets have shown that the interaction loss can effectively improve the utility of dropout and boost the performance of DNNs.
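A rough sketch of how such an interaction loss could be attached to training, where the weight $\lambda$ and the pairwise-interaction estimator are assumptions rather than the paper's specification:

$$\mathcal{L}\;=\;\mathcal{L}_{\text{task}}\;+\;\lambda\,\mathbb{E}_{i,j}\big[\,|I(i,j)|\,\big],$$

i.e., the task loss is regularized toward weak pairwise interactions, mimicking the suppression effect attributed to dropout.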
no code implementations • 1 Jan 2021 • Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang
This paper aims to define, visualize, and analyze the feature complexity that is learned by a DNN.
no code implementations • 28 Oct 2020 • Hao Zhang, Xu Cheng, Yiting Chen, Quanshi Zhang
In this study, we define interaction components of different orders between two input variables based on game theory.
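Modulo notation, the order-$m$ interaction component between variables $i$ and $j$ used in this line of work is

$$I^{(m)}(i,j)=\mathbb{E}_{S\subseteq N\setminus\{i,j\},\,|S|=m}\big[\Delta v(i,j,S)\big],\qquad \Delta v(i,j,S)=v(S\cup\{i,j\})-v(S\cup\{i\})-v(S\cup\{j\})+v(S),$$

where the order $m$, the size of the context $S$, controls the contextual complexity of the measured interaction.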
no code implementations • 10 Oct 2020 • Hao Zhang, Yichen Xie, Longjie Zheng, Die Zhang, Quanshi Zhang
In this paper, we define and quantify the significance of interactions among multiple input variables of the DNN.
no code implementations • 24 Sep 2020 • Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, Quanshi Zhang
This paper aims to understand and improve the utility of the dropout operation from the perspective of game-theoretic interactions.
no code implementations • 11 Sep 2020 • Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang
Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but rather improves robustness, and that both weight inheritance from the lottery ticket and adversarial training improve model robustness in network pruning.
no code implementations • 29 Jun 2020 • Die Zhang, Huilin Zhou, Hao Zhang, Xiaoyi Bao, Da Huo, Ruizhao Chen, Xu Cheng, Mengyue Wu, Quanshi Zhang
This paper proposes a method to disentangle and quantify interactions among words that are encoded inside a DNN for natural language processing.
1 code implementation • 29 Jun 2020 • Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang
This paper aims to define, quantify, and analyze the feature complexity that is learned by a DNN.
no code implementations • 21 Jun 2020 • Hao Zhang, Yiting Chen, Haotian Ma, Xu Cheng, Qihan Ren, Liyao Xiang, Jie Shi, Quanshi Zhang
Compared to a traditional neural network, the RENN uses d-ary vectors/tensors as features, in which each element is a d-ary number.
no code implementations • 18 Mar 2020 • Hao Zhang, Yi-Ting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang
We propose a method to revise the neural network to construct the quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.
no code implementations • CVPR 2020 • Xu Cheng, Zhefan Rao, Yilan Chen, Quanshi Zhang
In contrast, in the scenario of learning from raw data, the DNN learns visual concepts sequentially.
no code implementations • 18 Dec 2019 • Shuang Zhang, Liyao Xiang, CongCong Li, YiXuan Wang, Quanshi Zhang, Wei Wang, Bo Li
Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market.
no code implementations • 20 Nov 2019 • Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang
This paper proposes a set of criteria to evaluate the objectivity of explanation methods for neural networks, which is crucial for the development of explainable AI yet presents significant challenges.
1 code implementation • CVPR 2021 • Wen Shen, Zhihua Wei, Shikun Huang, BinBin Zhang, Panyue Chen, Ping Zhao, Quanshi Zhang
In this paper, we diagnose deep neural networks for 3D point cloud processing to explore utilities of different intermediate-layer network architectures.
1 code implementation • ECCV 2020 • Wen Shen, BinBin Zhang, Shikun Huang, Zhihua Wei, Quanshi Zhang
This paper proposes a set of rules to revise various neural networks for 3D point cloud processing to rotation-equivariant quaternion neural networks (REQNNs).
no code implementations • ICLR 2020 • Ruofan Liang, Tianlin Li, Longfei Li, Jing Wang, Quanshi Zhang
As a generic tool, our method can be broadly used for different applications.
1 code implementation • 10 Jun 2019 • Haotian Ma, Hao Zhang, Fan Zhou, Yinqing Zhang, Quanshi Zhang
We define two types of entropy-based metrics, i.e., (1) the discarding of pixel-wise information during forward propagation, and (2) the uncertainty of the input reconstruction, to measure the input information contained by a specific layer from two perspectives.
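One way to instantiate the first metric, how much pixel-wise information a layer discards, is to search for the strongest input noise whose injection barely changes the layer's feature; the Gaussian differential entropy of that noise then quantifies the discarded information. The sketch below is a minimal illustration under this assumption (`feature_fn`, `tol`, and the search bounds are hypothetical choices, not the paper's algorithm):

```python
import numpy as np

def max_tolerated_noise(x, feature_fn, tol=0.05, n_samples=32, seed=0):
    """Binary-search the largest Gaussian noise std whose injection keeps the
    relative L2 change of the layer feature below `tol`; a larger tolerated
    std indicates that the layer discards more pixel-wise information."""
    rng = np.random.default_rng(seed)
    f0 = feature_fn(x)
    lo, hi = 0.0, 1.0
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        deltas = rng.normal(0.0, mid, size=(n_samples,) + x.shape)
        change = np.mean([np.linalg.norm(feature_fn(x + d) - f0) for d in deltas])
        if change / (np.linalg.norm(f0) + 1e-12) < tol:
            lo = mid   # feature is robust to this noise level; push higher
        else:
            hi = mid   # feature changed too much; back off
    # Differential entropy (per pixel) of the tolerated Gaussian noise.
    entropy = 0.5 * np.log(2 * np.pi * np.e * max(lo, 1e-12) ** 2)
    return lo, entropy
```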
1 code implementation • ICLR 2020 • Liyao Xiang, Haotian Ma, Hao Zhang, Yifan Zhang, Jie Ren, Quanshi Zhang
Previous studies have found that an adversary attacker can often infer unintended input information from intermediate-layer features.
no code implementations • 25 Jan 2019 • Quanshi Zhang, Lixin Fan, Bolei Zhou
This is the Proceedings of the AAAI 2019 Workshop on Network Interpretability for Deep Learning.
no code implementations • 21 Jan 2019 • Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu
This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.
no code implementations • 21 Jan 2019 • Quanshi Zhang, Yu Yang, Ying Nian Wu
This paper presents an unsupervised method to learn a neural network, namely an explainer, to interpret a pre-trained convolutional neural network (CNN), i.e., the explainer uses interpretable visual concepts to explain features in middle conv-layers of a CNN.
no code implementations • 8 Jan 2019 • Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang
In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network.
no code implementations • 8 Jan 2019 • Quanshi Zhang, Xin Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu
This paper proposes a generic method to learn interpretable convolutional filters in a deep convolutional neural network (CNN) for object classification, where each interpretable filter encodes features of a specific object part.
no code implementations • 18 Dec 2018 • Quanshi Zhang, Xin Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu
This paper introduces a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside conv-layers of a pre-trained CNN.
no code implementations • ICCV 2019 • Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang
This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.
no code implementations • 18 Dec 2018 • Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
The AOG associates each object part with certain neural units in feature maps of conv-layers.
no code implementations • 18 May 2018 • Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu
Given feature maps of a certain conv-layer of the CNN, the explainer performs like an auto-encoder, which first disentangles the feature maps into object-part features and then inverts object-part features back to features of higher conv-layers of the CNN.
1 code implementation • 2 Feb 2018 • Quanshi Zhang, Song-Chun Zhu
This paper reviews recent studies in understanding neural-network representations and learning neural networks with interpretable/disentangled middle-layer representations.
no code implementations • CVPR 2019 • Quanshi Zhang, Yu Yang, Haotian Ma, Ying Nian Wu
We propose to learn a decision tree, which clarifies the specific reason for each prediction made by the CNN at the semantic level.
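The paper's tree is organized over semantic object-part contributions; purely as a loose, hypothetical illustration of the general idea of explaining a CNN's predictions with a tree, one can fit a shallow decision tree on intermediate-layer features (sklearn API, synthetic stand-in data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical stand-ins: rows are pooled CNN intermediate-layer activations,
# and labels are the CNN's predicted classes for the same inputs.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))
labels = (features[:, 0] + features[:, 3] > 0).astype(int)

# A shallow tree whose decision paths read as coarse "reasons" per prediction.
tree = DecisionTreeClassifier(max_depth=3).fit(features, labels)
print(export_text(tree, feature_names=[f"unit_{i}" for i in range(8)]))
```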
no code implementations • 29 Oct 2017 • Quanshi Zhang, Wenguan Wang, Song-Chun Zhu
We aim to discover representation flaws caused by potential dataset bias.
2 code implementations • CVPR 2018 • Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
Instead, the interpretable CNN automatically associates each filter in a high conv-layer with an object part during the learning process.
Ranked #1 on single category classification on ILSVRC Part
no code implementations • 13 Aug 2017 • Quanshi Zhang, Ying Nian Wu, Hao Zhang, Song-Chun Zhu
The loss is defined for nodes in all layers of the AOG, including the generative loss (measuring the likelihood of the images) and the discriminative loss (measuring the fitness to human answers).
no code implementations • 13 Aug 2017 • Quanshi Zhang, Xuan Song, Ryosuke Shibasaki
In this study, we formulate the concept of "mining maximal-size frequent subgraphs" in the challenging domain of visual data (images and videos).
no code implementations • 5 Aug 2017 • Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu
Considering that each filter in a conv-layer of a pre-trained CNN usually represents a mixture of object parts, we propose a simple yet efficient method to automatically disentangle different part patterns from each filter and construct an explanatory graph.
no code implementations • 5 Aug 2017 • Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Redmonds, Ying Nian Wu, Song-Chun Zhu
In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals.
no code implementations • CVPR 2017 • Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
We use an active human-computer communication to incrementally grow such an AOG on the pre-trained CNN as follows.
no code implementations • 14 Nov 2016 • Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu
This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding.
no code implementations • ICCV 2015 • Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu
This paper reformulates the theory of graph mining on the technical basis of graph matching, and extends its scope of applications to computer vision.
no code implementations • CVPR 2014 • Quanshi Zhang, Xuan Song, Xiaowei Shao, Huijing Zhao, Ryosuke Shibasaki
3D reconstruction from a single image is a classical problem in computer vision.
no code implementations • CVPR 2014 • Quanshi Zhang, Xuan Song, Xiaowei Shao, Huijing Zhao, Ryosuke Shibasaki
Graph matching and graph mining are two typical areas in artificial intelligence.
no code implementations • CVPR 2013 • Quanshi Zhang, Xuan Song, Xiaowei Shao, Ryosuke Shibasaki, Huijing Zhao
We design a graphical model that uses object edges to represent object structures, and this paper aims to incrementally learn this category model from one labeled object and a number of casually captured scenes.