Search Results for author: Mengqi Xue

Found 14 papers, 10 papers with code

Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation

1 code implementation CVPR 2023 Tianli Zhang, Mengqi Xue, Jiangtao Zhang, Haofei Zhang, Yu Wang, Lechao Cheng, Jie Song, Mingli Song

Most existing online knowledge distillation (OKD) techniques typically require sophisticated modules to produce diverse knowledge for improving students' generalization ability.

Knowledge Distillation

Schema Inference for Interpretable Image Classification

1 code implementation 12 Mar 2023 Haofei Zhang, Mengqi Xue, Xiaokang Liu, KaiXuan Chen, Jie Song, Mingli Song

In this paper, we study a novel inference paradigm, termed as schema inference, that learns to deductively infer the explainable predictions by rebuilding the prior deep neural network (DNN) forwarding scheme, guided by the prevalent philosophical cognitive concept of schema.

Classification Graph Matching +1

Jointly Complementary&Competitive Influence Maximization with Concurrent Ally-Boosting and Rival-Preventing

no code implementations 19 Feb 2023 Qihao Shi, Wenjie Tian, Wujian Yang, Mengqi Xue, Can Wang, Minghui Wu

In this paper, we propose a new influence spread model, namely, Complementary&Competitive Independent Cascade (C$^2$IC) model.

Blocking
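The C$^2$IC model above extends the classic Independent Cascade (IC) model with complementary and competitive cascades. The base IC process is easy to sketch; the code below is a generic textbook illustration of a single-cascade Monte-Carlo run, not the paper's C$^2$IC implementation, and the graph encoding and parameter names are my own assumptions.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """One Monte-Carlo run of the classic Independent Cascade model.

    graph: dict mapping each node to a list of its out-neighbours.
    seeds: initially active nodes.
    p: uniform activation probability per edge (a simplifying assumption;
       IC in general allows a per-edge probability).
    Returns the final set of active nodes.
    """
    rng = rng or random.Random(0)
    active = set(seeds)       # nodes activated so far
    frontier = list(seeds)    # nodes that get one chance to activate neighbours
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                # each newly active node tries each inactive neighbour once
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active
```

Averaging `len(independent_cascade(...))` over many runs estimates the expected influence spread of a seed set, which is the quantity influence-maximization methods optimize.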

Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks

1 code implementation ICCV 2023 Qihan Huang, Mengqi Xue, Wenqi Huang, Haofei Zhang, Jie Song, Yongcheng Jing, Mingli Song

Part-prototype networks (e.g., ProtoPNet, ProtoTree, and ProtoPool) have attracted broad research interest for their intrinsic interpretability and comparable accuracy to non-interpretable counterparts.

A Survey of Neural Trees

1 code implementation 7 Sep 2022 Haoling Li, Jie Song, Mengqi Xue, Haofei Zhang, Jingwen Ye, Lechao Cheng, Mingli Song

This survey aims to present a comprehensive review of NTs and attempts to identify how they enhance the model interpretability.

ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition

1 code implementation 22 Aug 2022 Mengqi Xue, Qihan Huang, Haofei Zhang, Lechao Cheng, Jie Song, Minghui Wu, Mingli Song

The global prototypes are adopted to provide the global view of objects to guide local prototypes to concentrate on the foreground while eliminating the influence of the background.

Decision Making Explainable artificial intelligence +1

Meta-attention for ViT-backed Continual Learning

1 code implementation CVPR 2022 Mengqi Xue, Haofei Zhang, Jie Song, Mingli Song

Continual learning is a longstanding research topic due to its crucial role in tackling continually arriving tasks.

Continual Learning

Knowledge Amalgamation for Object Detection with Transformers

1 code implementation 7 Mar 2022 Haofei Zhang, Feng Mao, Mengqi Xue, Gongfan Fang, Zunlei Feng, Jie Song, Mingli Song

Moreover, the transformer-based students excel in learning amalgamated knowledge, as they have mastered heterogeneous detection tasks rapidly and achieved superior or at least comparable performance to those of the teachers in their specializations.

Object object-detection +1

Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training

1 code implementation CVPR 2022 Haofei Zhang, Jiarui Duan, Mengqi Xue, Jie Song, Li Sun, Mingli Song

Recently, vision Transformers (ViTs) are developing rapidly and starting to challenge the domination of convolutional neural networks (CNNs) in the realm of computer vision (CV).

A Survey of Deep Learning for Low-Shot Object Detection

no code implementations 6 Dec 2021 Qihan Huang, Haofei Zhang, Mengqi Xue, Jie Song, Mingli Song

Although few-shot learning and zero-shot learning have been extensively explored in the field of image classification, it is indispensable to design new methods for object detection in the data-scarce scenario since object detection has an additional challenging localization task.

Few-Shot Learning Few-Shot Object Detection +6

Tree-Like Decision Distillation

no code implementations CVPR 2021 Jie Song, Haofei Zhang, Xinchao Wang, Mengqi Xue, Ying Chen, Li Sun, DaCheng Tao, Mingli Song

Knowledge distillation pursues a diminutive yet well-behaved student network by harnessing the knowledge learned by a cumbersome teacher model.

Decision Making Knowledge Distillation
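Several entries above build on knowledge distillation, where a compact student mimics a large teacher's softened outputs. As a point of reference only (a generic Hinton-style sketch, not the method of any paper listed here), the standard temperature-scaled distillation term can be written in a few lines of pure Python; the function and parameter names are illustrative.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: higher T yields softer probabilities."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Distillation term: KL(teacher || student) at temperature T.

    The T^2 factor compensates for the 1/T^2 scaling of the soft-target
    gradients (a standard convention in the distillation literature).
    """
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student's softened predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

In training, this term is typically mixed with the ordinary cross-entropy on ground-truth labels; it vanishes exactly when the student reproduces the teacher's logits.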

KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation

1 code implementation 10 May 2021 Mengqi Xue, Jie Song, Xinchao Wang, Ying Chen, Xingen Wang, Mingli Song

Knowledge distillation (KD) has recently emerged as an efficacious scheme for learning compact deep neural networks (DNNs).

Knowledge Distillation Multi-class Classification

Stability of Multi-Dimensional Switched Systems with an Application to Open Multi-Agent Systems

no code implementations 2 Jan 2020 Mengqi Xue, Yang Tang, Wei Ren, Feng Qian

It shows that through a proper transformation, the seeking of the (practical) consensus performance of the open MAS with disconnected digraphs boils down to that of the (practical) stability property of an $M^3D$ system with unstable subsystems.

Customizing Student Networks From Heterogeneous Teachers via Adaptive Knowledge Amalgamation

2 code implementations ICCV 2019 Chengchao Shen, Mengqi Xue, Xinchao Wang, Jie Song, Li Sun, Mingli Song

To this end, we introduce a dual-step strategy that first extracts the task-specific knowledge from the heterogeneous teachers sharing the same sub-task, and then amalgamates the extracted knowledge to build the student network.
