Search Results for author: Mingbao Lin

Found 55 papers, 46 papers with code

Boosting the Cross-Architecture Generalization of Dataset Distillation through an Empirical Study

1 code implementation · 9 Dec 2023 · Lirui Zhao, Yuxin Zhang, Mingbao Lin, Fei Chao, Rongrong Ji

The poor cross-architecture generalization of dataset distillation greatly weakens its practical significance.

Inductive Bias

I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization

1 code implementation · 16 Nov 2023 · Yunshan Zhong, Jiawei Hu, Mingbao Lin, Mengzhao Chen, Rongrong Ji

Despite the scalable performance of vision transformers (ViTs), their dense computational costs (training & inference) undermine their position in industrial applications.

Quantization

Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs

1 code implementation · 13 Oct 2023 · Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji

Inspired by Dynamic Sparse Training, DSnoT minimizes the reconstruction error between the dense and sparse LLMs by iteratively performing weight pruning-and-growing on top of sparse LLMs.

Network Pruning
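
To make the pruning-and-growing idea concrete, here is a minimal single-layer sketch. It is not the paper's DSnoT implementation: the function name, the magnitude-based grow/prune criteria, and the greedy accept rule are all simplifications assumed for illustration.

```python
import torch

def dsnot_sketch(W, X, sparsity=0.5, iters=50):
    """Training-free pruning-and-growing on one linear layer.

    W: (out, in) dense weights; X: (n, in) calibration activations.
    Start from a magnitude mask, then repeatedly swap one pruned weight
    in (grow) and one kept weight out (prune) while the swap reduces the
    dense-vs-sparse reconstruction error on the calibration data.
    """
    k = int(W.numel() * (1 - sparsity))
    mask = torch.zeros_like(W)
    mask.view(-1)[W.abs().flatten().topk(k).indices] = 1.0

    def recon_err(m):
        return ((X @ W.t() - X @ (W * m).t()) ** 2).sum()

    err = recon_err(mask)
    for _ in range(iters):
        grow = (W.abs() * (1 - mask)).flatten().argmax()          # strongest pruned weight
        drop = (W.abs().flatten() + (1 - mask.flatten()) * 1e9).argmin()  # weakest kept weight
        trial = mask.clone().view(-1)
        trial[grow], trial[drop] = 1.0, 0.0
        new_err = recon_err(trial.view_as(mask))
        if new_err >= err:        # stop once a swap no longer helps
            break
        mask, err = trial.view_as(mask), new_err
    return mask

mask = dsnot_sketch(torch.randn(32, 64), torch.randn(128, 64))
```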

MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation

1 code implementation · 16 Aug 2023 · Junru Lu, Siyu An, Mingbao Lin, Gabriele Pergola, Yulan He, Di Yin, Xing Sun, Yunsheng Wu

We propose MemoChat, a pipeline for refining instructions that enables large language models (LLMs) to effectively employ self-composed memos for maintaining consistent long-range open-domain conversations.

Memorization Retrieval

Spatial Re-parameterization for N:M Sparsity

no code implementations · 9 Jun 2023 · Yuxin Zhang, Mingbao Lin, Yunshan Zhong, Mengzhao Chen, Fei Chao, Rongrong Ji

This paper presents a Spatial Re-parameterization (SpRe) method for the N:M sparsity in CNNs.

DiffRate: Differentiable Compression Rate for Efficient Vision Transformers

1 code implementation · ICCV 2023 · Mengzhao Chen, Wenqi Shao, Peng Xu, Mingbao Lin, Kaipeng Zhang, Fei Chao, Rongrong Ji, Yu Qiao, Ping Luo

Token compression aims to speed up large-scale vision transformers (e.g., ViTs) by pruning (dropping) or merging tokens.
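
As a rough illustration of token pruning in general (not DiffRate's differentiable compression rate), the sketch below keeps the top-k tokens by an assumed importance score, e.g., attention received from the CLS token.

```python
import torch

def prune_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the top-k non-CLS tokens ranked by a per-token importance
    score; the CLS token at position 0 is always retained."""
    B, L, D = tokens.shape
    k = int((L - 1) * keep_ratio)
    idx = scores[:, 1:].topk(k, dim=1).indices + 1           # skip CLS at position 0
    idx = torch.cat([torch.zeros(B, 1, dtype=torch.long), idx], dim=1)
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, D))

x = torch.randn(2, 197, 768)      # ViT-style sequence: CLS + 196 patch tokens
s = torch.rand(2, 197)            # assumed per-token importance scores
print(prune_tokens(x, s).shape)   # torch.Size([2, 99, 768])
```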

Bi-ViT: Pushing the Limit of Vision Transformer Quantization

no code implementations · 21 May 2023 · Yanjing Li, Sheng Xu, Mingbao Lin, Xianbin Cao, Chuanjian Liu, Xiao Sun, Baochang Zhang

Vision transformers (ViTs) quantization offers a promising prospect to facilitate deploying large pre-trained networks on resource-limited devices.

Binarization Quantization

MultiQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization

1 code implementation · 14 May 2023 · Yunshan Zhong, Mingbao Lin, Yuyao Zhou, Mengzhao Chen, Yuxin Zhang, Fei Chao, Rongrong Ji

However, in this paper, we investigate existing methods and observe a significant accumulation of quantization errors caused by frequent bit-width switching of weights and activations, leading to limited performance.

Quantization

Distribution-Flexible Subset Quantization for Post-Quantizing Super-Resolution Networks

1 code implementation · 10 May 2023 · Yunshan Zhong, Mingbao Lin, Jingjing Xie, Yuxin Zhang, Fei Chao, Rongrong Ji

Compared to the common iterative exhaustive search algorithm, our strategy avoids the enumeration of all possible combinations in the universal set, reducing the time complexity from exponential to linear.

Quantization Super-Resolution

Q-DETR: An Efficient Low-Bit Quantized Detection Transformer

1 code implementation · CVPR 2023 · Sheng Xu, Yanjing Li, Mingbao Lin, Peng Gao, Guodong Guo, Jinhu Lu, Baochang Zhang

At the upper level, we introduce a new foreground-aware query matching scheme to effectively transfer the teacher information to distillation-desired features to minimize the conditional information entropy.

Object Detection +1

Bi-directional Masks for Efficient N:M Sparse Training

1 code implementation · 13 Feb 2023 · Yuxin Zhang, Yiting Luo, Mingbao Lin, Yunshan Zhong, Jingjing Xie, Fei Chao, Rongrong Ji

We focus on addressing the dense backward-propagation issue that limits the training efficiency of N:M fine-grained sparsity, which preserves at most N out of M consecutive weights and achieves practical speedups supported by the N:M sparse tensor core.
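
For readers unfamiliar with the N:M constraint, a minimal magnitude-based mask looks as follows; the paper's contribution, bi-directional masks for efficient backward propagation, is not reproduced here.

```python
import torch

def nm_mask(W, N=2, M=4):
    """Keep the N largest-magnitude weights in every group of M
    consecutive weights along the input dimension of a (out, in)
    weight matrix; in_dim must be divisible by M."""
    out_ch, in_ch = W.shape
    groups = W.abs().view(out_ch, in_ch // M, M)
    idx = groups.topk(N, dim=-1).indices
    mask = torch.zeros_like(groups)
    mask.scatter_(-1, idx, 1.0)
    return mask.view(out_ch, in_ch)

W = torch.randn(8, 16)
mask = nm_mask(W)                                    # 2:4 sparsity
assert mask.view(8, -1, 4).sum(-1).eq(2).all()       # exactly 2 kept per group of 4
```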

Real-Time Image Demoireing on Mobile Devices

1 code implementation · 4 Feb 2023 · Yuxin Zhang, Mingbao Lin, Xunchao Li, Han Liu, Guozhi Wang, Fei Chao, Shuai Ren, Yafei Wen, Xiaoxin Chen, Rongrong Ji

In this paper, we launch the first study on accelerating demoireing networks and propose a dynamic demoireing acceleration method (DDA) towards a real-time deployment on mobile devices.

Resilient Binary Neural Network

1 code implementation · 2 Feb 2023 · Sheng Xu, Yanjing Li, Teli Ma, Mingbao Lin, Hao Dong, Baochang Zhang, Peng Gao, Jinhu Lv

In this paper, we introduce a Resilient Binary Neural Network (ReBNN) to mitigate frequent oscillation for better BNN training.

Low-Rank Winograd Transformation for 3D Convolutional Neural Networks

no code implementations · 26 Jan 2023 · Ziran Qin, Mingbao Lin, Weiyao Lin

This paper focuses on the Winograd transformation in 3D convolutional neural networks (CNNs), which are more over-parameterized than their 2D counterparts.

Discriminator-Cooperated Feature Map Distillation for GAN Compression

1 code implementation · CVPR 2023 · Tie Hu, Mingbao Lin, Lizhou You, Fei Chao, Rongrong Ji

In contrast to conventional pixel-to-pixel matching in feature map distillation, our DCD uses the teacher discriminator as a transformation to drive intermediate results of the student generator to be perceptually close to the corresponding outputs of the teacher generator.

Image Generation Knowledge Distillation
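
A toy sketch of the discriminator-cooperated criterion under assumed toy networks: both generators' intermediate results pass through a frozen teacher discriminator, and the distillation loss matches its responses rather than raw pixels.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained teacher discriminator; frozen so it
# acts purely as a fixed perceptual transformation.
disc = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.LeakyReLU(0.2),
                     nn.Conv2d(16, 32, 4, 2, 1))
for p in disc.parameters():
    p.requires_grad_(False)

def dcd_loss(student_out, teacher_out):
    """Match discriminator responses instead of pixels."""
    return nn.functional.l1_loss(disc(student_out), disc(teacher_out))

s = torch.randn(2, 3, 64, 64, requires_grad=True)   # student generator output
t = torch.randn(2, 3, 64, 64)                       # teacher generator output
dcd_loss(s, t).backward()                           # gradients reach the student only
```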

SMMix: Self-Motivated Image Mixing for Vision Transformers

1 code implementation · ICCV 2023 · Mengzhao Chen, Mingbao Lin, Zhihang Lin, Yuxin Zhang, Fei Chao, Rongrong Ji

Due to the subtle designs of the self-motivated paradigm, our SMMix is significant in its smaller training overhead and better performance than other CutMix variants.

Exploring Content Relationships for Distilling Efficient GANs

1 code implementation · 21 Dec 2022 · Lizhou You, Mingbao Lin, Tie Hu, Fei Chao, Rongrong Ji

This paper proposes content relationship distillation (CRD) to tackle over-parameterized generative adversarial networks (GANs) for serviceability on cutting-edge devices.

LAB-Net: LAB Color-Space Oriented Lightweight Network for Shadow Removal

1 code implementation · 27 Aug 2022 · Hong Yang, Gongrui Nan, Mingbao Lin, Fei Chao, Yunhang Shen, Ke Li, Rongrong Ji

Finally, the LSA modules are further developed to fully use the prior information in non-shadow regions to cleanse the shadow regions.

Shadow Removal

Knowledge Condensation Distillation

2 code implementations · 12 Jul 2022 · Chenxin Li, Mingbao Lin, Zhiyuan Ding, Nie Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Liujuan Cao

Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher network to strengthen a smaller student.

Knowledge Distillation

Learning Best Combination for Efficient N:M Sparsity

1 code implementation · 14 Jun 2022 · Yuxin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, Rongrong Ji

In this paper, we show that the N:M learning can be naturally characterized as a combinatorial problem which searches for the best combination candidate within a finite collection.
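
The combinatorial view can be made concrete with a brute-force sketch: for one group of M weights there are C(M, N) keep-patterns, and one is selected by a score (plain magnitude here; the paper instead learns scores over this finite collection).

```python
import itertools
import torch

def best_combination(w_group, N=2):
    """Enumerate every C(M, N) keep-pattern for a group of M weights and
    return the mask preserving the most magnitude."""
    M = w_group.numel()
    best, best_mask = -1.0, None
    for keep in itertools.combinations(range(M), N):
        mask = torch.zeros(M)
        mask[list(keep)] = 1.0
        score = (w_group.abs() * mask).sum()
        if score > best:
            best, best_mask = score, mask
    return best_mask

print(best_combination(torch.tensor([0.3, -1.2, 0.05, 0.9])))  # tensor([0., 1., 0., 1.])
```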

Super Vision Transformer

1 code implementation · 23 May 2022 · Mingbao Lin, Mengzhao Chen, Yuxin Zhang, Chunhua Shen, Rongrong Ji, Liujuan Cao

Experimental results on ImageNet demonstrate that our SuperViT can considerably reduce the computational costs of ViT models while even improving performance.

Shadow-Aware Dynamic Convolution for Shadow Removal

2 code implementations · 10 May 2022 · Yimin Xu, Mingbao Lin, Hong Yang, Fei Chao, Rongrong Ji

Inspired by the fact that the color mapping of the non-shadow region is easier to learn, our SADC processes the non-shadow region with a lightweight convolution module in a computationally cheap manner and recovers the shadow region with a more complicated convolution module to ensure the quality of image reconstruction.

Image Reconstruction Shadow Removal
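
A toy module illustrating the routing idea (the paper's actual dynamic convolution modules differ): a cheap depthwise convolution serves the non-shadow region while a heavier branch serves the shadow region, blended by the shadow mask.

```python
import torch
import torch.nn as nn

class ShadowAwareConv(nn.Module):
    """Cheap path for the easy non-shadow region, heavy path for the
    shadow region, combined through the shadow mask."""
    def __init__(self, ch):
        super().__init__()
        self.light = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)   # depthwise, cheap
        self.heavy = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                   nn.ReLU(),
                                   nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x, mask):                  # mask: 1 inside shadow, 0 outside
        return mask * self.heavy(x) + (1 - mask) * self.light(x)

m = ShadowAwareConv(8)
print(m(torch.randn(1, 8, 32, 32), torch.rand(1, 1, 32, 32)).shape)
```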

End-to-End Zero-Shot HOI Detection via Vision and Language Knowledge Distillation

1 code implementation · 1 Apr 2022 · Mingrui Wu, Jiaxin Gu, Yunhang Shen, Mingbao Lin, Chao Chen, Xiaoshuai Sun

Extensive experiments on HICO-Det dataset demonstrate that our model discovers potential interactive pairs and enables the recognition of unseen HOIs.

Human-Object Interaction Detection Knowledge Distillation +4

SeqTR: A Simple yet Universal Network for Visual Grounding

3 code implementations · 30 Mar 2022 · Chaoyang Zhu, Yiyi Zhou, Yunhang Shen, Gen Luo, Xingjia Pan, Mingbao Lin, Chao Chen, Liujuan Cao, Xiaoshuai Sun, Rongrong Ji

In this paper, we propose a simple yet universal network termed SeqTR for visual grounding tasks, e.g., phrase localization, referring expression comprehension (REC) and segmentation (RES).

Referring Expression Referring Expression Comprehension +1

ARM: Any-Time Super-Resolution Method

1 code implementation · 21 Mar 2022 · Bohong Chen, Mingbao Lin, Kekai Sheng, Mengdan Zhang, Peixian Chen, Ke Li, Liujuan Cao, Rongrong Ji

To that effect, we construct an Edge-to-PSNR lookup table that maps the edge score of an image patch to the PSNR performance for each subnet, together with a set of computation costs for the subnets.

Image Super-Resolution
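
A hypothetical dispatch sketch of the lookup-table idea: `psnr_lut[s][b]` stands for the table-predicted PSNR of subnet s at edge-score bin b, and the subnet with the best PSNR-vs-cost tradeoff is chosen per patch. All names and numbers below are illustrative.

```python
import numpy as np

def pick_subnet(edge_score, psnr_lut, costs, tradeoff=0.01, bins=10):
    """Map the patch's edge score to a LUT bin, then choose the subnet
    maximizing predicted PSNR minus a cost penalty."""
    b = min(int(edge_score * bins), bins - 1)
    gains = [psnr_lut[s][b] - tradeoff * costs[s] for s in range(len(costs))]
    return int(np.argmax(gains))

lut = np.array([[28.0] * 10, [29.5] * 10, [30.2] * 10])   # 3 subnets x 10 bins (toy)
print(pick_subnet(0.7, lut, costs=[10, 40, 90]))          # -> 2 for this tradeoff
```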

CF-ViT: A General Coarse-to-Fine Method for Vision Transformer

1 code implementation · 8 Mar 2022 · Mengzhao Chen, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei Chao, Rongrong Ji

Our proposed CF-ViT is motivated by two important observations in modern ViT models: (1) The coarse-grained patch splitting can locate informative regions of an input image.

Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks

1 code implementation · 8 Mar 2022 · Yunshan Zhong, Mingbao Lin, Xunchao Li, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji

However, these methods suffer from severe performance degradation when quantizing the SR models to ultra-low precision (e.g., 2-bit and 3-bit) with the low-cost layer-wise quantizer.

Quantization Super-Resolution

Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters

1 code implementation · 15 Feb 2022 · Mingbao Lin, Liujuan Cao, Yuxin Zhang, Ling Shao, Chia-Wen Lin, Rongrong Ji

Then, we introduce a recommendation-based filter selection scheme where each filter recommends a group of its closest filters.

Image Classification Network Pruning

OptG: Optimizing Gradient-driven Criteria in Network Sparsity

1 code implementation · 30 Jan 2022 · Yuxin Zhang, Mingbao Lin, Mengzhao Chen, Fei Chao, Rongrong Ji

We prove that supermask training is to accumulate the criteria of gradient-driven sparsity for both removed and preserved weights, and it can partly solve the independence paradox.

IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization

1 code implementation · CVPR 2022 · Yunshan Zhong, Mingbao Lin, Gongrui Nan, Jianzhuang Liu, Baochang Zhang, Yonghong Tian, Rongrong Ji

In this paper, we observe an interesting phenomenon of intra-class heterogeneity in real data and show that existing methods fail to retain this property in their synthetic images, which causes a limited performance increase.

Quantization

Prioritized Subnet Sampling for Resource-Adaptive Supernet Training

1 code implementation · 12 Sep 2021 · Bohong Chen, Mingbao Lin, Rongrong Ji, Liujuan Cao

At the end of training, our PSS-Net retains the best subnet in each pool to enable a fast switch among high-quality subnets at inference when the available resources vary.

Fine-grained Data Distribution Alignment for Post-Training Quantization

1 code implementation · 9 Sep 2021 · Yunshan Zhong, Mingbao Lin, Mengzhao Chen, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji

While post-training quantization owes its popularity mostly to avoiding access to the original complete training dataset, its poor performance also stems from this scarcity of images.

Quantization

Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion

1 code implementation · 14 Jul 2021 · Mingbao Lin, Bohong Chen, Fei Chao, Rongrong Ji

Each filter in our DCFF is first assigned an inter-similarity distribution with a temperature parameter as a filter proxy, on top of which a fresh Kullback-Leibler divergence based dynamic-coded criterion is proposed to evaluate filter importance.

Image Classification
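
A loose reading of the proxy-and-criterion step in code; the exact form of DCFF's distributions and criterion may differ from this sketch.

```python
import torch
import torch.nn.functional as F

def dcff_importance(W, t=1.0):
    """Give each filter a temperature-softmax distribution over its
    similarity to all filters in the layer, then score each filter by
    the KL divergence of its distribution from the layer average."""
    flat = W.view(W.size(0), -1)
    dist = torch.cdist(flat, flat)              # pairwise Euclidean distances
    p = F.softmax(-dist / t, dim=1)             # per-filter similarity distribution
    mean = p.mean(dim=0, keepdim=True)
    return (p * (p / mean).log()).sum(dim=1)    # KL(p_i || p_mean) per filter

W = torch.randn(16, 3, 3, 3)                    # 16 conv filters
print(dcff_importance(W).shape)                 # torch.Size([16])
```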

1xN Pattern for Pruning Convolutional Neural Networks

1 code implementation · 31 May 2021 · Mingbao Lin, Yuxin Zhang, Yuchao Li, Bohong Chen, Fei Chao, Mengdi Wang, Shen Li, Yonghong Tian, Rongrong Ji

We also provide a workflow of filter rearrangement that first rearranges the weight matrix in the output channel dimension to derive more influential blocks for accuracy improvements and then applies similar rearrangement to the next-layer weights in the input channel dimension to ensure correct convolutional operations.

Network Pruning
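
The consistency requirement in that workflow is easy to verify in code: permuting one layer's output channels preserves the network function only if the same permutation is applied to the next layer's input channels. A minimal sketch, with L1-norm ordering assumed for illustration:

```python
import torch

def rearrange_for_blocks(W1, W2):
    """Reorder layer i's output channels (rows of W1) and apply the same
    permutation to layer i+1's input channels (columns of W2)."""
    order = W1.abs().sum(dim=1).argsort(descending=True)   # e.g. sort rows by L1 norm
    return W1[order], W2[:, order]

W1 = torch.randn(8, 4)     # layer i:   (out, in)
W2 = torch.randn(6, 8)     # layer i+1: (out, in)
x = torch.randn(4)
P1, P2 = rearrange_for_blocks(W1, W2)
# The composed function is unchanged by the paired rearrangement:
assert torch.allclose(W2 @ (W1 @ x), P2 @ (P1 @ x), atol=1e-6)
```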

Carrying out CNN Channel Pruning in a White Box

1 code implementation · 24 Apr 2021 · Yuxin Zhang, Mingbao Lin, Chia-Wen Lin, Jie Chen, Feiyue Huang, Yongjian Wu, Yonghong Tian, Rongrong Ji

Specifically, to model the contribution of each channel to differentiating categories, we develop a class-wise mask for each channel, implemented in a dynamic training manner w.r.t.

Image Classification

Lottery Jackpots Exist in Pre-trained Models

2 code implementations · 18 Apr 2021 · Yuxin Zhang, Mingbao Lin, Yunshan Zhong, Fei Chao, Rongrong Ji

Existing studies achieve the sparsity of neural networks via time-consuming weight training or complex searching on networks with expanded width, which greatly limits the applications of network pruning.

Network Pruning

Learnable Expansion-and-Compression Network for Few-shot Class-Incremental Learning

no code implementations · 6 Apr 2021 · Boyu Yang, Mingbao Lin, Binghao Liu, Mengying Fu, Chang Liu, Rongrong Ji, Qixiang Ye

By tentatively expanding network nodes, LEC-Net enlarges the representation capacity of features, alleviating the feature drift of the old network from the perspective of model regularization.

Few-Shot Class-Incremental Learning Incremental Learning

Distilling a Powerful Student Model via Online Knowledge Distillation

1 code implementation · 26 Mar 2021 · Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji

Besides, a self-distillation module is adopted to convert the feature map of deeper layers into a shallower one.

Knowledge Distillation

SiMaN: Sign-to-Magnitude Network Binarization

2 code implementations · 16 Feb 2021 · Mingbao Lin, Rongrong Ji, Zihan Xu, Baochang Zhang, Fei Chao, Chia-Wen Lin, Ling Shao

In this paper, we show that our weight binarization provides an analytical solution that encodes high-magnitude weights as +1s and the rest as 0s.

Binarization
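
Read literally, the abstract's analytical solution admits a very short sketch; the keep-ratio of one half and the per-row thresholding below are assumptions for illustration, not the paper's exact rule.

```python
import torch

def siman_binarize(w):
    """Encode roughly the top half of weights by magnitude (per output
    channel) as +1 and set the remainder to 0."""
    flat = w.view(w.size(0), -1)
    k = flat.size(1) // 2                          # assumed keep-ratio of 1/2
    kth = flat.size(1) - k + 1                     # k-th largest magnitude
    thresh = flat.abs().kthvalue(kth, dim=1, keepdim=True).values
    return (flat.abs() >= thresh).float().view_as(w)

w = torch.randn(4, 8)
print(siman_binarize(w))                           # entries in {0., 1.}
```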

Network Pruning using Adaptive Exemplar Filters

1 code implementation · 20 Jan 2021 · Mingbao Lin, Rongrong Ji, Shaojie Li, Yan Wang, Yongjian Wu, Feiyue Huang, Qixiang Ye

Inspired by the face recognition community, we run the message-passing algorithm Affinity Propagation on the weight matrices to obtain an adaptive number of exemplars, which then act as the preserved filters.

Face Recognition Network Pruning
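
The exemplar idea maps naturally onto scikit-learn's Affinity Propagation; this sketch is an assumed reconstruction, not the paper's code.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def exemplar_filters(W):
    """Cluster flattened conv filters with Affinity Propagation; the
    exemplar of each cluster is a filter to preserve, so the number of
    kept filters emerges adaptively instead of being preset."""
    flat = W.reshape(W.shape[0], -1)
    ap = AffinityPropagation(random_state=0).fit(flat)
    return np.sort(ap.cluster_centers_indices_)    # indices of preserved filters

W = np.random.randn(32, 3, 3, 3)                   # 32 conv filters
print(exemplar_filters(W))
```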

Fast Class-wise Updating for Online Hashing

no code implementations · 1 Dec 2020 · Mingbao Lin, Rongrong Ji, Xiaoshuai Sun, Baochang Zhang, Feiyue Huang, Yonghong Tian, DaCheng Tao

To achieve fast online adaptivity, a class-wise updating method is developed to decompose the binary code learning and alternately renew the hash functions in a class-wise fashion, which well alleviates the burden of large numbers of training batches.

Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation

1 code implementation · 17 Nov 2020 · Shaojie Li, Mingbao Lin, Yan Wang, Fei Chao, Ling Shao, Rongrong Ji

The latter simultaneously distills informative attention maps from both the generator and discriminator of a pre-trained model to the searched generator, effectively stabilizing the adversarial training of our light-weight model.

Translation

Rotated Binary Neural Network

2 code implementations · NeurIPS 2020 · Mingbao Lin, Rongrong Ji, Zihan Xu, Baochang Zhang, Yan Wang, Yongjian Wu, Feiyue Huang, Chia-Wen Lin

In this paper, for the first time, we explore the influence of angular bias on the quantization error and then introduce a Rotated Binary Neural Network (RBNN), which considers the angle alignment between the full-precision weight vector and its binarized version.

Binarization Quantization
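
The angular bias in question can be measured directly; this sketch computes the angle between a weight vector and its sign binarization, the quantity RBNN reduces via a learned rotation (the rotation itself is not reproduced here).

```python
import torch

def angular_bias(w):
    """Angle between a full-precision weight vector and sign(w)."""
    b = torch.sign(w)
    cos = torch.dot(w, b) / (w.norm() * b.norm())
    return torch.acos(cos.clamp(-1.0, 1.0))

w = torch.randn(256)
print(f"angular bias: {angular_bias(w):.3f} rad")
```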

HRank: Filter Pruning using High-Rank Feature Map

2 code implementations · CVPR 2020 · Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, Ling Shao

The principle behind our pruning is that low-rank feature maps contain less information, and thus pruned results can be easily reproduced.

Network Pruning
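
The principle translates into a short scoring sketch: estimate each filter's average feature-map rank on a calibration batch and treat low-rank filters as pruning candidates. The batch size and pruning count below are arbitrary.

```python
import torch

def average_feature_rank(feats):
    """feats: (B, C, H, W) feature maps; return the per-filter average
    matrix rank over the batch. Filters whose maps have consistently
    low rank carry less information."""
    B, C, H, W = feats.shape
    ranks = torch.linalg.matrix_rank(feats.reshape(B * C, H, W).float())
    return ranks.view(B, C).float().mean(dim=0)

feats = torch.randn(4, 16, 14, 14)                 # calibration activations
score = average_feature_rank(feats)
prune = score.argsort()[:4]                        # e.g. drop the 4 lowest-rank filters
print(prune)
```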

Filter Sketch for Network Pruning

1 code implementation · 23 Jan 2020 · Mingbao Lin, Liujuan Cao, Shaojie Li, Qixiang Ye, Yonghong Tian, Jianzhuang Liu, Qi Tian, Rongrong Ji

Our approach, referred to as FilterSketch, encodes the second-order information of pre-trained weights, which enables the representation capacity of pruned networks to be recovered with a simple fine-tuning procedure.

Network Pruning
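
As a loose stand-in for the second-order encoding (FilterSketch itself uses a streaming matrix sketch, not the truncated SVD assumed below), one can compress the weight matrix while approximately preserving its Gram matrix:

```python
import torch

def sketch_weights(W, k):
    """Compress a (out, in) weight matrix to k columns such that
    Ws @ Ws.T approximates W @ W.T (its second-order information)."""
    U, S, _ = torch.linalg.svd(W, full_matrices=False)
    return U[:, :k] * S[:k]                        # (out, k) sketch

W = torch.randn(64, 256)
Ws = sketch_weights(W, k=32)
err = (W @ W.t() - Ws @ Ws.t()).norm() / (W @ W.t()).norm()
print(f"Gram reconstruction error: {err:.3f}")
```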

Channel Pruning via Automatic Structure Search

1 code implementation · 23 Jan 2020 · Mingbao Lin, Rongrong Ji, Yuxin Zhang, Baochang Zhang, Yongjian Wu, Yonghong Tian

In this paper, we propose a new channel pruning method based on the artificial bee colony algorithm (ABC), dubbed ABCPruner, which aims to efficiently find the optimal pruned structure, i.e., the channel number in each layer, rather than selecting "important" channels as previous works do.

Hadamard Codebook Based Deep Hashing

no code implementations · 21 Oct 2019 · Shen Chen, Liujuan Cao, Mingbao Lin, Yan Wang, Xiaoshuai Sun, Chenglin Wu, Jingfei Qiu, Rongrong Ji

Specifically, we utilize an off-the-shelf algorithm to generate a binary Hadamard codebook to satisfy the requirement of bit independence and bit balance, which subsequently serves as the desired outputs of the hash functions learning.

Deep Hashing Image Retrieval
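
The bit-independence and bit-balance requirements are exactly what Hadamard rows provide, so a minimal codebook construction is a few lines (SciPy's `hadamard` requires the code length to be a power of two):

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_codebook(num_classes, code_len):
    """Rows of a Hadamard matrix are mutually orthogonal and balanced;
    assign one row per class as its target hash code."""
    H = hadamard(code_len)                          # code_len must be a power of 2
    assert num_classes <= code_len, "not enough distinct codewords"
    return H[:num_classes]                          # entries in {+1, -1}

print(hadamard_codebook(num_classes=10, code_len=32).shape)   # (10, 32)
```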

Supervised Online Hashing via Similarity Distribution Learning

no code implementations · 31 May 2019 · Mingbao Lin, Rongrong Ji, Shen Chen, Feng Zheng, Xiaoshuai Sun, Baochang Zhang, Liujuan Cao, Guodong Guo, Feiyue Huang

In this paper, we propose to model the similarity distributions between the input data and the hashing codes, upon which a novel supervised online hashing method, dubbed Similarity Distribution based Online Hashing (SDOH), is proposed to preserve the intrinsic semantic relationship in the produced Hamming space.

Retrieval

Hadamard Matrix Guided Online Hashing

1 code implementation · 11 May 2019 · Mingbao Lin, Rongrong Ji, Hong Liu, Xiaoshuai Sun, Shen Chen, Qi Tian

We then treat the learning of hash functions as a set of binary classification problems to fit the assigned target code.

Binary Classification
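
The set-of-binary-classifications view admits a direct sketch with one online linear classifier per bit; the random target codes below stand in for the Hadamard-assigned ones.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))                  # streaming features
T = rng.choice([-1, 1], size=(200, 16))             # assumed per-sample target codes

# One online classifier per hash bit, each fit to reproduce its bit
# of the assigned target code.
bit_clfs = [SGDClassifier(loss="hinge") for _ in range(16)]
for b, clf in enumerate(bit_clfs):
    clf.partial_fit(X, T[:, b], classes=[-1, 1])    # online update per bit

codes = np.stack([clf.predict(X) for clf in bit_clfs], axis=1)
print(codes.shape)                                  # (200, 16)
```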

Supervised Online Hashing via Hadamard Codebook Learning

1 code implementation · 28 Apr 2019 · Mingbao Lin, Rongrong Ji, Hong Liu, Yongjian Liu

Notably, the proposed HCOH can be embedded with supervised labels and is not limited to a predefined number of categories.

Retrieval Semantic Similarity +1

Towards Optimal Discrete Online Hashing with Balanced Similarity

1 code implementation · 29 Jan 2019 · Mingbao Lin, Rongrong Ji, Hong Liu, Xiaoshuai Sun, Yongjian Wu, Yunsheng Wu

In this paper, we propose a novel supervised online hashing method, termed Balanced Similarity for Online Discrete Hashing (BSODH), to solve the above problems in a unified framework.

Retrieval
