Search Results for author: Fei Chao

Found 45 papers, 39 papers with code

Rethinking 3D Dense Caption and Visual Grounding in A Unified Framework through Prompt-based Localization

no code implementations17 Apr 2024 Yongdong Luo, Haojia Lin, Xiawu Zheng, Yigeng Jiang, Fei Chao, Jie Hu, Guannan Jiang, Songan Zhang, Rongrong Ji

3D Visual Grounding (3DVG) and 3D Dense Captioning (3DDC) are two crucial tasks in various 3D applications, which require both shared and complementary information in localization and visual-language relationships.

3D dense captioning Dense Captioning +1

AffineQuant: Affine Transformation Quantization for Large Language Models

1 code implementation19 Mar 2024 Yuexiao Ma, Huixia Li, Xiawu Zheng, Feng Ling, Xuefeng Xiao, Rui Wang, Shilei Wen, Fei Chao, Rongrong Ji

Among these techniques, Post-Training Quantization (PTQ) has emerged as a subject of considerable interest due to its noteworthy compression efficiency and cost-effectiveness in the context of training.

Quantization

EBFT: Effective and Block-Wise Fine-Tuning for Sparse LLMs

1 code implementation19 Feb 2024 Song Guo, Fan Wu, Lei Zhang, Xiawu Zheng, Shengchuan Zhang, Fei Chao, Yiyu Shi, Rongrong Ji

For instance, on the Wikitext2 dataset with LlamaV1-7B at 70% sparsity, our proposed EBFT achieves a perplexity of 16.88, surpassing the state-of-the-art DSnoT with a perplexity of 75.14.

Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration

1 code implementation24 Jan 2024 Yimin Xu, Nanxi Gao, Zhongyun Shan, Fei Chao, Rongrong Ji

In contrast to traditional image restoration methods, all-in-one image restoration techniques are gaining increased attention for their ability to restore images affected by diverse and unknown corruption types and levels.

Computational Efficiency Image Restoration

Learning Image Demoireing from Unpaired Real Data

1 code implementation5 Jan 2024 Yunshan Zhong, Yuyao Zhou, Yuxin Zhang, Fei Chao, Rongrong Ji

The proposed method, referred to as Unpaired Demoireing (UnDeM), synthesizes pseudo moire images from unpaired datasets, generating pairs with clean images for training demoireing models.

Boosting the Cross-Architecture Generalization of Dataset Distillation through an Empirical Study

1 code implementation9 Dec 2023 Lirui Zhao, Yuxin Zhang, Mingbao Lin, Fei Chao, Rongrong Ji

The poor cross-architecture generalization of dataset distillation greatly weakens its practical significance.

Inductive Bias

AutoDiffusion: Training-Free Optimization of Time Steps and Architectures for Automated Diffusion Model Acceleration

1 code implementation ICCV 2023 Lijiang Li, Huixia Li, Xiawu Zheng, Jie Wu, Xuefeng Xiao, Rui Wang, Min Zheng, Xin Pan, Fei Chao, Rongrong Ji

Therefore, we propose to search the optimal time steps sequence and compressed model architecture in a unified framework to achieve effective image generation for diffusion models without any further training.

Image Generation single-image-generation

A Unified Framework for 3D Point Cloud Visual Grounding

1 code implementation23 Aug 2023 Haojia Lin, Yongdong Luo, Xiawu Zheng, Lijiang Li, Fei Chao, Taisong Jin, Donghao Luo, Yan Wang, Liujuan Cao, Rongrong Ji

This elaborate design enables 3DRefTR to achieve both well-performing 3DRES and 3DREC capacities with only a 6% additional latency compared to the original 3DREC model.

Referring Expression Referring Expression Comprehension +1

Spatial Re-parameterization for N:M Sparsity

no code implementations9 Jun 2023 Yuxin Zhang, Mingbao Lin, Yunshan Zhong, Mengzhao Chen, Fei Chao, Rongrong Ji

This paper presents a Spatial Re-parameterization (SpRe) method for the N:M sparsity in CNNs.

MultiQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization

1 code implementation14 May 2023 Yunshan Zhong, Mingbao Lin, Yuyao Zhou, Mengzhao Chen, Yuxin Zhang, Fei Chao, Rongrong Ji

However, in this paper, we investigate existing methods and observe a significant accumulation of quantization errors caused by frequent bit-width switching of weights and activations, leading to limited performance.

Quantization

Distribution-Flexible Subset Quantization for Post-Quantizing Super-Resolution Networks

1 code implementation10 May 2023 Yunshan Zhong, Mingbao Lin, Jingjing Xie, Yuxin Zhang, Fei Chao, Rongrong Ji

Compared to the common iterative exhaustive search algorithm, our strategy avoids the enumeration of all possible combinations in the universal set, reducing the time complexity from exponential to linear.

Quantization Super-Resolution
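
The excerpt above contrasts exhaustive enumeration over the universal set of quantization levels with a much cheaper search. Purely to make that complexity gap concrete, and not as the paper's actual algorithm, the sketch below compares brute-force subset enumeration with a simple greedy selection; the candidate levels, error metric, and greedy rule are all hypothetical.

```python
# Illustrative only: contrast exhaustive subset search with a greedy selection.
# The candidate levels, error metric, and greedy rule are assumptions, NOT the paper's method.
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=4096)             # toy "layer" to be quantized
universal_set = np.linspace(-2.0, 2.0, 16)  # hypothetical universal set of levels
k = 4                                       # pick a 4-level subset (2-bit)

def quant_error(levels):
    """Mean squared error of rounding every weight to its nearest level."""
    levels = np.asarray(levels)
    nearest = levels[np.abs(weights[:, None] - levels[None, :]).argmin(axis=1)]
    return float(np.mean((weights - nearest) ** 2))

# Exhaustive search: C(16, 4) = 1820 subsets here, combinatorial in general.
best_exhaustive = min(combinations(universal_set, k), key=quant_error)

# Greedy selection: k sweeps over the candidates, avoiding the enumeration of all subsets.
chosen = []
for _ in range(k):
    remaining = [l for l in universal_set if l not in chosen]
    chosen.append(min(remaining, key=lambda l: quant_error(chosen + [l])))

print("exhaustive:", np.round(best_exhaustive, 3), quant_error(best_exhaustive))
print("greedy    :", np.round(chosen, 3), quant_error(chosen))
```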

Bi-directional Masks for Efficient N:M Sparse Training

1 code implementation13 Feb 2023 Yuxin Zhang, Yiting Luo, Mingbao Lin, Yunshan Zhong, Jingjing Xie, Fei Chao, Rongrong Ji

We focus on addressing the dense backward propagation issue for training efficiency of N:M fine-grained sparsity that preserves at most N out of M consecutive weights and achieves practical speedups supported by the N:M sparse tensor core.
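
As a minimal sketch of the N:M pattern referred to above (not the paper's bi-directional mask scheme): in every group of M consecutive weights, keep the N largest-magnitude entries and zero the rest, e.g. the 2:4 pattern supported by sparse tensor cores.

```python
# Minimal N:M fine-grained sparsity sketch: keep the N largest-magnitude weights
# in every group of M consecutive weights along the last dimension.
import torch

def nm_sparsify(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Return a copy of `weight` pruned to the N:M pattern along the last dim."""
    assert weight.shape[-1] % m == 0, "last dimension must be divisible by M"
    groups = weight.reshape(-1, m)                        # groups of M consecutive weights
    idx = groups.abs().topk(n, dim=1).indices             # indices of the N largest magnitudes
    mask = torch.zeros_like(groups).scatter_(1, idx, 1.)  # 1 for kept weights, 0 for pruned
    return (groups * mask).reshape(weight.shape)

w = torch.randn(8, 16)
w_sparse = nm_sparsify(w, n=2, m=4)
print((w_sparse.reshape(-1, 4) != 0).sum(dim=1))  # every group keeps exactly 2 weights
```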

Real-Time Image Demoireing on Mobile Devices

1 code implementation4 Feb 2023 Yuxin Zhang, Mingbao Lin, Xunchao Li, Han Liu, Guozhi Wang, Fei Chao, Shuai Ren, Yafei Wen, Xiaoxin Chen, Rongrong Ji

In this paper, we launch the first study on accelerating demoireing networks and propose a dynamic demoireing acceleration method (DDA) towards a real-time deployment on mobile devices.

Automatic Network Pruning via Hilbert-Schmidt Independence Criterion Lasso under Information Bottleneck Principle

1 code implementation ICCV 2023 Song Guo, Lei Zhang, Xiawu Zheng, Yan Wang, Yuchao Li, Fei Chao, Chenglin Wu, Shengchuan Zhang, Rongrong Ji

In this paper, we try to solve this problem by introducing a principled and unified framework based on Information Bottleneck (IB) theory, which further guides us to an automatic pruning approach.

Network Pruning

Discriminator-Cooperated Feature Map Distillation for GAN Compression

1 code implementation CVPR 2023 Tie Hu, Mingbao Lin, Lizhou You, Fei Chao, Rongrong Ji

In contrast to conventional pixel-to-pixel matching methods in feature map distillation, our DCD utilizes the teacher discriminator as a transformation to drive intermediate results of the student generator to be perceptually close to the corresponding outputs of the teacher generator.

Image Generation Knowledge Distillation
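
One plausible reading of the DCD idea above, shown as a hedged sketch rather than the paper's exact loss: feed both the student's and the teacher's generated images through the (frozen) teacher discriminator and match its intermediate feature maps. The stand-in networks and the L1 matching choice below are assumptions.

```python
# Illustrative discriminator-cooperated distillation loss (assumed form, not the exact DCD loss).
import torch
import torch.nn as nn

# Hypothetical stand-in networks; real GAN architectures are far larger.
teacher_G = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
student_G = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
teacher_D_features = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2))

def dcd_style_loss(z: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        target = teacher_D_features(teacher_G(z))   # teacher output as seen by the teacher D
    pred = teacher_D_features(student_G(z))         # student output as seen by the teacher D
    return nn.functional.l1_loss(pred, target)      # perceptual-style feature matching

z = torch.randn(4, 64, 8, 8)
print(dcd_style_loss(z).item())
```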

SMMix: Self-Motivated Image Mixing for Vision Transformers

1 code implementation ICCV 2023 Mengzhao Chen, Mingbao Lin, Zhihang Lin, Yuxin Zhang, Fei Chao, Rongrong Ji

Thanks to the subtle designs of the self-motivated paradigm, our SMMix incurs a smaller training overhead and delivers better performance than other CutMix variants.

Exploring Content Relationships for Distilling Efficient GANs

1 code implementation21 Dec 2022 Lizhou You, Mingbao Lin, Tie Hu, Fei Chao, Rongrong Ji

This paper proposes a content relationship distillation (CRD) to tackle the over-parameterized generative adversarial networks (GANs) for the serviceability in cutting-edge devices.

Shadow Removal by High-Quality Shadow Synthesis

1 code implementation8 Dec 2022 Yunshan Zhong, Lizhou You, Yuxin Zhang, Fei Chao, Yonghong Tian, Rongrong Ji

Specifically, the encoder extracts the shadow feature of a region identity which is then paired with another region identity to serve as the generator input to synthesize a pseudo image.

Image Generation Shadow Removal +1

Meta Architecture for Point Cloud Analysis

1 code implementation CVPR 2023 Haojia Lin, Xiawu Zheng, Lijiang Li, Fei Chao, Shanshan Wang, Yan Wang, Yonghong Tian, Rongrong Ji

However, the lack of a unified framework to interpret those networks makes any systematic comparison, contrast, or analysis challenging, and practically limits healthy development of the field.

3D Semantic Segmentation

Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training

1 code implementation12 Nov 2022 Yunshan Zhong, Gongrui Nan, Yuxin Zhang, Fei Chao, Rongrong Ji

In QAT, the common practice is to update all quantized weights throughout the entire training process.

Quantization

LAB-Net: LAB Color-Space Oriented Lightweight Network for Shadow Removal

1 code implementation27 Aug 2022 Hong Yang, Gongrui Nan, Mingbao Lin, Fei Chao, Yunhang Shen, Ke Li, Rongrong Ji

Finally, the LSA modules are further developed to fully use the prior information in non-shadow regions to cleanse the shadow regions.

Shadow Removal

Learning Best Combination for Efficient N:M Sparsity

1 code implementation14 Jun 2022 Yuxin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, Rongrong Ji

In this paper, we show that the N:M learning can be naturally characterized as a combinatorial problem which searches for the best combination candidate within a finite collection.
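
A hedged illustration of this combinatorial view (not the paper's learning algorithm): for each group of M weights there are C(M, N) candidate masks, and the "best combination" below is simply the one preserving the most magnitude, which is an assumption made only for the example.

```python
# Enumerate the finite collection of C(M, N) masks for one weight group and pick one.
from itertools import combinations

import numpy as np

def best_nm_mask(group: np.ndarray, n: int) -> np.ndarray:
    m = len(group)
    candidates = list(combinations(range(m), n))   # the finite collection: C(M, N) masks
    best = max(candidates, key=lambda idx: np.abs(group[list(idx)]).sum())
    mask = np.zeros(m)
    mask[list(best)] = 1.0
    return mask

group = np.array([0.3, -1.2, 0.05, 0.9])
print(best_nm_mask(group, n=2))   # keeps the two largest-magnitude weights: [0, 1, 0, 1]
```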

Shadow-Aware Dynamic Convolution for Shadow Removal

2 code implementations10 May 2022 Yimin Xu, Mingbao Lin, Hong Yang, Fei Chao, Rongrong Ji

Inspired by the fact that the color mapping of the non-shadow region is easier to learn, our SADC processes the non-shadow region with a lightweight convolution module in a computationally cheap manner and recovers the shadow region with a more complicated convolution module to ensure the quality of image reconstruction.

Image Reconstruction Shadow Removal
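
A rough sketch of the routing idea described in the SADC excerpt (branch sizes and the mask-based blending are assumptions, not the actual module): a lightweight convolution handles non-shadow pixels while a heavier branch handles shadow pixels, and the two results are blended by the shadow mask.

```python
# Shadow-aware routing sketch: cheap branch for non-shadow pixels, heavy branch for shadow pixels.
import torch
import torch.nn as nn

class ShadowAwareBlock(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        # Hypothetical branch sizes: a cheap depthwise conv vs. a deeper stack.
        self.light = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.heavy = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, shadow_mask: torch.Tensor) -> torch.Tensor:
        # shadow_mask: (B, 1, H, W) with 1 inside shadow regions, 0 elsewhere.
        return shadow_mask * self.heavy(x) + (1.0 - shadow_mask) * self.light(x)

block = ShadowAwareBlock()
x = torch.randn(1, 32, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.7).float()
print(block(x, mask).shape)  # torch.Size([1, 32, 64, 64])
```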

Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks

1 code implementation8 Mar 2022 Yunshan Zhong, Mingbao Lin, Xunchao Li, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji

However, these methods suffer from severe performance degradation when quantizing the SR models to ultra-low precision (e.g., 2-bit and 3-bit) with the low-cost layer-wise quantizer.

Quantization Super-Resolution
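
To make "ultra-low precision" concrete, here is a plain k-bit symmetric uniform quantizer with a single clipping bound; this is only a generic baseline sketch, not the paper's dynamic dual trainable bounds.

```python
# Generic k-bit symmetric uniform quantizer with one clipping bound (illustration only).
import torch

def uniform_quantize(x: torch.Tensor, bound: float, bits: int = 2) -> torch.Tensor:
    levels = 2 ** (bits - 1) - 1              # e.g. bits=2 -> grid {-1, 0, +1}
    x = x.clamp(-bound, bound)                # clip to [-bound, bound]
    scale = bound / levels
    return torch.round(x / scale) * scale     # snap to the uniform grid

x = torch.randn(5)
print(x)
print(uniform_quantize(x, bound=1.0, bits=2))  # only three distinct values survive
print(uniform_quantize(x, bound=1.0, bits=3))  # a finer 7-level grid
```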

CF-ViT: A General Coarse-to-Fine Method for Vision Transformer

1 code implementation8 Mar 2022 Mengzhao Chen, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei Chao, Rongrong Ji

Our proposed CF-ViT is motivated by two important observations in modern ViT models: (1) The coarse-grained patch splitting can locate informative regions of an input image.

OptG: Optimizing Gradient-driven Criteria in Network Sparsity

1 code implementation30 Jan 2022 Yuxin Zhang, Mingbao Lin, Mengzhao Chen, Fei Chao, Rongrong Ji

We prove that supermask training accumulates the criteria of gradient-driven sparsity for both removed and preserved weights, and that it can partly solve the independence paradox.

Neural Architecture Search With Representation Mutual Information

1 code implementation CVPR 2022 Xiawu Zheng, Xiang Fei, Lei Zhang, Chenglin Wu, Fei Chao, Jianzhuang Liu, Wei Zeng, Yonghong Tian, Rongrong Ji

Building upon RMI, we further propose a new search algorithm termed RMI-NAS, facilitated by a theorem that guarantees the global optimality of the searched architecture.

Neural Architecture Search

Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme

1 code implementation NeurIPS 2021 Shaojie Li, Jie Wu, Xuefeng Xiao, Fei Chao, Xudong Mao, Rongrong Ji

In this work, we revisit the role of discriminator in GAN compression and design a novel generator-discriminator cooperative compression scheme for GAN compression, termed GCC.

Fine-grained Data Distribution Alignment for Post-Training Quantization

1 code implementation9 Sep 2021 Yunshan Zhong, Mingbao Lin, Mengzhao Chen, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji

While post-training quantization is popular largely because it avoids accessing the original complete training dataset, its poor performance also stems from the scarcity of available images.

Quantization

Error Controlled Actor-Critic

1 code implementation6 Sep 2021 Xingen Gao, Fei Chao, Changle Zhou, Zhen Ge, Chih-Min Lin, Longzhi Yang, Xiang Chang, Changjing Shang

The approximation error of the value function inevitably causes an overestimation phenomenon and has a negative impact on the convergence of the algorithms.

Continuous Control

Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion

1 code implementation14 Jul 2021 Mingbao Lin, Bohong Chen, Fei Chao, Rongrong Ji

Each filter in our DCFF is first assigned an inter-similarity distribution with a temperature parameter as a filter proxy; on top of this, a Kullback-Leibler divergence based dynamic-coded criterion is proposed to evaluate filter importance.

Image Classification
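
A loose illustration of the ingredients named in the DCFF excerpt, not the paper's actual criterion: each filter gets a temperature-softened distribution over its similarities to the other filters, and a KL divergence to the layer-average distribution is used as a hypothetical importance score. The negative-distance similarity, the average reference, and the top-k selection are all assumptions.

```python
# Toy filter-importance score from temperature-softened similarity distributions and KL divergence.
import torch
import torch.nn.functional as F

def filter_scores(weight: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # weight: (out_channels, in_channels, k, k) convolution kernel.
    flat = weight.flatten(1)                         # one row per filter
    sim = -torch.cdist(flat, flat)                   # higher = more similar (assumed similarity)
    dist = F.softmax(sim / temperature, dim=1)       # inter-similarity distribution per filter
    avg = dist.mean(dim=0, keepdim=True).expand_as(dist)
    # KL(dist_i || avg): filters deviating most from the average distribution score highest
    # under this toy criterion.
    return (dist * (dist.log() - avg.log())).sum(dim=1)

w = torch.randn(16, 8, 3, 3)
scores = filter_scores(w, temperature=0.5)
print(scores.topk(4).indices)   # e.g. the 4 most distinct filters under this sketch
```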

You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Nature Gradient

1 code implementation4 Jun 2021 Shaokun Zhang, Xiawu Zheng, Chenyi Yang, Yuchao Li, Yan Wang, Fei Chao, Mengdi Wang, Shen Li, Jun Yang, Rongrong Ji

Motivated by the necessity of efficient inference across various constraints on BERT, we propose a novel approach, YOCO-BERT, which compresses once and deploys everywhere.

AutoML Model Compression

1xN Pattern for Pruning Convolutional Neural Networks

1 code implementation31 May 2021 Mingbao Lin, Yuxin Zhang, Yuchao Li, Bohong Chen, Fei Chao, Mengdi Wang, Shen Li, Yonghong Tian, Rongrong Ji

We also provide a workflow of filter rearrangement that first rearranges the weight matrix in the output channel dimension to derive more influential blocks for accuracy improvements and then applies similar rearrangement to the next-layer weights in the input channel dimension to ensure correct convolutional operations.

Network Pruning
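
The rearrangement workflow above relies on the fact that permuting the output channels of one layer and applying the same permutation to the next layer's input channels leaves the composed function unchanged. A small numerical check, simplified to fully connected layers with an elementwise nonlinearity (an assumption made for brevity, not the paper's convolutional setting):

```python
# Verify that paired output/input channel permutations preserve the two-layer function.
import torch

torch.manual_seed(0)
W1, W2 = torch.randn(6, 4), torch.randn(3, 6)   # layer i and layer i+1 weights
x = torch.randn(5, 4)

perm = torch.randperm(6)                        # a hypothetical "influence" ordering
W1_perm = W1[perm]                              # rearrange output channels of layer i
W2_perm = W2[:, perm]                           # rearrange input channels of layer i+1

original = torch.relu(x @ W1.T) @ W2.T
rearranged = torch.relu(x @ W1_perm.T) @ W2_perm.T
print(torch.allclose(original, rearranged))     # True: the composition is unchanged
```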

Lottery Jackpots Exist in Pre-trained Models

2 code implementations18 Apr 2021 Yuxin Zhang, Mingbao Lin, Yunshan Zhong, Fei Chao, Rongrong Ji

Existing studies achieve the sparsity of neural networks via time-consuming weight training or complex searching on networks with expanded width, which greatly limits the applications of network pruning.

Network Pruning

SiMaN: Sign-to-Magnitude Network Binarization

2 code implementations16 Feb 2021 Mingbao Lin, Rongrong Ji, Zihan Xu, Baochang Zhang, Fei Chao, Chia-Wen Lin, Ling Shao

In this paper, we show that our weight binarization provides an analytical solution by encoding high-magnitude weights into +1s and the remaining weights into 0s.

Binarization
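
A hedged sketch of the encoding described in the excerpt, not SiMaN's analytical solution: weights whose magnitude lies in the top fraction become +1, the rest become 0. The 0.5 keep ratio and the mean-magnitude scaling below are illustrative assumptions.

```python
# Toy {0, +1} encoding of high-magnitude weights (keep ratio and scaling are assumptions).
import torch

def magnitude_binarize(w: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    k = max(1, int(keep_ratio * w.numel()))
    threshold = w.abs().flatten().kthvalue(w.numel() - k + 1).values  # k-th largest magnitude
    code = (w.abs() >= threshold).float()       # +1 for high-magnitude weights, 0 otherwise
    scale = w.abs()[code.bool()].mean()         # simple scale to preserve average magnitude
    return code * scale

w = torch.randn(4, 4)
print(magnitude_binarize(w))
```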

Error Controlled Actor-Critic Method to Reinforcement Learning

no code implementations1 Jan 2021 Xingen Gao, Fei Chao, Changle Zhou, Zhen Ge, Chih-Min Lin, Longzhi Yang, Xiang Chang, Changjing Shang

In reinforcement learning (RL) algorithms that incorporate function approximation methods, the approximation error of the value function inevitably causes an overestimation phenomenon and has a negative impact on the convergence of the algorithms.

Continuous Control OpenAI Gym +2

Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation

1 code implementation17 Nov 2020 Shaojie Li, Mingbao Lin, Yan Wang, Fei Chao, Ling Shao, Rongrong Ji

The latter simultaneously distills informative attention maps from both the generator and discriminator of a pre-trained model to the searched generator, effectively stabilizing the adversarial training of our light-weight model.

Translation

Task Augmentation by Rotating for Meta-Learning

1 code implementation arXiv 2020 Jialin Liu, Fei Chao, Chih-Min Lin

Data augmentation is one of the most effective approaches for improving the accuracy of modern machine learning models, and it is also indispensable for training a deep model for meta-learning.

Data Augmentation Few-Shot Learning
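
A minimal sketch of rotation-based task augmentation for meta-learning (an assumed setup, not the paper's exact pipeline): each 90-degree rotation of a class's images is treated as a brand-new class, multiplying the pool of classes from which tasks are sampled. The label-offset scheme is an illustrative assumption.

```python
# Treat each 90-degree rotation of a class as a new class for task sampling.
import torch

def rotate_classes(images: torch.Tensor, label: int, num_classes: int):
    """images: (N, C, H, W) for one class; returns 4 augmented (images, label) pairs."""
    out = []
    for r in range(4):                                   # 0, 90, 180, 270 degrees
        rotated = torch.rot90(images, k=r, dims=(2, 3))
        out.append((rotated, label + r * num_classes))   # each rotation gets its own label
    return out

imgs = torch.randn(20, 3, 28, 28)                        # 20 images of one toy class
augmented = rotate_classes(imgs, label=0, num_classes=64)
print([lbl for _, lbl in augmented])                     # [0, 64, 128, 192]
```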

Stock Prices Prediction using Deep Learning Models

no code implementations25 Sep 2019 Jialin Liu, Fei Chao, Yu-Chen Lin, Chih-Min Lin

The results show that predicting stock price through price rate of change is better than predicting absolute prices directly.
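
A short illustration of the quantity compared in that finding: instead of predicting the absolute price p_t, the model predicts the rate of change r_t = (p_t - p_{t-1}) / p_{t-1}, and the price is reconstructed from it. The numbers below are made up.

```python
# Price rate of change and price reconstruction (toy numbers, not the paper's data).
import numpy as np

prices = np.array([100.0, 102.0, 101.0, 104.0])
rates = np.diff(prices) / prices[:-1]                 # [0.02, -0.0098..., 0.0297...]

# Given the last observed price and a predicted rate, recover the predicted price.
predicted_rate = 0.015                                # hypothetical model output
predicted_price = prices[-1] * (1.0 + predicted_rate)
print(rates, predicted_price)                         # ... 105.56
```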

Decoder Choice Network for Meta-Learning

1 code implementation arXiv 2019 Jialin Liu, Fei Chao, Longzhi Yang, Chih-Min Lin, Qiang Shen

This work proposes a method that controls the gradient descent process of the model parameters of a neural network by limiting the model parameters in a low-dimensional latent space.

Ensemble Learning Few-Shot Learning +1
