Search Results for author: Mingfu Liang

Found 14 papers, 5 papers with code

Evidential Active Recognition: Intelligent and Prudent Open-World Embodied Perception

no code implementations • 23 Nov 2023 • Lei Fan, Mingfu Liang, Yunxuan Li, Gang Hua, Ying Wu

Active recognition enables robots to intelligently explore novel observations, thereby acquiring more information while circumventing undesired viewing conditions.

Uncertainty Quantification

Understanding Self-attention Mechanism via Dynamical System Perspective

no code implementations • ICCV 2023 • Zhongzhan Huang, Mingfu Liang, Jinghui Qin, Shanshan Zhong, Liang Lin

The self-attention mechanism (SAM) is widely used in various fields of artificial intelligence and has successfully boosted the performance of different models.

Exploring Compositional Visual Generation with Latent Classifier Guidance

no code implementations • 25 Apr 2023 • Changhao Shi, Haomiao Ni, Kai Li, Shaobo Han, Mingfu Liang, Martin Renqiang Min

We show that this paradigm based on latent classifier guidance is agnostic to pre-trained generative models, and present competitive results for both image generation and sequential manipulation of real and synthetic images.

Image Generation

On Robust Numerical Solver for ODE via Self-Attention Mechanism

no code implementations • 5 Feb 2023 • Zhongzhan Huang, Mingfu Liang, Liang Lin

With the development of deep learning techniques, AI-enhanced numerical solvers are expected to become a new paradigm for solving differential equations due to their versatility and effectiveness in alleviating the accuracy-speed trade-off in traditional numerical solvers.

A Generic Shared Attention Mechanism for Various Backbone Neural Networks

no code implementations • 27 Oct 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Liang Lin

The self-attention mechanism has emerged as a critical component for improving the performance of various backbone neural networks.

Data Augmentation, Image Classification +3

The Lottery Ticket Hypothesis for Self-attention in Convolutional Neural Network

no code implementations • 16 Jul 2022 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang, Liang Lin

Recently, many plug-and-play self-attention modules (SAMs) have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).

Crowd Counting

AlterSGD: Finding Flat Minima for Continual Learning by Alternative Training

no code implementations • 13 Jul 2021 • Zhongzhan Huang, Mingfu Liang, Senwei Liang, Wei He

Deep neural networks suffer from catastrophic forgetting when learning new knowledge sequentially, and a growing number of approaches have been proposed to mitigate this problem.

Continual Learning, Semantic Segmentation

Blending Pruning Criteria for Convolutional Neural Networks

no code implementations • 11 Jul 2021 • Wei He, Zhongzhan Huang, Mingfu Liang, Senwei Liang, Haizhao Yang

A filter may be important according to one criterion yet unnecessary according to another, which indicates that each criterion captures only a partial view of the comprehensive "importance".

Clustering, Network Pruning

CAP: Context-Aware Pruning for Semantic Segmentation

1 code implementation • 6 Jan 2021 • Wei He, Meiqing Wu, Mingfu Liang, Siew-Kei Lam

In this paper, we advocate the importance of contextual information during channel pruning for semantic segmentation networks by presenting a novel Context-aware Pruning framework.

Network Pruning, Segmentation +1

Efficient Attention Network: Accelerate Attention by Searching Where to Plug

1 code implementation • 28 Nov 2020 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang

Recently, many plug-and-play self-attention modules have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).

Instance Enhancement Batch Normalization: an Adaptive Regulator of Batch Noise

2 code implementations • 12 Aug 2019 • Senwei Liang, Zhongzhan Huang, Mingfu Liang, Haizhao Yang

Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image via the statistics of a batch of images, and hence BN introduces noise into the gradient of the training loss.

Image Classification

DIANet: Dense-and-Implicit Attention Network

3 code implementations • 25 May 2019 • Zhongzhan Huang, Senwei Liang, Mingfu Liang, Haizhao Yang

Attention networks have successfully boosted the performance in various vision problems.

Ranked #139 on Image Classification on CIFAR-100 (using extra training data)

Image Classification
