Search Results for author: Xiaomin Wang

Found 10 papers, 1 paper with code

Temporally Resolution Decrement: Utilizing the Shape Consistency for Higher Computational Efficiency

no code implementations • 2 Dec 2021 • Tianshu Xie, Xuan Cheng, Minghui Liu, Jiali Deng, Xiaomin Wang, Ming Liu

In this paper, we observe that the reduced image retains relatively complete shape semantics but loses extensive texture information.

Computational Efficiency
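The observation above suggests that much of the training signal survives aggressive downscaling, since shape semantics are preserved. A minimal sketch of that idea, assuming a simple bilinear reduction of each batch; the factor and schedule are illustrative, not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def downscale_batch(images: torch.Tensor, factor: float = 0.5) -> torch.Tensor:
    """Reduce the spatial resolution of a batch (N, C, H, W) to save compute."""
    return F.interpolate(images, scale_factor=factor,
                         mode="bilinear", align_corners=False)

images = torch.randn(8, 3, 224, 224)   # dummy batch
small = downscale_batch(images)        # (8, 3, 112, 112): ~4x fewer pixels per forward pass
```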

Feature Mining: A Novel Training Strategy for Convolutional Neural Network

no code implementations • 18 Jul 2021 • Tianshu Xie, Xuan Cheng, Xiaomin Wang, Minghui Liu, Jiali Deng, Ming Liu

In this paper, we propose a novel training strategy for convolutional neural networks (CNNs) named Feature Mining, which aims to strengthen the network's learning of local features.
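The snippet does not spell out the mechanism, so the following is only a hedged illustration of one generic way to strengthen local-feature learning: an auxiliary classifier on a random spatial crop of the final feature map. The head, crop size, and loss weight are assumptions, not the paper's actual Feature Mining procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalFeatureHead(nn.Module):
    """Auxiliary classifier over a random half-size crop of a feature map."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        n, c, h, w = feats.shape                      # feats: (N, C, H, W)
        top = torch.randint(0, h - h // 2 + 1, (1,)).item()
        left = torch.randint(0, w - w // 2 + 1, (1,)).item()
        crop = feats[:, :, top:top + h // 2, left:left + w // 2]
        pooled = F.adaptive_avg_pool2d(crop, 1).flatten(1)
        return self.fc(pooled)

# Hypothetical usage inside a training step:
# loss = F.cross_entropy(main_logits, y) + 0.5 * F.cross_entropy(local_head(feats), y)
```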

Go Small and Similar: A Simple Output Decay Brings Better Performance

no code implementations • 12 Jun 2021 • Xuan Cheng, Tianshu Xie, Xiaomin Wang, Jiali Deng, Minghui Liu, Ming Liu

Regularization and data augmentation methods have been widely used and become increasingly indispensable in deep learning training.

Data Augmentation
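Reading the title "Go Small and Similar" literally, output decay can be sketched as an L2 penalty on the logits added to the usual cross-entropy. This is an assumption for illustration, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def output_decay_loss(logits: torch.Tensor, targets: torch.Tensor,
                      lam: float = 1e-3) -> torch.Tensor:
    """Cross-entropy plus an L2 penalty pushing logits to be small and similar."""
    ce = F.cross_entropy(logits, targets)
    decay = logits.pow(2).sum(dim=1).mean()   # assumed penalty on output magnitude
    return ce + lam * decay
```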

Self-supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning

no code implementations • 9 Jun 2021 • Yuhang Yang, Zilin Ding, Xuan Cheng, Xiaomin Wang, Ming Liu

In this paper, we show that feature transformations within CNNs can also be regarded as supervisory signals to construct the self-supervised task, called the internal pretext task.

Self-Supervised Learning
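A hedged sketch of what an internal pretext task could look like: transform intermediate feature maps (here, rotations by multiples of 90 degrees, assuming square maps) and train an auxiliary head to predict which transformation was applied. The transformation set and head design are assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RotationPretextHead(nn.Module):
    """Predict which rotation was applied to a (square) feature map."""

    def __init__(self, channels: int, num_rotations: int = 4):
        super().__init__()
        self.fc = nn.Linear(channels, num_rotations)

    def forward(self, feats: torch.Tensor):
        # feats: (N, C, H, W) with H == W; rotate each sample by a random k * 90 degrees
        k = torch.randint(0, 4, (feats.size(0),))
        rotated = torch.stack(
            [torch.rot90(f, int(ki), dims=(1, 2)) for f, ki in zip(feats, k)])
        pooled = F.adaptive_avg_pool2d(rotated, 1).flatten(1)
        return self.fc(pooled), k  # pretext logits and their labels

# Hypothetical usage: aux_logits, k = head(feats); aux_loss = F.cross_entropy(aux_logits, k)
```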

White Paper Assistance: A Step Forward Beyond the Shortcut Learning

no code implementations • 8 Jun 2021 • Xuan Cheng, Tianshu Xie, Xiaomin Wang, Jiali Deng, Minghui Liu, Ming Liu

The promising performance of CNNs often overshadows the need to examine whether they are behaving in the way we actually intend.

Imbalanced Classification

FocusedDropout for Convolutional Neural Network

no code implementations • 29 Mar 2021 • Tianshu Xie, Minghui Liu, Jiali Deng, Xuan Cheng, Xiaomin Wang, Ming Liu

In convolutional neural networks (CNNs), dropout does not work well because dropped information is not entirely obscured in convolutional layers, where features are spatially correlated.
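One plausible focus-based alternative to standard dropout, sketched under assumptions: keep only the spatial positions highlighted by the most-activated channel and zero the rest. The channel-selection rule and keep ratio here are illustrative, not the paper's exact FocusedDropout.

```python
import torch

def focused_dropout(feats: torch.Tensor, keep_ratio: float = 0.6) -> torch.Tensor:
    """Keep units where the most-activated channel is strong; zero the rest."""
    n, c, h, w = feats.shape
    channel_strength = feats.mean(dim=(2, 3))        # (N, C) average activation
    focused = channel_strength.argmax(dim=1)         # most active channel per sample
    focus_map = feats[torch.arange(n), focused]      # (N, H, W)
    k = max(1, int(keep_ratio * h * w))
    thresh = focus_map.flatten(1).topk(k, dim=1).values[:, -1]   # per-sample cutoff
    mask = (focus_map >= thresh.view(n, 1, 1)).unsqueeze(1).float()
    return feats * mask
```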

Selective Output Smoothing Regularization: Regularize Neural Networks by Softening Output Distributions

no code implementations • 29 Mar 2021 • Xuan Cheng, Tianshu Xie, Xiaomin Wang, Qifeng Weng, Minghui Liu, Jiali Deng, Ming Liu

In this paper, we propose Selective Output Smoothing Regularization, a novel regularization method for training convolutional neural networks (CNNs).

Image Classification
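A hedged sketch of selective smoothing: add a softening term only for samples the model already classifies correctly, pushing their output distribution toward uniform. The selection rule, target distribution, and weight are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def selective_smoothing_loss(logits: torch.Tensor, targets: torch.Tensor,
                             lam: float = 0.1) -> torch.Tensor:
    """Cross-entropy plus a smoothing term on correctly classified samples only."""
    ce = F.cross_entropy(logits, targets)
    correct = logits.argmax(dim=1) == targets        # assumed selection rule
    if not correct.any():
        return ce
    log_probs = F.log_softmax(logits[correct], dim=1)
    uniform = torch.full_like(log_probs, 1.0 / logits.size(1))
    return ce + lam * F.kl_div(log_probs, uniform, reduction="batchmean")
```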

Cut-Thumbnail: A Novel Data Augmentation for Convolutional Neural Network

1 code implementation • 9 Mar 2021 • Tianshu Xie, Xuan Cheng, Minghui Liu, Jiali Deng, Xiaomin Wang, Ming Liu

In this paper, we propose a novel data augmentation strategy named Cut-Thumbnail, which aims to improve the shape bias of the network.

Classification • Data Augmentation • +4
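A minimal sketch of the Cut-Thumbnail idea, assuming the variant that pastes a shrunken version of another batch image and mixes labels by pasted area. The sizes and weights are illustrative; the paper's released code is the reference implementation.

```python
import torch
import torch.nn.functional as F

def cut_thumbnail(images: torch.Tensor, labels_onehot: torch.Tensor,
                  thumb_size: int = 56):
    """Paste a thumbnail of another batch image; mix labels by pasted area."""
    n, c, h, w = images.shape
    perm = torch.randperm(n)                         # partner image for each sample
    thumbs = F.interpolate(images[perm], size=(thumb_size, thumb_size),
                           mode="bilinear", align_corners=False)
    top = torch.randint(0, h - thumb_size + 1, (1,)).item()
    left = torch.randint(0, w - thumb_size + 1, (1,)).item()
    out = images.clone()
    out[:, :, top:top + thumb_size, left:left + thumb_size] = thumbs
    area = (thumb_size * thumb_size) / (h * w)       # fraction of pixels replaced
    mixed = (1.0 - area) * labels_onehot + area * labels_onehot[perm]
    return out, mixed

# Hypothetical usage: labels_onehot = F.one_hot(targets, num_classes).float()
```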
