Feature Selection Using Batch-Wise Attenuation and Feature Mask Normalization

26 Oct 2020 · Yiwen Liao, Raphaël Latty, Bin Yang

Feature selection is one of the most important preprocessing techniques in machine learning: it reduces the dimensionality of the data and helps researchers and practitioners understand it. As a result, feature selection can lead to better performance as well as lower computational cost, memory consumption and even the amount of data required. Although there are approaches that leverage the power of deep neural networks for feature selection, many of them suffer from sensitive hyperparameters. This paper proposes a feature mask module (FM-module) for feature selection based on a novel batch-wise attenuation and feature mask normalization. The proposed method is almost free of hyperparameters and can be easily integrated into common neural networks as an embedded feature selection method. Experiments on popular image, text and speech datasets show that our approach is easy to use and outperforms other state-of-the-art deep-learning-based feature selection methods.
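As an illustration of how such an embedded feature-mask layer could look, the following is a minimal PyTorch sketch. It assumes the mask is computed from the batch-averaged input (a plausible reading of "batch-wise attenuation") through a single linear layer and normalized with a softmax (the "feature mask normalization"); the module name, the `score_net` layer and the overall structure are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a feature-mask (FM) style module for embedded feature
# selection. The architecture below (one linear layer over the batch-averaged
# input, followed by a softmax) is an assumption made for illustration only.

import torch
import torch.nn as nn


class FeatureMaskModule(nn.Module):
    """Produces a softmax-normalized importance mask over input features."""

    def __init__(self, num_features: int):
        super().__init__()
        # Small network mapping a feature vector to raw importance scores.
        self.score_net = nn.Linear(num_features, num_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Batch-wise attenuation (assumed): average over the batch so that
        # sample-specific variation is damped and the mask reflects
        # importance shared across the whole batch.
        batch_mean = x.mean(dim=0)                      # (num_features,)
        # Feature mask normalization: softmax makes the scores comparable
        # across features and lets them compete for importance.
        mask = torch.softmax(self.score_net(batch_mean), dim=0)
        # Re-weight every sample by the shared mask; the downstream network
        # is trained end-to-end on the masked features.
        return x * mask


# Usage sketch: prepend the module to any classifier and train end-to-end.
if __name__ == "__main__":
    num_features, num_classes = 64, 10
    model = nn.Sequential(
        FeatureMaskModule(num_features),
        nn.Linear(num_features, 32),
        nn.ReLU(),
        nn.Linear(32, num_classes),
    )
    x = torch.randn(128, num_features)
    logits = model(x)                                   # (128, num_classes)
    print(logits.shape)
```

After training, the learned mask values can be ranked and the top-k features kept, which is how an embedded feature selector of this kind is typically read out; the exact selection rule used in the paper may differ.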
