MRL: Learning to Mix with Attention and Convolutions

30 Aug 2022  ·  Shlok Mohta, Hisahiro Suganuma, Yoshiki Tanaka

In this paper, we present a new neural architectural block for the vision domain, named Mixing Regionally and Locally (MRL), developed with the aim of effectively and efficiently mixing the provided input features. We split the input feature mixing task into mixing at regional and local scales. To achieve an efficient mix, we exploit the domain-wide receptive field provided by self-attention for regional-scale mixing and convolutional kernels restricted to the local scale for local-scale mixing. More specifically, our proposed method mixes regional features associated with local features within a defined region, followed by local-scale feature mixing augmented by regional features. Experiments show that this hybridization of self-attention and convolution brings improved capacity, generalization (the right inductive bias), and efficiency. Under similar network settings, MRL outperforms or is on par with its counterparts in classification, object detection, and segmentation tasks. We also show that our MRL-based network architecture achieves state-of-the-art performance on H&E histology datasets. We achieve Dice scores of 0.843, 0.855, and 0.892 on the Kumar, CoNSeP, and CPM-17 datasets, respectively, while highlighting the versatility of the MRL framework by incorporating layers such as group convolutions to improve dataset-specific generalization.
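The abstract describes a two-stage design: self-attention over region-level summaries for the regional mix, then a convolutional local mix augmented by the attended regional context. Below is a minimal PyTorch sketch of that idea, not the paper's exact MRL formulation: the class name MRLBlockSketch, the use of average-pooled region tokens, the single multi-head attention layer, the depthwise 3x3 convolution, and all sizes are illustrative assumptions.

```python
# A minimal sketch of regional/local mixing as described in the abstract.
# All concrete choices (pooled region tokens, one attention layer, a
# depthwise 3x3 conv, residual wiring) are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MRLBlockSketch(nn.Module):
    def __init__(self, dim: int, region_size: int = 7, num_heads: int = 4):
        super().__init__()
        self.region_size = region_size
        # Regional-scale mixing: self-attention over one token per region,
        # giving a domain-wide receptive field at the regional scale.
        self.norm = nn.LayerNorm(dim)
        self.region_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Local-scale mixing: a depthwise convolution restricted to a small
        # neighborhood, applied to features augmented with regional context.
        self.local_mix = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); H and W are assumed divisible by region_size.
        b, c, h, w = x.shape
        r = self.region_size
        # One summary token per (r x r) region via average pooling.
        regions = F.adaptive_avg_pool2d(x, (h // r, w // r))      # (B, C, H/r, W/r)
        tokens = regions.flatten(2).transpose(1, 2)               # (B, N, C)
        tokens = self.norm(tokens)
        mixed, _ = self.region_attn(tokens, tokens, tokens)       # regional mix
        # Broadcast the attended regional context back to full resolution.
        mixed = mixed.transpose(1, 2).reshape(b, c, h // r, w // r)
        regional = F.interpolate(mixed, size=(h, w), mode="nearest")
        # Local mix augmented by regional features, with a residual path.
        return x + self.proj(self.local_mix(x + regional))


if __name__ == "__main__":
    block = MRLBlockSketch(dim=64, region_size=7)
    out = block(torch.randn(2, 64, 56, 56))
    print(out.shape)  # torch.Size([2, 64, 56, 56])
```

The sketch keeps the two scales decoupled: attention cost grows with the number of regions rather than the number of pixels, while the depthwise convolution supplies the local inductive bias the abstract attributes to the hybrid.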

Task                               Dataset  Model    Metric Name    Metric Value  Global Rank
Multi-tissue Nucleus Segmentation  CoNSeP   GC-MHVN  Dice           0.855         # 1
Multi-tissue Nucleus Segmentation  CoNSeP   GC-MHVN  Jaccard Index  0.576         # 2
Multi-tissue Nucleus Segmentation  CoNSeP   GC-MHVN  PQ             0.559         # 1
Multi-tissue Nucleus Segmentation  Kumar    GC-MHVN  Dice           0.843         # 1
Multi-tissue Nucleus Segmentation  Kumar    GC-MHVN  Jaccard Index  0.652         # 1
Multi-tissue Nucleus Segmentation  Kumar    GC-MHVN  PQ             0.625         # 1

Methods