Dilated convolution with learnable spacings

Introduced by Khalfaoui-Hassani et al. in Dilated convolution with learnable spacings

Dilated convolution with learnable spacings (DCLS) is a type of convolution that allows the spacings between the non-zero elements of the kernel to be learned during training. This makes it possible to increase the receptive field of the convolution without increasing the number of parameters, which can improve the performance of the network on tasks that require long-range dependencies.

A dilated convolution is a convolution whose kernel elements are spread apart by inserting zeros between them, so that the kernel skips over some of the input features. The effect is to enlarge the receptive field without adding parameters: a kernel of size k applied with dilation rate d spans the same area as a dense kernel of size d(k - 1) + 1.
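
For example, in PyTorch the dilation argument of nn.Conv2d implements this directly. The snippet below is a minimal illustration of the parameter-count argument, not code from the DCLS paper:

```python
import torch
import torch.nn as nn

# A 3x3 kernel with dilation=2 spans 2*(3-1)+1 = 5 pixels in each direction,
# i.e. the same area as a dense 5x5 kernel, but with only 9 weights per
# input/output channel pair instead of 25.
dense   = nn.Conv2d(16, 32, kernel_size=3, padding=1)              # 3x3 span
dilated = nn.Conv2d(16, 32, kernel_size=3, dilation=2, padding=2)  # 5x5 span

x = torch.randn(1, 16, 64, 64)
print(dilated(x).shape)  # torch.Size([1, 32, 64, 64]) -- spatial size preserved
print(sum(p.numel() for p in dense.parameters())
      == sum(p.numel() for p in dilated.parameters()))  # True: same parameter count
```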

DCLS takes this idea one step further by making the spacings between the non-zero elements of the kernel learnable. Because the resulting positions are real-valued rather than fixed to an integer grid, the method builds the dense kernel through a differentiable interpolation step, so standard backpropagation can move each kernel element to where it is most useful. The network can therefore learn which input features to skip depending on the task at hand, which is particularly helpful for tasks that require long-range dependencies, such as image segmentation and object detection.
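
The sketch below illustrates the core mechanism under simple assumptions (bilinear interpolation, a single dense kernel; the class name Dcls2dSketch and its defaults are invented for illustration). It is not the authors' implementation, only a compact way to see how learnable positions stay differentiable:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Dcls2dSketch(nn.Module):
    """Minimal, hypothetical sketch of the DCLS idea (not the paper's code):
    a few kernel weights carry learnable continuous 2-D positions and are
    scattered into a larger dense kernel by bilinear interpolation, which
    keeps the positions differentiable."""

    def __init__(self, in_ch, out_ch, n_elements=9, dilated_size=17):
        super().__init__()
        self.S = dilated_size
        # One weight per kernel element, placed inside a dense S x S grid.
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, n_elements) * 0.1)
        # Learnable continuous (row, col) positions inside the S x S grid.
        self.pos = nn.Parameter(torch.rand(2, n_elements) * (dilated_size - 1))

    def forward(self, x):
        S = self.S
        oc, ic, n = self.weight.shape
        p = self.pos.clamp(0, S - 1 - 1e-4)            # keep interpolation in bounds
        i0, j0 = p[0].floor().long(), p[1].floor().long()
        di, dj = p[0] - i0, p[1] - j0                  # fractional offsets
        kernel = self.weight.new_zeros(oc, ic, S * S)
        # Bilinear scatter: each element spreads its weight over the four
        # nearest grid cells, so gradients flow back into self.pos.
        for ii, jj, c in ((i0,     j0,     (1 - di) * (1 - dj)),
                          (i0,     j0 + 1, (1 - di) * dj),
                          (i0 + 1, j0,     di * (1 - dj)),
                          (i0 + 1, j0 + 1, di * dj)):
            kernel.index_add_(2, ii * S + jj, self.weight * c)
        return F.conv2d(x, kernel.view(oc, ic, S, S), padding=S // 2)

# Usage: 9 learned weights act over a 17x17 receptive field.
layer = Dcls2dSketch(in_ch=3, out_ch=16)
out = layer(torch.randn(1, 3, 32, 32))   # -> torch.Size([1, 16, 32, 32])
```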

DCLS has been shown to be effective for a variety of tasks, including image classification, object detection, and semantic segmentation, and is a promising way to improve the performance of convolutional networks whenever long-range context matters.

Source: Dilated convolution with learnable spacings

Tasks


Task Papers Share
Audio Classification 2 25.00%
Audio Tagging 1 12.50%
Classification 1 12.50%
Speech Recognition 1 12.50%
Image Classification 1 12.50%
Object Detection 1 12.50%
Semantic Segmentation 1 12.50%


Categories

Convolutions