Dynamic Filter Networks

NeurIPS 2016  ·  Bert De Brabandere, Xu Jia, Tinne Tuytelaars, Luc Van Gool

In a traditional convolutional layer, the learned filters stay fixed after training. In contrast, we introduce a new framework, the Dynamic Filter Network, where filters are generated dynamically conditioned on an input. We show that this architecture is a powerful one, with increased flexibility thanks to its adaptive nature, yet without an excessive increase in the number of model parameters. A wide variety of filtering operations can be learned this way, including local spatial transformations, but also others like selective (de)blurring or adaptive feature extraction. Moreover, multiple such layers can be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness of the dynamic filter network on the tasks of video and stereo prediction, and reach state-of-the-art performance on the moving MNIST dataset with a much smaller model. By visualizing the learned filters, we illustrate that the network has picked up flow information by only looking at unlabelled training data. This suggests that the network can be used to pretrain networks for various supervised tasks in an unsupervised way, like optical flow and depth estimation.
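
As a concrete illustration of the core idea, the sketch below implements a single dynamic local filtering layer in PyTorch: a small filter-generating network predicts a softmax-normalized k x k filter for every spatial location of the input, and these sample-specific filters are applied in place of weights that stay fixed after training. This is a minimal sketch under our own assumptions, not the authors' reference implementation: the class and layer names (DynamicLocalFilter, filter_gen) are illustrative, the filter-generating network is heavily simplified, and in the paper's video-prediction setup the filters generated from earlier frames are applied to the most recent frame rather than to the generating input itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicLocalFilter(nn.Module):
    """Dynamic local filtering (illustrative sketch): a filter-generating
    network predicts one k x k filter per pixel, conditioned on the input,
    and applies it to that pixel's neighbourhood."""

    def __init__(self, in_channels, filter_size=9):
        super().__init__()
        self.k = filter_size
        # Filter-generating network (simplified): k*k weights per location.
        self.filter_gen = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, filter_size * filter_size, kernel_size=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Predict and softmax-normalize the per-pixel filters.
        filters = F.softmax(self.filter_gen(x), dim=1)       # (B, k*k, H, W)
        # Gather k x k neighbourhoods of the input and apply the filters.
        patches = F.unfold(x, self.k, padding=self.k // 2)   # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        filters = filters.view(b, 1, self.k * self.k, h * w)
        out = (patches * filters).sum(dim=2)                 # (B, C, H*W)
        return out.view(b, c, h, w)

# Toy usage: filter a batch of single-channel 64x64 frames with
# location-specific 9x9 kernels; the output keeps the input shape.
x = torch.randn(2, 1, 64, 64)
y = DynamicLocalFilter(in_channels=1, filter_size=9)(x)
print(y.shape)  # torch.Size([2, 1, 64, 64])
```

Because the filter weights are produced by a network rather than stored as parameters, the layer adapts its filtering to each input while adding only the (small) parameter cost of the filter-generating network.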


Datasets

Moving MNIST · KTH

Results from the Paper


 Ranked #1 on Video Prediction on KTH (Cond metric)


Results from Other Papers


Task              Dataset  Model  Metric  Value   Rank
Video Prediction  KTH      DFN    PSNR    27.26   # 15
Video Prediction  KTH      DFN    SSIM    0.794   # 21
Video Prediction  KTH      DFN    Cond    10      # 1
Video Prediction  KTH      DFN    Pred    20      # 1

Methods


No methods listed for this paper.