Enhanced Residual Networks for Context-based Image Outpainting

14 May 2020 · Przemek Gardias, Eric Arthur, Huaming Sun

Although humans perform well at predicting what exists beyond the boundaries of an image, deep models struggle to understand context and extrapolate from retained information. This task, known as image outpainting, involves generating realistic expansions of an image's boundaries. Current models use generative adversarial networks to produce results that lack localized image-feature consistency and appear fake. We propose two methods to address this issue: the use of both a local and a global discriminator, and the addition of residual blocks within the encoding section of the network. Comparisons of L1 loss, mean squared error (MSE) loss, and qualitative differences between our model and the baseline reveal that our model naturally extends object boundaries and produces more internally consistent images than current methods, though at lower fidelity.
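The two proposed changes can be sketched in PyTorch. The block below is a minimal illustration, not the authors' implementation: the layer widths, crop scheme, and class names (`ResidualEncoder`, `PatchDiscriminator`, `dual_discriminator_scores`) are assumptions. It shows (1) residual blocks inserted into a convolutional encoder and (2) a global discriminator that scores the full image alongside a local discriminator that scores only the outpainted border strips.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as added to the encoder."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity skip around the conv path

class ResidualEncoder(nn.Module):
    """Illustrative encoder: strided convs downsample, residual blocks refine."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(inplace=True),
            ResidualBlock(32),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(inplace=True),
            ResidualBlock(64),
        )

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Small conv discriminator producing one realism logit per image."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def dual_discriminator_scores(full_img, border_width, d_global, d_local):
    """Global D sees the whole image; local D sees only the left/right
    outpainted strips (an assumed crop scheme, for illustration)."""
    left = full_img[..., :border_width]
    right = full_img[..., -border_width:]
    local_in = torch.cat([left, right], dim=-1)
    return d_global(full_img), d_local(local_in)
```

In training, the two discriminator logits would feed separate adversarial loss terms, so the generator is penalized both for globally implausible images and for locally inconsistent border regions.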



| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Outpainting | Places365-Standard | Residual Encoder | MSE | 0.7814 | #2 |
| Image Outpainting | Places365-Standard | Residual Encoder | Adversarial | 0.0941 | #1 |
| Image Outpainting | Places365-Standard | Residual Encoder | L1 | 0.08 | #2 |

