21 papers with code • 3 benchmarks • 4 datasets
Predicting the visual context of an image beyond its boundary.
We demonstrate how combining the inductive bias of CNNs with the expressivity of transformers enables them to model, and thereby synthesize, high-resolution images.
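The idea above can be sketched in miniature: a convolutional stage compresses the image into a short grid of latent tokens (the locality bias), and a transformer-style self-attention layer then models global relations between all tokens. This is a toy numpy illustration under assumed shapes, not the paper's actual architecture; the patch-averaging "encoder" and single attention head are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_encode(img, patch=8):
    # Illustrative "CNN-like" stage: non-overlapping patch averaging
    # stands in for a convolutional encoder that compresses the image
    # into a small grid of latent vectors.
    H, W, C = img.shape
    gh, gw = H // patch, W // patch
    grid = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, C)
    return grid.mean(axis=(1, 3)).reshape(gh * gw, C)  # (tokens, C)

def self_attention(x):
    # Single-head self-attention over the latent tokens: the
    # "transformer" stage, mixing information across all patches.
    d = x.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v

img = rng.random((64, 64, 3))   # toy image
tokens = conv_encode(img)        # 8x8 grid -> 64 latent tokens
out = self_attention(tokens)     # global mixing of all tokens
print(tokens.shape, out.shape)
```

The division of labor is the point: convolutions keep the token count small enough that global attention over the whole image remains affordable at high resolution.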
The challenging task of image outpainting (extrapolation) has received comparatively little attention in relation to its cousin, image inpainting (completion).
This way, the hallucinated details are integrated with the style of the original image, further boosting the quality of the result and potentially allowing arbitrary output resolutions.
In this paper, we study the problem of generating a set of realistic and diverse backgrounds when given only a small foreground region.
The second challenge is how to maintain high quality in generated results, especially for multi-step generations in which generated regions are spatially far away from the initial input.
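The multi-step generation described above can be sketched as a loop: each step hallucinates a new strip conditioned only on the current boundary, so regions far from the initial input accumulate error. The `outpaint_step` "generator" here is a hypothetical toy stand-in (it just fades the boundary column outward), not any paper's trained model.

```python
import numpy as np

def outpaint_step(canvas, step_width=16):
    # Hypothetical single outpainting step: hallucinate `step_width`
    # new columns conditioned only on the current right boundary.
    # A real model would run a trained generator here; this toy
    # stand-in just fades the boundary column outward.
    boundary = canvas[:, -1:, :]
    fade = np.linspace(0.95, 0.5, step_width)[None, :, None]
    new_cols = boundary * fade
    return np.concatenate([canvas, new_cols], axis=1)

def outpaint(canvas, steps=4):
    # Multi-step generation: each step sees only the previous result,
    # so the farther a region lies from the original input, the more
    # error it accumulates -- the quality-maintenance challenge.
    for _ in range(steps):
        canvas = outpaint_step(canvas)
    return canvas

rng = np.random.default_rng(1)
image = rng.random((32, 32, 3))
result = outpaint(image, steps=4)
print(image.shape, "->", result.shape)  # (32, 32, 3) -> (32, 96, 3)
```

Because the original pixels are never revisited, any artifact introduced at one step is baked into the conditioning for every later step, which is why distant regions are the hard case.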
Although humans are adept at predicting what lies beyond the boundaries of an image, deep models struggle to understand context and to extrapolate from retained information.
In this paper, we propose a novel two-stage Siamese adversarial model for image extrapolation, named Siamese Expansion Network (SiENet).