Geometry-Free View Synthesis: Transformers and no 3D Priors

ICCV 2021  ·  Robin Rombach, Patrick Esser, Björn Ommer

Is a geometric model required to synthesize novel views from a single image? Being bound to local convolutions, CNNs need explicit 3D biases to model geometric transformations. In contrast, we demonstrate that a transformer-based model can synthesize entirely novel views without any hand-engineered 3D biases. This is achieved by (i) a global attention mechanism for implicitly learning long-range 3D correspondences between source and target views, and (ii) a probabilistic formulation necessary to capture the ambiguity inherent in predicting novel views from a single image, thereby overcoming the limitations of previous approaches that are restricted to relatively small viewpoint changes. We evaluate various ways to integrate 3D priors into a transformer architecture. However, our experiments show that no such geometric priors are required and that the transformer is capable of implicitly learning 3D relationships between images. Furthermore, this approach outperforms the state of the art in terms of visual quality while covering the full distribution of possible realizations. Code is available.
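The core of point (i) is that, unlike a convolution with its local receptive field, a single attention step lets every target-view token attend to every source-view token at once, so long-range correspondences induced by a camera motion can be learned rather than hand-engineered. Below is a minimal numpy sketch of such a global attention step, not the paper's implementation: the function name, the random stand-in projection matrices, and the idea of appending camera-pose tokens to the context are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(target_tokens, source_tokens, camera_tokens):
    """One global attention step: every target token attends to ALL
    source-view tokens plus camera-pose tokens -- no epipolar mask,
    no explicit 3D prior restricting which pairs may interact.
    Shapes: (n_tgt, d), (n_src, d), (n_cam, d)."""
    context = np.concatenate([source_tokens, camera_tokens], axis=0)
    d = target_tokens.shape[-1]
    # Hypothetical random projections stand in for learned weights.
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = target_tokens @ Wq, context @ Wk, context @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))  # (n_tgt, n_src + n_cam)
    return attn @ v, attn

# Toy usage: 4 target tokens attend to 6 source tokens + 2 camera tokens.
out, attn = global_attention(
    np.zeros((4, 8)), np.ones((6, 8)), np.ones((2, 8))
)
```

Each row of `attn` is a distribution over the full source view, which is exactly the kind of dense, unconstrained interaction a local convolution cannot express in one layer.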


Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Novel View Synthesis | ACID | impl.-nodepth | FID | 42.88 | # 1 |
| Novel View Synthesis | ACID | impl.-catdepth | SSIM | 0.42 | # 1 |
| Novel View Synthesis | ACID | hybrid | NLL | 5.341 | # 1 |
| Novel View Synthesis | ACID | hybrid | PSIM | 2.83 | # 1 |
| Novel View Synthesis | ACID | hybrid | PSNR | 15.54 | # 1 |
| Novel View Synthesis | RealEstate10K | hybrid | FID | 48.84 | # 1 |
| Novel View Synthesis | RealEstate10K | hybrid | PSNR | 12.51 | # 1 |
| Novel View Synthesis | RealEstate10K | impl.-depth | NLL | 4.836 | # 1 |
| Novel View Synthesis | RealEstate10K | impl.-depth | PSIM | 3.05 | # 1 |
| Novel View Synthesis | RealEstate10K | impl.-depth | SSIM | 0.44 | # 1 |

