3D Room Layouts From A Single RGB Panorama
10 papers with code • 3 benchmarks • 3 datasets
Image: Zou et al.
We propose an algorithm that predicts room layout from a single image and generalizes across panoramas and perspective images, and across cuboid and more general layouts (e.g., L-shaped rooms).
We present a deep learning framework, called DuLa-Net, to predict Manhattan-world 3D room layouts from a single RGB panorama.
We present a new approach to the problem of estimating the 3D room layout from a single panoramic image.
We present HoHoNet, a versatile and efficient framework for holistic understanding of an indoor 360-degree panorama using a Latent Horizontal Feature (LHFeat).
Recent years have seen flourishing research on both semi-supervised learning and 3D room layout reconstruction.
Although significant progress has been made in room layout estimation, most methods aim to reduce the loss in 2D pixel coordinates rather than exploiting the room structure in 3D space.
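To make the distinction concrete, here is a minimal, hypothetical sketch in PyTorch. The `boundary_to_floor_points` helper, the fixed camera height, and the column-wise boundary angles are illustrative assumptions, not any cited paper's formulation; the point is only the contrast between a loss on 2D boundary positions and the same error measured on the 3D floor points those positions induce.

```python
import math
import torch

def boundary_to_floor_points(v_angle, camera_height=1.6):
    """Project per-column floor-boundary elevation angles (radians,
    measured downward from the horizon) onto the floor plane in 3D.

    Assumes an equirectangular panorama whose columns index azimuth
    uniformly, and a known camera height above the floor.
    """
    n = v_angle.shape[-1]
    theta = torch.linspace(-math.pi, math.pi, n)   # azimuth per image column
    dist = camera_height / torch.tan(v_angle)      # horizontal distance to the boundary
    x = dist * torch.cos(theta)
    z = dist * torch.sin(theta)
    y = torch.full_like(x, -camera_height)         # floor plane below the camera
    return torch.stack([x, y, z], dim=-1)

def loss_2d(pred_angle, gt_angle):
    # Penalizes error in image (pixel/angle) space only.
    return torch.abs(pred_angle - gt_angle).mean()

def loss_3d(pred_angle, gt_angle, camera_height=1.6):
    # Penalizes the metric error of the reconstructed floor boundary,
    # so each column contributes according to its 3D displacement.
    pred = boundary_to_floor_points(pred_angle, camera_height)
    gt = boundary_to_floor_points(gt_angle, camera_height)
    return torch.abs(pred - gt).mean()
```

Under a 2D loss, a small angular error near the horizon counts the same as one far below it, even though the former moves the reconstructed wall much further in 3D; a 3D loss weights errors by their geometric effect.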
A common approach has been to use standard convolutional networks to predict the corners and boundaries, followed by post-processing to generate the 3D layout.
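As an illustration of this pattern, the sketch below is a hypothetical toy model, not the architecture of any cited paper: a small encoder-decoder predicts per-pixel corner and boundary probability maps, and a crude post-processing step thresholds the corner map's column-wise maxima to find candidate corner columns.

```python
import torch
import torch.nn as nn

class CornerBoundaryNet(nn.Module):
    """Toy encoder-decoder producing two per-pixel probability maps:
    one for layout corners, one for wall-ceiling/wall-floor boundaries.
    Illustrative only; real systems use much deeper backbones.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),  # corner + boundary maps
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

# Post-processing sketch: treat strong column-wise corner responses as
# candidate corner columns; lifting them to a 3D layout (e.g., under a
# Manhattan-world assumption) is omitted here.
pano = torch.rand(1, 3, 256, 512)                 # equirectangular RGB panorama
corner_map, boundary_map = CornerBoundaryNet()(pano).unbind(1)
col_score = corner_map.amax(dim=1)                # (1, 512) per-column corner evidence
corner_cols = (col_score > 0.5).nonzero()
```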
We transform image features from a cubemap tile into the Hough space of a Manhattan world and map the features directly to the geometric output.
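The Hough-space idea can be sketched as follows. This is a rough, hypothetical illustration using classical accumulator voting on an edge map, not the learned feature mapping described above: in a gravity-aligned cubemap tile, Manhattan-world layout lines project to axis-aligned lines, so the usual (rho, theta) Hough space collapses to two 1D accumulators.

```python
import numpy as np

def manhattan_hough(edge_map):
    """Accumulate edge evidence from a cubemap tile into a Hough space
    restricted to Manhattan-aligned lines: one vote per row for
    horizontal lines, one vote per column for vertical lines.
    """
    ys, xs = np.nonzero(edge_map > 0.5)
    h, w = edge_map.shape
    horiz_votes = np.bincount(ys, minlength=h)   # candidate horizontal lines
    vert_votes = np.bincount(xs, minlength=w)    # candidate vertical lines
    return horiz_votes, vert_votes

# Usage: peaks in each accumulator mark candidate layout lines, which a
# network could then map directly to geometric outputs.
tile = (np.random.rand(256, 256) > 0.98).astype(float)  # stand-in edge map
h_votes, v_votes = manhattan_hough(tile)
print(h_votes.argmax(), v_votes.argmax())
```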