Search Results for author: Noha Radwan

Found 9 papers, 4 papers with code

NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

1 code implementation • CVPR 2021 • Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, Daniel Duckworth

We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs.
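The paper's key extension over a standard NeRF is a learned per-image appearance embedding that absorbs photometric variation (exposure, lighting, post-processing) across uncontrolled photos. A minimal sketch of that conditioning in PyTorch; module names and sizes are illustrative, and the paper's transient head and positional encodings are omitted:

```python
import torch
import torch.nn as nn

class AppearanceConditionedNeRF(nn.Module):
    """Toy NeRF-style field with a per-image appearance embedding (the NeRF-W idea).

    Illustrative only: the published model also has a transient head and
    positional encodings, both omitted here.
    """

    def __init__(self, num_images: int, embed_dim: int = 48, hidden: int = 256):
        super().__init__()
        self.appearance = nn.Embedding(num_images, embed_dim)  # one latent per photo
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)  # density stays appearance-independent
        self.color = nn.Sequential(        # color is conditioned on the photo's embedding
            nn.Linear(hidden + embed_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, image_id: torch.Tensor):
        h = self.trunk(xyz)                # (N, hidden)
        sigma = torch.relu(self.sigma(h))  # (N, 1) volume density
        rgb = self.color(torch.cat([h, self.appearance(image_id)], dim=-1))
        return rgb, sigma

# Query 1024 points, all attributed to photo #7 of a 100-photo collection.
model = AppearanceConditionedNeRF(num_images=100)
rgb, sigma = model(torch.rand(1024, 3), torch.full((1024,), 7, dtype=torch.long))
```

Keeping density independent of the embedding shares geometry across all photos while letting each photo render with its own appearance.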

Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations

1 code implementation • CVPR 2022 • Mehdi S. M. Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lucic, Daniel Duckworth, Alexey Dosovitskiy, Jakob Uszkoreit, Thomas Funkhouser, Andrea Tagliasacchi

In this work, we propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area, infers a "set-latent scene representation", and synthesises novel views, all in a single feed-forward pass.

Novel View Synthesis • Semantic Segmentation
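A rough sketch of the feed-forward pipeline the SRT abstract describes: input views are tokenized, a transformer encoder produces the set-latent scene representation, and novel views are decoded by cross-attending from query rays into that set. Everything below (flat patch embedding, layer sizes, the 6-D ray parameterization) is an assumption standing in for the published architecture:

```python
import torch
import torch.nn as nn

class TinySRT(nn.Module):
    """Geometry-free novel view synthesis in one feed-forward pass (SRT-style sketch).

    A real SRT uses a CNN patch embedder and much deeper transformers;
    here images are simply split into flat patches.
    """

    def __init__(self, patch: int = 8, dim: int = 128):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(3 * patch * patch, dim)  # flattened RGB patch -> token
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.ray_embed = nn.Linear(6, dim)              # ray origin + direction -> query
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.to_rgb = nn.Linear(dim, 3)

    def forward(self, images: torch.Tensor, rays: torch.Tensor):
        # images: (B, V, 3, H, W) input views; rays: (B, R, 6) query rays
        B, V, C, H, W = images.shape
        p = self.patch
        patches = images.reshape(B, V, C, H // p, p, W // p, p)
        patches = patches.permute(0, 1, 3, 5, 2, 4, 6).reshape(B, -1, C * p * p)
        tokens = self.encoder(self.embed(patches))      # set-latent scene representation
        q = self.ray_embed(rays)
        out, _ = self.cross_attn(q, tokens, tokens)     # decode each ray against the set
        return torch.sigmoid(self.to_rgb(out))          # (B, R, 3) colors

model = TinySRT()
rgb = model(torch.rand(2, 3, 3, 32, 32), torch.rand(2, 64, 6))
```

Because rendering is a single encoder pass plus per-ray attention, no per-scene optimization or explicit geometry is involved, which is the sense in which the method is "geometry-free".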

Deep Auxiliary Learning for Visual Localization and Odometry

1 code implementation • 9 Mar 2018 • Abhinav Valada, Noha Radwan, Wolfram Burgard

We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single-task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation.

Auxiliary Learning • Visual Localization • +1
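The auxiliary-learning idea in the abstract above is a single network whose shared features serve both global pose regression and frame-to-frame odometry. A toy multitask layout under that assumption; the real VLocNet uses a ResNet backbone and adaptive task weighting, which this sketch leaves out:

```python
import torch
import torch.nn as nn

class TinyVLocNet(nn.Module):
    """Joint global localization + visual odometry (VLocNet-style multitask sketch)."""

    def __init__(self, feat: int = 64):
        super().__init__()
        # Shared encoder: its features feed both tasks (the auxiliary-learning idea).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.global_pose = nn.Linear(feat, 7)      # xyz translation + quaternion
        self.odometry = nn.Linear(2 * feat, 7)     # relative pose between a frame pair

    def forward(self, prev_frame: torch.Tensor, cur_frame: torch.Tensor):
        f_prev, f_cur = self.encoder(prev_frame), self.encoder(cur_frame)
        pose = self.global_pose(f_cur)                        # absolute pose of current frame
        odom = self.odometry(torch.cat([f_prev, f_cur], -1))  # frame-to-frame motion
        return pose, odom

model = TinyVLocNet()
pose, odom = model(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```

Training both heads against a shared encoder is what lets the odometry task act as an auxiliary signal that improves global localization.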

VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry

no code implementations • 23 Apr 2018 • Noha Radwan, Abhinav Valada, Wolfram Burgard

Semantic understanding and localization are fundamental enablers of robot autonomy that have for the most part been tackled as disjoint problems.

Outdoor Localization • Scene Understanding • +1

Topometric Localization with Deep Learning

no code implementations • 27 Jun 2017 • Gabriel L. Oliveira, Noha Radwan, Wolfram Burgard, Thomas Brox

Compared to LiDAR-based localization methods, which provide high accuracy but rely on expensive sensors, visual localization approaches require only a camera and are thus more cost-effective, though their accuracy and reliability are typically inferior to those of LiDAR-based methods.

Visual Localization • Visual Odometry

Multimodal Interaction-aware Motion Prediction for Autonomous Street Crossing

no code implementations • 21 Aug 2018 • Noha Radwan, Wolfram Burgard, Abhinav Valada

Learned representations from the traffic light recognition stream are fused with the estimated trajectories from the motion prediction stream to learn the crossing decision.

Motion Prediction
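The fusion step quoted above can be pictured as late fusion of two stream outputs feeding a crossing classifier. A schematic version; the stub encoders, dimensions, and concatenation-based fusion are assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class CrossingDecision(nn.Module):
    """Fuse traffic-light features with predicted trajectories -> cross / don't cross.

    Sketch only: the paper's streams are full networks; here each is a stub encoder.
    """

    def __init__(self, light_dim: int = 32, traj_dim: int = 32):
        super().__init__()
        self.light_stream = nn.Sequential(nn.Linear(16, light_dim), nn.ReLU())  # stub light features
        self.motion_stream = nn.GRU(2, traj_dim, batch_first=True)              # encode (x, y) trajectories
        self.head = nn.Sequential(
            nn.Linear(light_dim + traj_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),                    # logits: {wait, cross}
        )

    def forward(self, light_feat: torch.Tensor, trajectories: torch.Tensor):
        a = self.light_stream(light_feat)        # (B, light_dim)
        _, h = self.motion_stream(trajectories)  # h: (1, B, traj_dim) final GRU state
        fused = torch.cat([a, h[-1]], dim=-1)    # late fusion by concatenation
        return self.head(fused)

model = CrossingDecision()
logits = model(torch.rand(4, 16), torch.rand(4, 10, 2))  # 4 scenes, 10-step trajectories
```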

RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs

no code implementations • CVPR 2022 • Michael Niemeyer, Jonathan T. Barron, Ben Mildenhall, Mehdi S. M. Sajjadi, Andreas Geiger, Noha Radwan

We observe that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry, and by divergent behavior at the start of training.

Novel View Synthesis
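Of the two failure modes named in the abstract, erroneous scene geometry is the one a patch-based depth smoothness regularizer targets: depths rendered at patches from unobserved viewpoints are pushed to vary smoothly. A hedged sketch of such a loss; the squared-difference form and patch shapes here are illustrative:

```python
import torch

def depth_smoothness_loss(depth_patch: torch.Tensor) -> torch.Tensor:
    """Penalize depth differences between neighboring pixels of a rendered patch.

    depth_patch: (B, S, S) depths rendered at S x S patches from unobserved
    viewpoints. Encouraging locally smooth geometry suppresses the floating
    artifacts that plague sparse-input scenes.
    """
    d_h = (depth_patch[:, :, 1:] - depth_patch[:, :, :-1]) ** 2  # horizontal neighbors
    d_v = (depth_patch[:, 1:, :] - depth_patch[:, :-1, :]) ** 2  # vertical neighbors
    return d_h.mean() + d_v.mean()

loss = depth_smoothness_loss(torch.rand(8, 8, 8))  # 8 patches of 8 x 8 depths
```

The divergence at the start of training mentioned in the abstract is addressed separately, by annealing the range over which ray samples are drawn, which this snippet does not cover.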
