Search Results for author: Lorenzo Porzi

Found 28 papers, 6 papers with code

ConsistDreamer: 3D-Consistent 2D Diffusion for High-Fidelity Scene Editing

no code implementations · CVPR 2024 · Jun-Kun Chen, Samuel Rota Bulò, Norman Müller, Lorenzo Porzi, Peter Kontschieder, Yu-Xiong Wang

This paper proposes ConsistDreamer - a novel framework that lifts 2D diffusion models with 3D awareness and 3D consistency, thus enabling high-fidelity instruction-guided scene editing.

Dynamic 3D Gaussian Fields for Urban Areas

no code implementations · 5 Jun 2024 · Tobias Fischer, Jonas Kulhanek, Samuel Rota Bulò, Lorenzo Porzi, Marc Pollefeys, Peter Kontschieder

We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas.

Tasks: Mixed Reality, Novel View Synthesis

Revising Densification in Gaussian Splatting

no code implementations · 9 Apr 2024 · Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder

In this paper, we address the limitations of Adaptive Density Control (ADC) in 3D Gaussian Splatting (3DGS), a scene representation method achieving high-quality, photorealistic results for novel view synthesis.

Tasks: Management, Novel View Synthesis
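The Adaptive Density Control mentioned in this abstract is the densification heuristic of the original 3DGS pipeline: Gaussians whose accumulated view-space positional gradients exceed a threshold are densified, small ones by cloning and large ones by splitting. A minimal numpy sketch of that decision rule (threshold values and array names are illustrative, not taken from the paper, which revises this scheme):

```python
import numpy as np

def adc_decisions(grad_norms, scales, grad_thresh=0.0002, scale_thresh=0.01):
    """Classic ADC rule from 3DGS: densify Gaussians with large view-space
    positional gradients; clone small ones, split large ones."""
    densify = grad_norms > grad_thresh          # candidates for densification
    clone = densify & (scales <= scale_thresh)  # small Gaussian -> duplicate it
    split = densify & (scales > scale_thresh)   # large Gaussian -> split in two
    return clone, split

grads = np.array([0.0001, 0.0005, 0.0004])
scales = np.array([0.005, 0.005, 0.05])
clone, split = adc_decisions(grads, scales)
# second Gaussian is cloned, third is split, first is left untouched
```

The paper targets the known failure modes of exactly this kind of fixed-threshold rule (e.g. under- and over-densification), so the sketch shows the baseline being revised, not the proposed method.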

Robust Gaussian Splatting

no code implementations · 5 Apr 2024 · François Darmon, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder

In this paper, we address common error sources for 3D Gaussian Splatting (3DGS) including blur, imperfect camera poses, and color inconsistencies, with the goal of improving its robustness for practical applications like reconstructions from handheld phone captures.

VR-NeRF: High-Fidelity Virtualized Walkable Spaces

no code implementations · 5 Nov 2023 · Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaž Božič, Dahua Lin, Michael Zollhöfer, Christian Richardt

We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.


GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields

no code implementations · 9 Jun 2023 · Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner

Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfections in reconstructions, for instance due to poorly observed areas or minor lighting changes.

Tasks: 3D Scene Reconstruction, Novel View Synthesis

DiffRF: Rendering-Guided 3D Radiance Field Diffusion

no code implementations · CVPR 2023 · Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner

We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models.

Tasks: Denoising

AutoRF: Learning 3D Object Radiance Fields from Single View Observations

no code implementations · CVPR 2022 · Norman Müller, Andrea Simonelli, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder

We introduce AutoRF - a new approach for learning neural 3D object representations where each object in the training set is observed by only a single view.

Tasks: Novel View Synthesis, Object

Inferring Latent Domains for Unsupervised Deep Domain Adaptation

no code implementations · 25 Mar 2021 · Massimiliano Mancini, Lorenzo Porzi, Samuel Rota Bulò, Barbara Caputo, Elisa Ricci

Most deep UDA approaches operate in a single-source, single-target scenario, i.e. they assume that the source and the target samples arise from a single distribution.

Tasks: Unsupervised Domain Adaptation

Weakly Supervised Multi-Object Tracking and Segmentation

no code implementations · 3 Jan 2021 · Idoia Ruiz, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Joan Serrat

We introduce the problem of weakly supervised Multi-Object Tracking and Segmentation, i.e. joint weakly supervised instance segmentation and multi-object tracking, in which we do not provide any kind of mask annotation.

Tasks: Instance Segmentation, Multi-Object Tracking, +7

Improving Panoptic Segmentation at All Scales

no code implementations · CVPR 2021 · Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder

Crop-based training strategies decouple training resolution from GPU memory consumption, allowing the use of large-capacity panoptic segmentation networks on multi-megapixel images.

Tasks: Panoptic Segmentation, Segmentation
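The decoupling described in the abstract comes from training on fixed-size crops sampled from the full-resolution images: per-batch GPU memory then scales with the crop size rather than the image size. A minimal numpy sketch of such a sampling step (crop size and sampling strategy are illustrative, not the paper's):

```python
import numpy as np

def sample_crop(image, crop_h=512, crop_w=512, rng=None):
    """Sample a random fixed-size crop: memory per training sample depends
    on (crop_h, crop_w), not on the full image resolution."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop_h + 1)    # upper bound is exclusive
    left = rng.integers(0, w - crop_w + 1)
    return image[top:top + crop_h, left:left + crop_w]

full = np.zeros((2048, 4096, 3), dtype=np.uint8)  # multi-megapixel frame
crop = sample_crop(full)                          # always 512x512x3
```

The paper's contribution concerns the scale biases this cropping introduces; the sketch only shows the baseline strategy being improved.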

Are we Missing Confidence in Pseudo-LiDAR Methods for Monocular 3D Object Detection?

no code implementations · ICCV 2021 · Andrea Simonelli, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Elisa Ricci

Pseudo-LiDAR-based methods for monocular 3D object detection have received considerable attention in the community due to the performance gains exhibited on the KITTI3D benchmark, in particular on the commonly reported validation split.

Tasks: Monocular 3D Object Detection, object-detection

Improving Optical Flow on a Pyramid Level

no code implementations · ECCV 2020 · Markus Hofinger, Samuel Rota Bulò, Lorenzo Porzi, Arno Knapitsch, Thomas Pock, Peter Kontschieder

In this work we review the coarse-to-fine spatial feature pyramid concept, which is used in state-of-the-art optical flow estimation networks to make exploration of the pixel flow search space computationally tractable and efficient.

Tasks: Blocking, Optical Flow Estimation
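The coarse-to-fine pyramid concept reviewed in this abstract estimates flow at the coarsest level, then repeatedly upsamples it as the initialisation for the next finer level, so each level only needs a small residual search range. A schematic numpy sketch (the per-level refinement is a placeholder, not the paper's network):

```python
import numpy as np

def upsample_flow(flow, scale=2):
    # Nearest-neighbour upsampling; flow vectors are rescaled with resolution.
    return np.repeat(np.repeat(flow, scale, axis=0), scale, axis=1) * scale

def coarse_to_fine(levels, refine):
    """levels: per-level feature pairs, coarsest first.
    refine(feats, flow) -> residual flow at that level (placeholder here)."""
    h, w = levels[0][0].shape[:2]
    flow = np.zeros((h, w, 2))
    for i, feats in enumerate(levels):
        flow = flow + refine(feats, flow)   # only a small local search needed
        if i < len(levels) - 1:
            flow = upsample_flow(flow)      # initialise the next finer level
    return flow

# Toy pyramid: 8x8 -> 16x16 -> 32x32 feature maps, zero residuals.
levels = [(np.zeros((s, s, 4)), np.zeros((s, s, 4))) for s in (8, 16, 32)]
flow = coarse_to_fine(levels, lambda feats, f: np.zeros_like(f))
```

The paper's improvements target what happens inside each pyramid level; this sketch only shows the surrounding coarse-to-fine loop that makes the search tractable.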

Towards Generalization Across Depth for Monocular 3D Object Detection

no code implementations · ECCV 2020 · Andrea Simonelli, Samuel Rota Bulò, Lorenzo Porzi, Elisa Ricci, Peter Kontschieder

While expensive LiDAR and stereo camera rigs have enabled the development of successful 3D object detection methods, monocular RGB-only approaches lag much behind.

Tasks: Monocular 3D Object Detection, Object, +1

Learning Multi-Object Tracking and Segmentation from Automatic Annotations

no code implementations · CVPR 2020 · Lorenzo Porzi, Markus Hofinger, Idoia Ruiz, Joan Serrat, Samuel Rota Bulò, Peter Kontschieder

Training MOTSNet with our automatically extracted data leads to significantly improved sMOTSA scores on the novel KITTI MOTS dataset (+1.9%/+7.5% on cars/pedestrians), and MOTSNet improves by +4.1% over previously best methods on the MOTSChallenge dataset.

Tasks: Instance Segmentation, Multi-Object Tracking, +4

The Mapillary Traffic Sign Dataset for Detection and Classification on a Global Scale

no code implementations · ECCV 2020 · Christian Ertler, Jerneja Mislej, Tobias Ollmann, Lorenzo Porzi, Gerhard Neuhold, Yubin Kuang

In this paper, we introduce a traffic sign benchmark dataset of 100K street-level images from around the world that encapsulates diverse scenes, wide geographical coverage, and varying weather and lighting conditions, and covers more than 300 manually annotated traffic sign classes.

Tasks: Autonomous Driving, Classification, +4

Disentangling Monocular 3D Object Detection

no code implementations · ICCV 2019 · Andrea Simonelli, Samuel Rota Bulò, Lorenzo Porzi, Manuel López-Antequera, Peter Kontschieder

In this paper we propose an approach for monocular 3D object detection from a single RGB image, which leverages a novel disentangling transformation for 2D and 3D detection losses and a novel, self-supervised confidence score for 3D bounding boxes.

Tasks: 3D Object Detection From Monocular Images, Disentanglement, +3
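The disentangling idea can be pictured as splitting the box parameters into groups and scoring each group with the other groups substituted by their ground-truth values, so one group's error cannot mask or amplify another's. A heavily simplified numpy sketch with a toy 1D "box" (the parametrisation, grouping, and L1 loss are illustrative, not the paper's exact formulation):

```python
import numpy as np

def box_corners(center, size):
    # Toy 1D 'box': two corners derived from a centre and a size.
    return np.array([center - size / 2, center + size / 2])

def disentangled_loss(pred, gt):
    """Per-group loss: replace all groups except the evaluated one with
    ground truth, then compare the resulting boxes in corner space."""
    groups = ["center", "size"]
    total = 0.0
    for g in groups:
        mixed = {k: (pred[k] if k == g else gt[k]) for k in groups}
        total += np.abs(box_corners(**mixed) - box_corners(**gt)).sum()
    return total

pred = {"center": 1.5, "size": 2.5}
gt = {"center": 1.0, "size": 2.0}
loss = disentangled_loss(pred, gt)  # center error and size error scored separately
```

The key property is that each term isolates a single parameter group while still being measured in the final (corner/box) output space.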

Seamless Scene Segmentation

5 code implementations · CVPR 2019 · Lorenzo Porzi, Samuel Rota Bulò, Aleksander Colovic, Peter Kontschieder

In this work we introduce a novel, CNN-based architecture that can be trained end-to-end to deliver seamless scene segmentation results.

Tasks: Panoptic Segmentation, Scene Segmentation, +1

Boosting Domain Adaptation by Discovering Latent Domains

2 code implementations · CVPR 2018 · Massimiliano Mancini, Lorenzo Porzi, Samuel Rota Bulò, Barbara Caputo, Elisa Ricci

Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution.

Tasks: Domain Adaptation
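The two components described in the abstract, a side branch producing soft latent-domain assignments and layers that use those assignments to align feature distributions, can be pictured as a membership-weighted normalisation: each latent domain contributes statistics computed from the samples softly assigned to it. A minimal numpy sketch (a weighted batch-norm variant, purely illustrative of the mechanism, not the paper's exact layers):

```python
import numpy as np

def latent_domain_bn(x, memberships, eps=1e-5):
    """x: (N, C) features; memberships: (N, K) soft assignments of each
    sample to K latent domains (e.g. from a side branch; rows sum to 1).
    Each sample is normalised using the statistics of its domains."""
    out = np.zeros_like(x)
    for k in range(memberships.shape[1]):
        w = memberships[:, k:k + 1]                 # (N, 1) soft weights
        mean = (w * x).sum(0) / w.sum()             # weighted domain mean
        var = (w * (x - mean) ** 2).sum(0) / w.sum()
        out += w * (x - mean) / np.sqrt(var + eps)  # weighted normalisation
    return out

x = np.random.default_rng(0).normal(size=(6, 3))
m = np.tile([[1.0, 0.0], [0.0, 1.0]], (3, 1))  # hard split into 2 latent domains
y = latent_domain_bn(x, m)
```

With hard memberships this reduces to per-domain batch normalisation; soft memberships let the assignment branch be learned jointly with the rest of the network.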

AutoDIAL: Automatic DomaIn Alignment Layers

2 code implementations · ICCV 2017 · Fabio Maria Carlucci, Lorenzo Porzi, Barbara Caputo, Elisa Ricci, Samuel Rota Bulò

Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one.

Tasks: Domain Adaptation
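Domain Alignment Layers of this kind can be pictured as normalisation applied with domain-dependent statistics, so both source and target features are mapped toward a common reference distribution. A minimal numpy sketch, where a coefficient blends per-domain and joint statistics (the blending scheme is illustrative, not AutoDIAL's exact formulation):

```python
import numpy as np

def dial_normalize(x_src, x_tgt, alpha=0.5, eps=1e-5):
    """Normalise each domain with a blend of its own statistics and the
    joint statistics: alpha=1 keeps the domains fully separate, while
    smaller alpha pushes both toward shared (cross-domain) statistics."""
    joint = np.concatenate([x_src, x_tgt])
    joint_mean, joint_var = joint.mean(0), joint.var(0)

    def norm(x):
        mean = alpha * x.mean(0) + (1 - alpha) * joint_mean
        var = alpha * x.var(0) + (1 - alpha) * joint_var
        return (x - mean) / np.sqrt(var + eps)

    return norm(x_src), norm(x_tgt)

rng = np.random.default_rng(1)
s, t = dial_normalize(rng.normal(0, 1, (8, 4)), rng.normal(3, 2, (8, 4)))
```

In AutoDIAL the degree of alignment is itself learned per layer rather than fixed; here it is just a constant to keep the sketch short.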

Just DIAL: DomaIn Alignment Layers for Unsupervised Domain Adaptation

no code implementations · 21 Feb 2017 · Fabio Maria Carlucci, Lorenzo Porzi, Barbara Caputo, Elisa Ricci, Samuel Rota Bulò

The empirical fact that classifiers, trained on given data collections, perform poorly when tested on data acquired in different settings is theoretically explained in domain adaptation through a shift among distributions of the source and target domains.

Tasks: Unsupervised Domain Adaptation
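The distribution-shift explanation in the abstract can be demonstrated in a few lines: a decision rule fit on source data degrades sharply once the target distribution moves. A toy numpy illustration (the Gaussian classes, threshold classifier, and shift amount are all invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
# Source domain: class 0 ~ N(0, 1), class 1 ~ N(2, 1).
xs = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
ys = np.concatenate([np.zeros(500), np.ones(500)])
thresh = 1.0                          # optimal threshold for the source domain

# Target domain: same classes, but the whole distribution is shifted by +2,
# e.g. data acquired with a different sensor or in different conditions.
xt = xs + 2.0

src_acc = ((xs > thresh) == ys).mean()
tgt_acc = ((xt > thresh) == ys).mean()
# src_acc stays high; tgt_acc drops because most class-0 target samples
# now land on the positive side of the source-fitted threshold.
```

Domain alignment layers counter exactly this effect by renormalising features so the shifted target distribution again matches the one the classifier was trained on.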
