Search Results for author: Luca Del Pero

Found 11 papers, 1 paper with code

Articulated motion discovery using pairs of trajectories

no code implementations CVPR 2015 Luca Del Pero, Susanna Ricco, Rahul Sukthankar, Vittorio Ferrari

We propose an unsupervised approach for discovering characteristic motion patterns in videos of highly articulated objects performing natural, unscripted behaviors, such as tigers in the wild.

Recovering Spatiotemporal Correspondence between Deformable Objects by Exploiting Consistent Foreground Motion in Video

no code implementations 1 Dec 2014 Luca Del Pero, Susanna Ricco, Rahul Sukthankar, Vittorio Ferrari

Given unstructured videos of deformable objects, we automatically recover spatiotemporal correspondences to map one object to another (such as animals in the wild).

Object

Discovering the Physical Parts of an Articulated Object Class From Multiple Videos

no code implementations CVPR 2016 Luca Del Pero, Susanna Ricco, Rahul Sukthankar, Vittorio Ferrari

We propose a motion-based method to discover the physical parts of an articulated object class (e.g. head/torso/leg of a horse) from multiple videos.

Motion Segmentation Object +1

Collaborative Augmented Reality on Smartphones via Life-long City-scale Maps

no code implementations 10 Nov 2020 Lukas Platinsky, Michal Szabados, Filip Hlasek, Ross Hemsley, Luca Del Pero, Andrej Pancik, Bryan Baum, Hugo Grimmett, Peter Ondruska

In this paper we present the first published end-to-end production computer-vision system for powering city-scale shared augmented reality experiences on mobile devices.

End-to-end learning of keypoint detection and matching for relative pose estimation

no code implementations 2 Apr 2021 Antoine Fond, Luca Del Pero, Nikola Sivacki, Marco Paladini

We demonstrate our method for the task of visual localization of a query image within a database of images with known pose.

Camera Localization Keypoint Detection +1
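
For reference, the classical (non-learned) counterpart to the keypoint-based relative pose estimation described above can be sketched with off-the-shelf tools: detect and match keypoints, then fit an essential matrix and decompose it into a rotation and translation. The sketch below is illustrative only and is not the paper's end-to-end learned method; the intrinsics matrix K, the image file names, and the ORB/RANSAC parameters are assumptions for the example.

```python
# Minimal classical baseline: relative pose between two images from
# keypoint matches (ORB + essential matrix). Illustrative sketch only;
# the paper above replaces hand-crafted detection/matching with an
# end-to-end learned pipeline.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate rotation R and translation direction t from img1 to img2.
    K is the shared 3x3 camera intrinsics matrix (assumed known)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-check for reciprocal matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly fit the essential matrix, then decompose it into (R, t).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is recovered only up to scale

if __name__ == "__main__":
    # Hypothetical query/database images and placeholder intrinsics.
    img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("database_frame.png", cv2.IMREAD_GRAYSCALE)
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = relative_pose(img1, img2, K)
    print(R, t)
```

In a localization setting such as the one the paper evaluates, composing this relative pose with the known pose of the database image yields an estimate of the query camera's pose.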

What data do we need for training an AV motion planner?

no code implementations 26 May 2021 Long Chen, Lukas Platinsky, Stefanie Speichert, Blazej Osinski, Oliver Scheel, Yawei Ye, Hugo Grimmett, Luca Del Pero, Peter Ondruska

If cheaper sensors could be used for collection instead, data availability would increase, which is crucial in a field where data volume requirements are large and availability is limited.

Imitation Learning Motion Planning

SimNet: Learning Reactive Self-driving Simulations from Real-world Observations

1 code implementation 26 May 2021 Luca Bergamini, Yawei Ye, Oliver Scheel, Long Chen, Chih Hu, Luca Del Pero, Blazej Osinski, Hugo Grimmett, Peter Ondruska

We train our system directly from 1,000 hours of driving logs and measure realism and reactivity of the simulation as its two key properties.

Autonomy 2.0: Why is self-driving always 5 years away?

no code implementations 16 Jul 2021 Ashesh Jain, Luca Del Pero, Hugo Grimmett, Peter Ondruska

Despite the numerous successes of machine learning over the past decade (image recognition, decision-making, NLP, image synthesis), self-driving technology has not yet followed the same trend.

Decision Making Image Generation

Quantity over Quality: Training an AV Motion Planner with Large Scale Commodity Vision Data

no code implementations 3 Mar 2022 Lukas Platinsky, Tayyab Naseer, Hui Chen, Ben Haines, Haoyue Zhu, Hugo Grimmett, Luca Del Pero

This motivates the use of commodity sensors like cameras for data collection, which are an order of magnitude cheaper than HD sensor suites, but offer lower fidelity.

Motion Planning
