Search Results for author: Giulia Pasquale

Found 11 papers, 7 papers with code

Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot

1 code implementation • 27 Jun 2022 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Giacomo Meanti, Lorenzo Rosasco, Lorenzo Natale

In this work, we focus on the instance segmentation task and provide a comprehensive study of different techniques that allow adapting an object segmentation model in the presence of novel objects or different domains.

Instance Segmentation • Segmentation +1

Grasp Pre-shape Selection by Synthetic Training: Eye-in-hand Shared Control on the Hannes Prosthesis

1 code implementation • 18 Mar 2022 • Federico Vasile, Elisa Maiettini, Giulia Pasquale, Astrid Florio, Nicolò Boccardo, Lorenzo Natale

In order to overcome the lack of data of this kind and reduce the need for tedious data collection sessions for training the system, we devise a pipeline for rendering synthetic visual sequences of hand trajectories.

Benchmarking • Object Recognition

ROFT: Real-Time Optical Flow-Aided 6D Object Pose and Velocity Tracking

2 code implementations • 6 Nov 2021 • Nicola A. Piga, Yuriy Onyshchuk, Giulia Pasquale, Ugo Pattacini, Lorenzo Natale

In this work, we introduce ROFT, a Kalman filtering approach for 6D object pose and velocity tracking from a stream of RGB-D images.

6D Pose Estimation using RGB • Hand Pose Estimation +5
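The abstract above mentions a Kalman filtering approach to pose and velocity tracking. As an illustration only (not the ROFT implementation), a minimal constant-velocity Kalman filter that estimates a 3D position and its velocity from noisy position measurements can be sketched as follows; the state layout, noise levels, and frame rate are all assumptions:

```python
import numpy as np

# Minimal constant-velocity Kalman filter sketch: track a 3D position and
# its velocity from noisy position measurements. Illustrative only; the
# state, noise values, and measurement model here are assumptions, not ROFT.

dt = 1.0 / 30.0            # assumed frame rate of the RGB-D stream
F = np.eye(6)              # state transition: position integrates velocity
F[:3, 3:] = dt * np.eye(3)
H = np.zeros((3, 6))       # measurement model: we observe position only
H[:, :3] = np.eye(3)
Q = 1e-6 * np.eye(6)       # process noise covariance (assumed)
R = 1e-2 * np.eye(3)       # measurement noise covariance (assumed)

x = np.zeros(6)            # state: [position, velocity]
P = np.eye(6)              # state covariance

def kalman_step(x, P, z):
    """One predict/update cycle given a position measurement z of shape (3,)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Feed noisy positions of an object moving at constant velocity; the
# velocity block of the state should converge toward the true velocity.
true_v = np.array([0.1, 0.0, -0.05])
rng = np.random.default_rng(0)
for t in range(100):
    z = true_v * t * dt + rng.normal(scale=0.01, size=3)
    x, P = kalman_step(x, P, z)
```

The point of the sketch is that velocity is never measured directly: it is recovered by the filter from the sequence of position measurements, which is what makes such filters useful for joint pose and velocity tracking.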

From Handheld to Unconstrained Object Detection: a Weakly-supervised On-line Learning Approach

no code implementations • 28 Dec 2020 • Elisa Maiettini, Andrea Maracani, Raffaello Camoriano, Giulia Pasquale, Vadim Tikhanoff, Lorenzo Rosasco, Lorenzo Natale

We show that the robot can improve adaptation to novel domains, either by interacting with a human teacher (Active Learning) or with autonomous supervision (Semi-supervised Learning).

Active Learning • Line Detection +4
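The abstract above mentions active learning, where a human teacher labels the samples the model is least sure about. A minimal uncertainty-sampling sketch (illustrative only; the scores, pool size, and query budget are assumptions, not the paper's method) looks like this:

```python
import numpy as np

# Minimal uncertainty-sampling sketch: from a pool of unlabeled detections,
# select the ones the current classifier is least confident about and send
# them to the human teacher for labeling. Purely illustrative; the scores
# and query budget are made up, not taken from the paper.

rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, size=200)   # stand-in classifier confidences

# Uncertainty = closeness to the decision boundary at 0.5.
uncertainty = -np.abs(scores - 0.5)
query_idx = np.argsort(uncertainty)[-10:]   # 10 most uncertain samples

# In an active-learning loop, these indices would be labeled by the teacher
# and added to the training set before the model is updated.
```

Semi-supervised learning replaces the teacher with the model's own most confident predictions, which is the other adaptation route the abstract describes.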

Fast Object Segmentation Learning with Kernel-based Methods for Robotics

1 code implementation • 25 Nov 2020 • Federico Ceola, Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

Our approach is validated on the YCB-Video dataset which is widely adopted in the computer vision and robotics community, demonstrating that we can achieve and even surpass performance of the state-of-the-art, with a significant reduction (${\sim}6\times$) of the training time.

Object • Semantic Segmentation

Speeding-up Object Detection Training for Robotics with FALKON

no code implementations • 23 Mar 2018 • Elisa Maiettini, Giulia Pasquale, Lorenzo Rosasco, Lorenzo Natale

We address the size and imbalance of training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast bootstrapping approach.

object-detection • Object Detection +1

Are we done with object recognition? The iCub robot's perspective

1 code implementation • 28 Sep 2017 • Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale

We report on an extensive study of the benefits and limitations of current deep learning approaches to object recognition in robot vision scenarios, introducing a novel dataset used for our investigation.

Image Retrieval • Object +4

Incremental Robot Learning of New Objects with Fixed Update Time

1 code implementation • 17 May 2016 • Raffaello Camoriano, Giulia Pasquale, Carlo Ciliberto, Lorenzo Natale, Lorenzo Rosasco, Giorgio Metta

We consider object recognition in the context of lifelong learning, where a robotic agent learns to discriminate between a growing number of object classes as it accumulates experience about the environment.

Active Learning • General Classification +2
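The abstract above concerns learning a growing number of classes with a fixed per-update cost. A minimal sketch of that idea, using a nearest-class-mean classifier (an assumption for illustration; the paper's actual method is based on recursive regularized least squares), updates one running mean per example regardless of how much data has been seen:

```python
import numpy as np

# Sketch of incremental learning with update cost independent of the number
# of examples seen so far: a nearest-class-mean classifier. Illustrative
# only; the paper's method (recursive regularized least squares) differs.

class NearestClassMean:
    def __init__(self, dim):
        self.dim = dim
        self.means = {}    # class label -> running mean feature vector
        self.counts = {}   # class label -> number of examples seen

    def partial_fit(self, x, y):
        """O(dim) update: adjust one running mean, whatever the history size."""
        if y not in self.means:
            self.means[y] = np.zeros(self.dim)   # new classes added on the fly
            self.counts[y] = 0
        self.counts[y] += 1
        self.means[y] += (x - self.means[y]) / self.counts[y]

    def predict(self, x):
        """Return the label of the closest class mean."""
        return min(self.means, key=lambda y: np.linalg.norm(x - self.means[y]))

# The robot can register new object classes as it encounters them.
rng = np.random.default_rng(0)
clf = NearestClassMean(dim=2)
for label, center in [("mug", [0.0, 0.0]), ("ball", [5.0, 5.0]), ("box", [0.0, 5.0])]:
    for _ in range(20):
        clf.partial_fit(rng.normal(center, 0.5), label)
```

Because each update touches only one class prototype, the cost per new example stays constant even as the number of classes and examples grows, which is the fixed-update-time property the title refers to.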

Real-world Object Recognition with Off-the-shelf Deep Conv Nets: How Many Objects can iCub Learn?

no code implementations • 13 Apr 2015 • Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco, Lorenzo Natale

In this paper we investigate such a possibility, while taking further steps in developing a computational vision system to be embedded on a robotic platform, the iCub humanoid robot.

Image Retrieval • Object Recognition +1
