Search Results for author: Antonino Furnari

Found 24 papers, 9 papers with code

Panoptic Segmentation using Synthetic and Real Data

no code implementations14 Apr 2022 Camillo Quattrocchi, Daniele Di Mauro, Antonino Furnari, Giovanni Maria Farinella

Motivated by this observation, we propose a pipeline which allows generating synthetic images from 3D models of real environments and real objects.

Object Detection Panoptic Segmentation

Weakly Supervised Attended Object Detection Using Gaze Data as Annotations

no code implementations14 Apr 2022 Michele Mazzamuto, Francesco Ragusa, Antonino Furnari, Giovanni Signorello, Giovanni Maria Farinella

Since labeling large amounts of data to train a standard object detector is expensive in terms of cost and time, we propose a weakly supervised version of the task which relies only on gaze data and a frame-level label indicating the class of the attended object.

Frame Object Detection

Untrimmed Action Anticipation

no code implementations8 Feb 2022 Ivan Rodin, Antonino Furnari, Dimitrios Mavroeidis, Giovanni Maria Farinella

Experiments show that the performance of current models designed for trimmed action anticipation is very limited and more research on this task is required.

Action Anticipation Action Detection

Image-based Navigation in Real-World Environments via Multiple Mid-level Representations: Fusion Models, Benchmark and Efficient Evaluation

1 code implementation2 Feb 2022 Marco Rosano, Antonino Furnari, Luigi Gulino, Corrado Santoro, Giovanni Maria Farinella

All the proposed navigation models have been trained with the Habitat simulator on a synthetic office environment and have been tested on the same real-world environment using a real robotic platform.

PointGoal Navigation Scene Understanding

Ego4D: Around the World in 3,000 Hours of Egocentric Video

no code implementations13 Oct 2021 Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.


Towards Streaming Egocentric Action Anticipation

no code implementations11 Oct 2021 Antonino Furnari, Giovanni Maria Farinella

In contrast, in this paper we propose a "streaming" egocentric action anticipation evaluation protocol which explicitly accounts for model runtime: predictions are assumed to be available only after the current video segment has been processed, a delay which depends on the processing time of each method.
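The streaming protocol can be sketched as follows. This is a minimal illustrative loop, not the paper's actual benchmark code: all names (`streaming_evaluate`, `predict`, `runtime`, `anticipation_gap`) and the toy accuracy metric are assumptions made here to show the core idea that a prediction computed from data up to time t is scored as if it were issued at t plus the model's runtime.

```python
def streaming_evaluate(segments, labels, predict, runtime, anticipation_gap=1.0):
    """Hypothetical streaming evaluation sketch.

    segments: list of (t, segment) pairs, where t is the time the segment ends.
    labels:   callable mapping a time to the action label occurring at that time
              (None if no labelled action is active).
    predict:  the anticipation model, applied to a segment.
    runtime:  the model's processing time; the prediction for the segment
              ending at t only becomes available at t + runtime.
    """
    correct = total = 0
    for t, seg in segments:
        pred = predict(seg)                  # computed from data up to time t
        available_at = t + runtime           # but only available this late
        target = labels(available_at + anticipation_gap)
        if target is None:
            continue                         # no labelled future action to score
        total += 1
        correct += (pred == target)
    return correct / total if total else 0.0
```

Under this sketch a slower model is evaluated against labels further in the future, so runtime directly degrades the score, which is the protocol's point.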

Action Anticipation Knowledge Distillation

Is First Person Vision Challenging for Object Tracking?

no code implementations31 Aug 2021 Matteo Dunnhofer, Antonino Furnari, Giovanni Maria Farinella, Christian Micheloni

Our study extensively analyses the performance of recent visual trackers and baseline FPV trackers across several aspects, also introducing a new performance measure.

Human-Object Interaction Detection Object Tracking +1

Predicting the Future from First Person (Egocentric) Vision: A Survey

no code implementations28 Jul 2021 Ivan Rodin, Antonino Furnari, Dimitrios Mavroeidis, Giovanni Maria Farinella

Egocentric videos can convey a wealth of information about how humans perceive the world and interact with the environment, which can be beneficial for the analysis of human behaviour.

Future prediction

A Survey on Human-aware Robot Navigation

no code implementations22 Jun 2021 Ronja Möller, Antonino Furnari, Sebastiano Battiato, Aki Härmä, Giovanni Maria Farinella

This paper is concerned with the navigation aspect of a socially-compliant robot and provides a survey of existing solutions for the relevant areas of research as well as an outlook on possible future directions.

Activity Recognition Robot Navigation

Is First Person Vision Challenging for Object Tracking?

no code implementations24 Nov 2020 Matteo Dunnhofer, Antonino Furnari, Giovanni Maria Farinella, Christian Micheloni

Despite a few previous attempts to exploit trackers in FPV applications, a methodical analysis of the performance of state-of-the-art visual trackers in this domain is still missing.

Human-Object Interaction Detection Visual Object Tracking +1

On Embodied Visual Navigation in Real Environments Through Habitat

1 code implementation26 Oct 2020 Marco Rosano, Antonino Furnari, Luigi Gulino, Giovanni Maria Farinella

Visual navigation models based on deep learning can learn effective policies when trained on large amounts of visual observations through reinforcement learning.

Unsupervised Domain Adaptation Visual Navigation

Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video

2 code implementations4 May 2020 Antonino Furnari, Giovanni Maria Farinella

The experiments show that the proposed architecture is state-of-the-art in the domain of egocentric videos, achieving top performance in the 2019 EPIC-Kitchens egocentric action anticipation challenge.

Action Anticipation Action Recognition +2

The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines

2 code implementations29 Apr 2020 Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray

Our dataset features 55 hours of video consisting of 11.5M frames, which we densely labelled for a total of 39.6K action segments and 454.2K object bounding boxes.

Knowledge Distillation for Action Anticipation via Label Smoothing

no code implementations16 Apr 2020 Guglielmo Camporese, Pasquale Coscia, Antonino Furnari, Giovanni Maria Farinella, Lamberto Ballan

Since multiple actions may equally occur in the future, we treat action anticipation as a multi-label problem with missing labels, extending the concept of label smoothing.
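For reference, the standard label smoothing that the paper builds on can be sketched in a few lines; this is a generic illustration of the base concept, not the paper's multi-label extension, and the epsilon value is an arbitrary example.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Classic label smoothing: redistribute eps of the probability mass
    uniformly over all classes, so the target is no longer a hard one-hot."""
    num_classes = one_hot.shape[-1]
    return (1.0 - eps) * one_hot + eps / num_classes
```

The smoothed target still sums to 1 but assigns a small probability to every class, which the paper generalizes to spread mass over multiple plausible future actions.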

Action Anticipation Autonomous Driving +1

EGO-CH: Dataset and Fundamental Tasks for Visitors Behavioral Understanding using Egocentric Vision

no code implementations3 Feb 2020 Francesco Ragusa, Antonino Furnari, Sebastiano Battiato, Giovanni Signorello, Giovanni Maria Farinella

Equipping visitors of a cultural site with a wearable device makes it easy to collect information about their preferences, which can be exploited to improve the enjoyment of cultural goods with augmented reality.

Object Recognition

Next-Active-Object prediction from Egocentric Videos

no code implementations10 Apr 2019 Antonino Furnari, Sebastiano Battiato, Kristen Grauman, Giovanni Maria Farinella

Although First Person Vision systems can sense the environment from the user's perspective, they are generally unable to predict the user's intentions and goals.

Egocentric Visitors Localization in Cultural Sites

no code implementations10 Apr 2019 Francesco Ragusa, Antonino Furnari, Sebastiano Battiato, Giovanni Signorello, Giovanni Maria Farinella

We consider the problem of localizing visitors in a cultural site from egocentric (first person) images.

