no code implementations • 12 Feb 2024 • Laura Santos, Bernardo Carvalho, Catarina Barata, José Santos-Victor
In real-world settings, the proposed model performs comparably to a Kinect depth camera and successfully estimates 3D body poses in substantially more frames.
no code implementations • 8 Feb 2024 • Pedro Osório, Alexandre Bernardino, Ruben Martinez-Cantin, José Santos-Victor
Affordances are fundamental descriptors of relationships between actions, objects and effects.
1 code implementation • 4 Oct 2022 • André Santos, Nuno Ferreira Duarte, Atabak Dehban, José Santos-Victor
The human demonstrations were collected using our proposed VR platform, BoxED, a box-packaging environment that simulates real-world objects and scenarios, designed for fast and streamlined data collection for teaching robots.
1 code implementation • 7 Oct 2019 • Manuel Serra Nunes, Atabak Dehban, Plinio Moreno, José Santos-Victor
In contrast, we argue that if these systems are to be used to guide action, necessarily, the actions the robot performs should be encoded in the predicted frames.
no code implementations • 1 Oct 2019 • Clebeson Canuto, Plinio Moreno, Jorge Samatelo, Raquel Vassallo, José Santos-Victor
We propose to use the uncertainty about each prediction as an online decision-making criterion for action anticipation.
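The idea of using predictive uncertainty as an online stopping criterion can be sketched as follows; this is a minimal illustration under assumed details (entropy as the uncertainty measure, a hypothetical `anticipate` helper and threshold value), not the paper's actual implementation:

```python
import numpy as np

def entropy(probs):
    """Shannon entropy (in nats) of a discrete action distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def anticipate(prob_stream, threshold=0.5):
    """Scan per-frame action probabilities and commit to a prediction
    as soon as uncertainty (entropy) drops below `threshold`.
    Returns (predicted_action, frame_index), or (None, None) if the
    model never becomes confident enough."""
    for t, probs in enumerate(prob_stream):
        if entropy(probs) < threshold:
            return int(np.argmax(probs)), t
    return None, None

# Toy stream: uncertainty shrinks as more of the action is observed.
stream = [
    np.array([0.40, 0.35, 0.25]),  # early frames: high entropy, wait
    np.array([0.60, 0.30, 0.10]),  # still too uncertain
    np.array([0.90, 0.08, 0.02]),  # confident: anticipate now
]
action, frame = anticipate(stream, threshold=0.5)  # → (0, 2)
```

Deciding per frame rather than at a fixed horizon is what makes the criterion "online": the system trades off anticipation earliness against confidence via the single threshold.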
no code implementations • 13 Mar 2019 • Jaeseok Kim, Nino Cauli, Pedro Vicente, Bruno Damas, Alexandre Bernardino, José Santos-Victor, Filippo Cavallo
In this paper, a robot is taught to perform two different cleaning tasks over a table, using a learning from demonstration paradigm.
1 code implementation • 16 Jul 2018 • João Borrego, Atabak Dehban, Rui Figueiredo, Plinio Moreno, Alexandre Bernardino, José Santos-Victor
Recent advances in deep learning-based object detection techniques have revolutionized their applicability in several fields.
no code implementations • 30 May 2018 • Mihai Andries, Atabak Dehban, José Santos-Victor
3D objects (artefacts) are made to fulfill functions.
no code implementations • 9 Apr 2018 • Giovanni Saponaro, Pedro Vicente, Atabak Dehban, Lorenzo Jamone, Alexandre Bernardino, José Santos-Victor
One of the open challenges in designing robots that operate successfully in unpredictable human environments is enabling them to predict what actions they can perform on objects and what the effects will be, i.e., the ability to perceive object affordances.
1 code implementation • 28 Feb 2018 • Paul Schydlo, Mirko Rakovic, Lorenzo Jamone, José Santos-Victor
Recent approaches based on neural networks have led to encouraging results in the human action prediction problem both in continuous and discrete spaces.
1 code implementation • 27 Nov 2017 • Giampiero Salvi, Luis Montesano, Alexandre Bernardino, José Santos-Victor
The model is based on an affordance network, i.e., a mapping between robot actions, robot perceptions, and the perceived effects of these actions upon objects.
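Such an action–perception–effect mapping can be illustrated with a minimal frequency-count sketch; the class name, feature labels, and toy interactions below are assumptions for illustration, not the paper's model (which is a learned network):

```python
from collections import defaultdict

class AffordanceModel:
    """Minimal sketch of an affordance mapping: estimate
    P(effect | action, object_feature) from observed interactions."""

    def __init__(self):
        # (action, object_feature) -> {effect: count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, action, obj_feature, effect):
        """Record one interaction outcome."""
        self.counts[(action, obj_feature)][effect] += 1

    def predict_effect(self, action, obj_feature):
        """Return the empirical effect distribution for this
        action/object pair, or {} if never observed."""
        effects = self.counts[(action, obj_feature)]
        total = sum(effects.values())
        return {e: n / total for e, n in effects.items()} if total else {}

# Toy interactions: tapping a sphere makes it roll; tapping a box slides it.
m = AffordanceModel()
m.observe("tap", "sphere", "rolls")
m.observe("tap", "sphere", "rolls")
m.observe("tap", "box", "slides")
probs = m.predict_effect("tap", "sphere")  # → {"rolls": 1.0}
```

The same table can be queried in any direction (e.g., which action on this object yields a desired effect), which is the property that makes affordance models useful for both prediction and action selection.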