Fire localization in images and videos is an important step for an autonomous system to combat fire incidents.
The network is jointly trained for both segmentation and classification, improving on both single-task image segmentation methods and previous methods proposed for fire segmentation.
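Joint training of this kind typically optimizes a weighted sum of a per-pixel segmentation loss and an image-level classification loss. The sketch below illustrates the general idea with binary cross-entropy terms and a hypothetical weight `lam`; it is not the paper's exact objective.

```python
import numpy as np

def joint_loss(seg_pred, seg_true, cls_pred, cls_true, lam=0.5):
    """Illustrative joint objective: per-pixel binary cross-entropy for
    segmentation plus image-level binary cross-entropy for classification,
    combined with a hypothetical weight lam (a sketch, not the paper's loss)."""
    eps = 1e-7
    seg_pred = np.clip(seg_pred, eps, 1 - eps)
    cls_pred = np.clip(cls_pred, eps, 1 - eps)
    # per-pixel binary cross-entropy, averaged over the mask
    l_seg = -np.mean(seg_true * np.log(seg_pred)
                     + (1 - seg_true) * np.log(1 - seg_pred))
    # binary cross-entropy on the image-level fire / no-fire label
    l_cls = -np.mean(cls_true * np.log(cls_pred)
                     + (1 - cls_true) * np.log(1 - cls_pred))
    return l_seg + lam * l_cls
```

Because both tasks share the network's features, gradients from the classification term can regularize the segmentation branch and vice versa.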
Artificial intelligence (AI) and robotic coaches promise improved patient engagement in rehabilitation exercises through social interaction.
We also develop a set of complementary steps that boost the action recognition performance in the most challenging scenarios.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
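A common way to make active learning cost-sensitive is to trade off each candidate's informativeness against the cost of acquiring it, so that cheap, informative samples are queried first. The function below is a minimal sketch of that idea (uncertainty divided by movement cost); the exact criterion used in the work may differ.

```python
import numpy as np

def cost_sensitive_acquisition(uncertainty, movement_cost, eps=1e-8):
    """Illustrative cost-sensitive acquisition rule: score each candidate
    by uncertainty per unit of movement cost and query the best one.
    (A sketch of the general idea, not the paper's exact criterion.)"""
    uncertainty = np.asarray(uncertainty, dtype=float)
    movement_cost = np.asarray(movement_cost, dtype=float)
    scores = uncertainty / (movement_cost + eps)  # eps avoids division by zero
    return int(np.argmax(scores))
```

Under this rule a slightly less uncertain but much cheaper sample can win over a maximally uncertain but expensive one, which is what reduces the total executed movement.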
Research on socially assistive robots has the potential to augment and assist physical therapy sessions for patients with neurological and musculoskeletal problems (e.g., stroke).
Rehabilitation assessment is critical to determine an adequate intervention for a patient.
Our method, using fovea attention filtering and our generalized binary loss, achieves a relative video mAP improvement of 20% over the two-stream baseline on AVA, and is competitive with the state of the art on UCF101-24.
In this paper we propose an improved method for transfer learning that takes into account the balance between target and source data.
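One simple way to express such a balance is to interpolate between the source-domain and target-domain losses with a weight that grows as more target data becomes available. The snippet below sketches this idea with a hypothetical mixing rule and temperature `beta`; the paper's actual balancing scheme is not specified here.

```python
import numpy as np

def balanced_transfer_loss(loss_target, loss_source,
                           n_target, n_source, beta=1.0):
    """Illustrative transfer-learning objective: a convex combination of
    target and source losses. alpha approaches 1 as target data becomes
    plentiful; beta is a hypothetical knob controlling how quickly.
    (A sketch of the general idea, not the paper's method.)"""
    alpha = n_target / (n_target + beta * n_source)
    return alpha * loss_target + (1.0 - alpha) * loss_source
```

With few target samples the objective leans on the source domain; as the target set grows, its loss dominates and the source acts only as a prior.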
In this paper, a robot is taught to perform two different cleaning tasks over a table, using a learning from demonstration paradigm.
It then uses this information to learn a mapping between its own actions and those performed by a human in a shared environment.
Recent advances in deep learning-based object detection techniques have revolutionized their applicability in several fields.
One of the open challenges in designing robots that operate successfully in unpredictable human environments is enabling them to predict which actions they can perform on objects, and what the effects will be, i.e., the ability to perceive object affordances.
The model is based on an affordance network, i.e., a mapping between robot actions, robot perceptions, and the perceived effects of these actions upon objects.
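At its core, such a mapping takes an (action, perceived object) pair and predicts an effect. The toy class below makes that structure concrete as a frequency table over observed triples; it is a deliberately simplified stand-in for the learned network, and all names in it are illustrative.

```python
from collections import defaultdict

class TabularAffordanceModel:
    """Toy stand-in for an affordance network: it stores counts of
    observed (action, object) -> effect triples and predicts the most
    frequently seen effect. (Illustrative only; the actual model is a
    learned network, not a table.)"""

    def __init__(self):
        # (action, object) -> {effect: count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, action, obj, effect):
        """Record one interaction outcome."""
        self.counts[(action, obj)][effect] += 1

    def predict_effect(self, action, obj):
        """Return the most common observed effect, or None if unseen."""
        effects = self.counts.get((action, obj))
        if not effects:
            return None
        return max(effects, key=effects.get)
```

The same interface generalizes to the learned setting: replace the count table with a network that maps action and perception features to a distribution over effects.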
A growing field in robotics and Artificial Intelligence (AI) research is human-robot collaboration, whose goal is to enable effective teamwork between humans and robots.