The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.
We present a method that infers contact pressure between a human body and a mattress from a depth image.
We produce a final animation by using inverse kinematics to guide a character's arm and hand to match the motion of the manipulation tool, such as a knife or a frying pan.
We describe a physics-based method that simulates human bodies at rest in a bed with a pressure sensing mat, and present PressurePose, a synthetic dataset of 206K pressure images paired with 3D human poses and shapes.
We investigated the application of haptic feedback control and deep reinforcement learning (DRL) to robot-assisted dressing.
Our method does this by creating a second reward function that recognizes previously seen state sequences and rewards them according to their novelty, measured using autoencoders trained on state sequences from previously discovered policies.
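The novelty signal above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy "autoencoders", the `novelty_reward` function, and the scoring rule (mean squared reconstruction error, taking the minimum across autoencoders so that a sequence well reconstructed by any past policy's autoencoder counts as familiar) are all assumptions for exposition.

```python
import numpy as np

def novelty_reward(state_seq, autoencoders, scale=1.0):
    """Novelty of a state sequence = minimum reconstruction error across
    autoencoders trained on previously discovered policies' sequences.
    A low minimum means some past policy already produced similar states."""
    errors = []
    for encode, decode in autoencoders:
        recon = decode(encode(state_seq))
        errors.append(np.mean((state_seq - recon) ** 2))
    return scale * min(errors)

# Toy stand-ins for trained autoencoders (illustrative only):
ae_seen = (lambda s: s @ np.eye(4), lambda z: z)   # reconstructs seen states perfectly
ae_other = (lambda s: s * 0.0, lambda z: z)        # reconstructs nothing

seq = np.ones((5, 4))
r = novelty_reward(seq, [ae_seen, ae_other])       # near zero: sequence is familiar
```

Because the reward takes the minimum error, a sequence only scores as novel if *every* previously trained autoencoder fails to reconstruct it.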
Transfer learning using domain randomization is a promising approach, but it usually assumes that the target environment is close to the distribution of the training environments, thus relying heavily on accurate system identification.
The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body.
Then, during the specialization training stage, we selectively split the weights of the policy based on a per-weight metric that measures disagreement among the multiple tasks.
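One way such a per-weight disagreement metric could work is sketched below. The specific metric (normalized variance of per-task gradients), the function names, and the threshold are illustrative assumptions, not the method's actual formulation: a weight whose per-task gradients point in conflicting directions gets a high score and is split into task-specific copies.

```python
import numpy as np

def disagreement(task_grads):
    """task_grads: shape (num_tasks, num_weights). Returns a per-weight
    score in [0, 1): near 0 when per-task gradients agree, near 1 when
    they conflict (variance dominates the squared mean)."""
    mean = task_grads.mean(axis=0)
    var = task_grads.var(axis=0)
    return var / (mean ** 2 + var + 1e-8)

def split_mask(task_grads, threshold=0.5):
    """Weights whose disagreement exceeds the threshold are split."""
    return disagreement(task_grads) > threshold

# Two tasks, two weights: tasks agree on weight 0, conflict on weight 1.
grads = np.array([[1.0,  1.0],
                  [1.0, -1.0]])
mask = split_mask(grads)   # weight 1 is selected for splitting
```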
Together, UP-OSI forms a robust control policy that can be used across a wide range of dynamic models and is also responsive to sudden changes in the environment.
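The UP-OSI control loop implied here can be sketched as below. Both components are placeholders (in practice each would be a learned network), and the function names `osi` and `universal_policy` are hypothetical: an online system identification (OSI) module estimates dynamics parameters from recent history, and a universal policy (UP) conditions its action on the state together with those estimates.

```python
import numpy as np

def osi(history):
    """Placeholder OSI: estimate dynamics parameters mu from recent
    state-action history (a learned regressor in practice)."""
    return np.mean(history, axis=0)

def universal_policy(state, mu):
    """Placeholder UP: action depends on both the state and the
    estimated dynamics parameters mu."""
    return -state * (1.0 + mu)

# Re-estimating mu at every step is what makes the combined controller
# responsive to sudden changes in the environment's dynamics.
history = np.array([[0.2], [0.4]])
mu = osi(history)
action = universal_policy(np.array([1.0]), mu)
```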