Our localization method combines neural-network segmentation with classical techniques, consistently locating the needle with 0.73 mm RMS error in clean environments and 2.72 mm RMS error in challenging environments with blood and occlusion.
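A minimal sketch of this hybrid pipeline, assuming the network outputs a per-pixel segmentation probability map (the thresholding, PCA line fit, and tip-extraction steps below are illustrative classical post-processing, not the paper's exact method):

```python
import numpy as np

def localize_needle(prob_map, threshold=0.5):
    """Classical post-processing of a (hypothetical) segmentation output:
    threshold the probabilities, fit the principal axis of the segmented
    pixels, and return the extreme point along that axis as the tip."""
    ys, xs = np.nonzero(prob_map > threshold)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    # Principal axis via SVD of the centered points (classical PCA line fit).
    _, _, vt = np.linalg.svd(pts - centroid)
    axis = vt[0]
    # Tip = segmented pixel with the largest projection along the axis.
    proj = (pts - centroid) @ axis
    return pts[np.argmax(proj)]

# Usage on a synthetic diagonal "needle" mask.
mask = np.zeros((32, 32))
for i in range(10, 20):
    mask[i, i] = 1.0
tip = localize_needle(mask)
```

In practice the classical stage is what turns a noisy pixel mask into a metric estimate; the RMS errors above are measured against ground-truth needle poses.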
To the best of our knowledge, this is the first RL-based agent trained from visual data in a surgical robotics environment.
In this work, we propose a neural network architecture and associated planning algorithm that (1) learns a representation of the world useful for generating prospective futures after the application of high-level actions, (2) uses this generative model to simulate the result of sequences of high-level actions in a variety of environments, and (3) uses this same representation to evaluate these actions and perform tree search to find a sequence of high-level actions in a new environment.
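The planning loop described in (2) and (3) can be sketched as an exhaustive tree search through a learned generative model. The `dynamics` and `value` functions below are toy hand-written stand-ins for the neural networks the paper would learn; the search structure is the point:

```python
from itertools import product

def dynamics(state, action):
    """Generative model: predict the next latent state after a high-level
    action (toy stand-in for a learned transition network)."""
    return state + action

def value(state, goal):
    """Evaluation head: score a latent state, higher is better
    (toy stand-in for a learned value network)."""
    return -abs(goal - state)

def tree_search(state, goal, actions, depth):
    """Roll out every action sequence of length `depth` through the
    generative model and return the best-scoring sequence."""
    best_seq, best_score = None, float("-inf")
    for seq in product(actions, repeat=depth):
        s = state
        for a in seq:
            s = dynamics(s, a)  # simulate the action's effect
        score = value(s, goal)  # evaluate the imagined future
        if score > best_score:
            best_seq, best_score = list(seq), score
    return best_seq, best_score

# Usage: find a 3-step plan from state 0 toward goal 5.
plan, score = tree_search(0, 5, actions=[-2, 1, 3], depth=3)
```

Exhaustive enumeration is only feasible because the actions are high-level and few; with larger action sets the same model-plus-value structure supports pruned or sampled tree search instead.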