Robot Learning from Demonstration of Expert Surgeons

Robot arm trying to pick up a water bottle

Research on robot learning from expert demonstration focuses on programming a self-learning robot to perform a variety of tasks, including surgical procedures. Using deep learning, we have successfully trained the robot to identify the relevant objects and the grasps used by a demonstrator. Object poses and hand grasps are identified from the spatial distribution of 3D data captured by depth sensors. This allows the objects and hands to be tracked continuously, yielding a set of independent motions that the robot meaningfully recombines to reconstruct the demonstrated task. Current research focuses on “persistent vision”: probabilistic estimation techniques that determine whether an identified object is a real object or a ghost object. The methods are implemented using an array of RGB and depth cameras, tactile sensors, and proprioceptive sensors on a 5-DOF Kuka robot fitted with an anthropomorphic robotic hand.
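To illustrate what "identifying object poses from the spatial distribution of 3D data" can mean in its simplest form, the sketch below estimates a rough 6-DoF pose for a segmented object point cloud: the translation is the cloud's centroid and the orientation axes come from PCA of the cloud's covariance. This is only an illustrative stand-in, not the group's deep-learning pipeline; the function name and the demo cloud are invented for this example.

```python
import numpy as np

def estimate_pose(points):
    """Rough 6-DoF pose from an (N, 3) object point cloud.

    Translation = centroid; rotation axes = principal components of the
    cloud's spatial distribution. Illustrative only: a real system would
    refine this with a learned model or model-based registration.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Covariance of the 3D spatial distribution of the cloud
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    axes = eigvecs[:, ::-1]                 # columns: major -> minor axis
    if np.linalg.det(axes) < 0:             # enforce a right-handed frame
        axes[:, -1] *= -1.0
    return centroid, axes

# Demo: a synthetic cloud elongated along x; the major axis should align with x.
cloud = np.column_stack([
    np.linspace(-1.0, 1.0, 200),
    0.05 * np.sin(np.linspace(0.0, 4.0, 200)),
    np.zeros(200),
])
translation, rotation = estimate_pose(cloud)
```

PCA-based poses are ambiguous up to axis flips and degrade for symmetric objects, which is one reason learned detectors are used for the actual grasp and pose identification described above.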
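One simple probabilistic scheme for the real-versus-ghost decision in persistent vision is a binary Bayes filter: each tracked object carries a probability of being real, updated every frame by whether it was re-detected. The sketch below assumes illustrative detection likelihoods (a real object is re-detected 90% of the time, a ghost 20%); the function name and those numbers are assumptions for this example, not the group's actual method.

```python
def update_existence(belief, detected, p_det_real=0.9, p_det_ghost=0.2):
    """One Bayesian update of the probability that a tracked object is real.

    `detected` indicates whether the object was re-observed this frame.
    The likelihoods are assumed values for illustration, not measured rates.
    """
    if detected:
        num = p_det_real * belief
        den = num + p_det_ghost * (1.0 - belief)
    else:
        num = (1.0 - p_det_real) * belief
        den = num + (1.0 - p_det_ghost) * (1.0 - belief)
    return num / den

# A consistently re-detected object: belief rises toward 1.
belief = 0.5
for z in [True, True, False, True, True]:
    belief = update_existence(belief, z)

# A ghost detection that never reappears: belief decays toward 0.
ghost = 0.5
for z in [True, False, False, False, False]:
    ghost = update_existence(ghost, z)
```

Thresholding the belief (e.g. confirm above 0.9, delete below 0.1) turns this per-object filter into a track-management rule; an occasional missed detection lowers the belief only briefly, which is what makes the tracking "persistent."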