Towards A Human Robot Interaction Framework with Marker-less Augmented Reality and Visual SLAM
Eranda Lakshantha and Simon Egerton
Faculty of Information Technology, Monash University
Abstract—This paper presents a novel framework for Human Robot Interaction (HRI) using marker-less Augmented Reality (AR). Unlike marker-based AR, marker-less AR does not require the environment to be instrumented with special markers, so it is well suited to unknown or unprepared environments. Current state-of-the-art visual SLAM approaches such as PTAMM (Parallel Tracking and Multiple Mapping) achieve this with constrained motion models within local co-ordinate systems. Our framework relaxes these motion model constraints, enabling a wider range of camera movements to be robustly tracked, and extends PTAMM with a series of linear transformations. These linear transformations allow AR markers to be seamlessly placed and tracked within a global co-ordinate system of any size, so markers can be placed globally and viewed from any direction and perspective, even when the camera returns to them from a different direction or perspective. We report on the framework's performance and show how it can be applied to help humans interact with robots; in this paper we look at how AR markers can assist robot navigation tasks.
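As a minimal illustration of the kind of transformation chain the abstract describes, assume each local PTAMM map m carries a rigid-body transform into a common world frame w (the symbols T_{wm}, R_{wm} and t_{wm} below are our illustrative notation, not drawn from the paper itself):

\[
T_{wm} =
\begin{bmatrix}
R_{wm} & t_{wm} \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
\qquad
\tilde{p}_{w} = T_{wm}\,\tilde{p}_{m},
\]

where \(\tilde{p}_{m}\) is a marker position in map-m co-ordinates (in homogeneous form) and \(\tilde{p}_{w}\) is its position in the global frame. Composing such transforms across local maps is what would let a marker placed in one map be re-rendered consistently when the camera revisits it from a different direction or perspective.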
Index Terms—augmented reality, human-robot interaction, robotics, SLAM
Cite: Eranda Lakshantha and Simon Egerton, "Towards A Human Robot Interaction Framework with Marker-less Augmented Reality and Visual SLAM," Journal of Automation and Control Engineering, Vol. 2, No. 3, pp. 250-255, September 2014. doi: 10.12720/joace.2.3.250-255