In the design of autonomous mobile systems, a fundamental competence is the ability to navigate from one position to another. To enable such systems to operate in novel environments, we must endow them with methods for concurrently estimating the structure of the environment (mapping) and using the acquired information to self-localize. Mapping from a geometric perspective is a well-known and widely studied problem: in robotics it is referred to as Simultaneous Localization and Mapping (SLAM), while in computer vision the same problem is known as Structure from Motion (SfM). To make SLAM tractable, one can adopt graphical estimation methods. Today it is possible to generate dense geometric models for describing and navigating rich environments. However, purely geometric models pose a number of challenges. Everyday environments have a great deal of structure that can be leveraged for efficient interpretation, and interaction with humans typically relies on semantic rather than metric information. Going to a nearby coffee shop can be described in GPS coordinates or as the third house on campus row; the second version makes far more sense to humans. The question is: how can we endow robots with a similar sense of structure? In this presentation we will present a general graphical framework for localization and mapping. We will discuss the design of a framework for semantic mapping and give examples of how such systems can be designed, from Wizard-of-Oz studies to fully operational systems.
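To give a flavor of what "graphical estimation" means here, the following is a minimal, self-contained sketch (a hypothetical toy example, not the speaker's actual framework): robot poses become nodes in a graph, relative-motion measurements (odometry and loop closures) become edges, and SLAM reduces to a least-squares problem over that graph. The 1D corridor setting and all function names below are illustrative assumptions.

```python
def solve_pose_graph(edges, n_poses):
    """Solve a toy 1D pose graph by normal equations, with pose 0 fixed at 0.

    edges: list of (i, j, z) tuples, where z is the measured x_j - x_i.
    Returns the optimized poses [x_0, ..., x_{n-1}].
    """
    m = n_poses - 1  # free variables x_1 .. x_{n-1}
    # Accumulate H = J^T J and g = J^T z for the linear least-squares problem
    # min sum_edges ((x_j - x_i) - z)^2.
    H = [[0.0] * m for _ in range(m)]
    g = [0.0] * m
    for i, j, z in edges:
        # The Jacobian of each residual is +1 at pose j and -1 at pose i.
        for a, sa in ((i, -1.0), (j, 1.0)):
            if a == 0:           # pose 0 is anchored, not a free variable
                continue
            g[a - 1] += sa * z
            for b, sb in ((i, -1.0), (j, 1.0)):
                if b == 0:
                    continue
                H[a - 1][b - 1] += sa * sb
    # Tiny Gaussian elimination; fine for a toy problem of this size.
    for col in range(m):
        for r in range(col + 1, m):
            f = H[r][col] / H[col][col]
            for c in range(col, m):
                H[r][c] -= f * H[col][c]
            g[r] -= f * g[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = g[r] - sum(H[r][c] * x[c] for c in range(r + 1, m))
        x[r] = s / H[r][r]
    return [0.0] + x

# Two odometry edges plus one loop closure that disagrees slightly;
# the optimizer distributes the accumulated error across the graph.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.2)]
poses = solve_pose_graph(edges, 3)
```

Real systems solve the same kind of problem in 2D or 3D with sparse solvers, but the structure — nodes for poses and landmarks, edges for measurements — is what makes the graphical formulation tractable.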
Henrik I. Christensen is the KUKA Chair of Robotics and a Distinguished Professor of Computing at the Georgia Institute of Technology, where he also directs the Center for Robotics and Intelligent Machines. He earned M.Sc. and Ph.D. degrees in Electrical Engineering from Aalborg University, Denmark, and has since held positions at Aalborg University; the Royal Institute of Technology, Sweden; and the University of Pennsylvania, prior to joining Georgia Tech. He has published more than 250 contributions across computer vision, artificial intelligence, and robotics. He serves or has served on a number of editorial boards, including IJRR, PAMI, AutRob, and IVC, and is co-editor-in-chief of Foundations and Trends in Robotics. He received the Engelberger Award in 2011, generally considered the highest recognition in the industrial robotics community.