Figure: RGB-D SLAM. Left: without IMU; right: with IMU.


Prof. Henrik I. Christensen, Fall 2013

Simultaneous localization and mapping (SLAM), especially without a priori knowledge of the environment, addresses the question of whether a mobile robot can determine and track its location in an unknown environment while simultaneously building and updating a consistent map of that environment.

Today's state-of-the-art SLAM techniques often use RGB-D cameras such as the Microsoft Kinect or ASUS Xtion. RGB-D cameras are novel sensors that capture RGB images along with per-pixel depth information, producing depth estimates at a large number of pixels via active stereo or time-of-flight sensing. The captured images arrive at high data rates with accurate mid-resolution depth and appearance information. Such cameras provide extremely detailed information even though they have a shorter maximum range and a limited field of view, limitations that can often be mitigated by combining them with 2D laser scans.

However, performing SLAM with an RGB-D camera on a mobile robot equipped with only a simple odometer can yield a low-quality map because of the uncertainty in the robot's pose. Accurate pose estimation is especially important when building a three-dimensional map: we must estimate not only the robot's location and heading but also the roll, pitch, and yaw of the robot, or, to be precise, of the RGB-D camera mounted on it. The problem becomes harder still when SLAM is performed on a planar surface of uniform color, because there are then no depth or color differences to serve as clues or landmarks for estimating the robot's pose, so the robot can neither localize itself nor improve the current map.
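To make the pose representation above concrete, the roll, pitch, and yaw angles reported by an IMU can be composed into a full 3-D rotation matrix. The sketch below is illustrative only (the function name and the Z-Y-X Euler convention are assumptions, not taken from this report):

```python
import numpy as np

def rpy_to_rotation(roll, pitch, yaw):
    """Compose a 3x3 rotation matrix from roll, pitch, yaw (radians).

    Uses the common Z-Y-X (yaw-pitch-roll) convention: R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    Note: an actual IMU driver may report angles in a different convention.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    return Rz @ Ry @ Rx
```

Together with a 3-D translation, this rotation gives the full six-degree-of-freedom camera pose that the map-building step requires.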

In this project, we developed a feature-point registration method with higher accuracy for RGB-D SLAM, using prior information estimated by an Inertial Measurement Unit (IMU).
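The report does not spell out the registration step here, but one common way to use an IMU prior is to pre-rotate the source points by the IMU-estimated orientation before solving a least-squares (Kabsch/SVD) alignment of matched feature points. This is a sketch under that assumption; the function and variable names are hypothetical:

```python
import numpy as np

def register_with_prior(src, dst, R_prior):
    """Align matched 3-D feature points src -> dst, seeded by an IMU rotation prior.

    src, dst: (N, 3) arrays of corresponding points.
    R_prior:  3x3 rotation from the IMU; pre-rotating src by it shrinks the
              residual rotation the least-squares step must recover.
    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    src_r = src @ R_prior.T                       # apply IMU orientation prior
    mu_s, mu_d = src_r.mean(axis=0), dst.mean(axis=0)
    H = (src_r - mu_s).T @ (dst - mu_d)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R_res = Vt.T @ D @ U.T                        # residual rotation after the prior
    R = R_res @ R_prior                           # total rotation
    t = mu_d - R @ src.mean(axis=0)               # translation
    return R, t
```

With a good IMU prior, the residual rotation stays small, which helps when the scene (e.g., a uniformly colored planar surface) offers few visual landmarks to constrain orientation.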