SAIC · Georgia Tech College of Computing

Robotics

Robots, like humans, generally have many different types of sensing and motor capabilities. In humans, several "maps" in the brain represent the body's sensing capabilities: for example, one area of the brain contains a layout of the entire body where we feel touch, and body parts that provide more data (such as the hands) occupy larger regions of the map. When something goes wrong (for example, an arm is amputated), these maps update automatically, allowing us to continue functioning as well as we can after the change.

Although robots sense the world differently than humans do (using lasers, for instance, or less flexible vision), this project seeks to use insight from how the brain represents our sensing capabilities to create mappings of a robot's sensors, and to automatically adapt or update those mappings when something in the robot goes wrong. This will make robots more robust and adaptive. For example, we are currently using machine learning to model how different sensors that provide similar information correlate with one another. When a robot part breaks, some of these correlations break as well, and the robot should be able to detect this automatically and adapt. Failures we are trying to handle include a loose wheel and a camera that has been knocked and rotated.

There are opportunities for students to work on a wide variety of aspects of the project, including visualization of these maps and sensor data, vision and other perceptual processing, and designing and implementing learning algorithms.
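The correlation-breaking idea above can be sketched in a few lines. The snippet below is an illustrative toy, not the project's actual method: it assumes two redundant sensor streams with a roughly linear relationship (say, wheel odometry speed versus laser-derived speed), fits that relationship during normal operation, and flags a fault when a new reading deviates from the prediction by more than a few standard deviations of the learned residual. The function names and the threshold are hypothetical choices for this sketch.

```python
import numpy as np

def fit_correlation_model(x, y):
    """Learn a linear relationship y ~ a*x + b between two redundant
    sensor streams, plus the spread of the residuals under normal operation."""
    a, b = np.polyfit(x, y, 1)
    sigma = np.std(y - (a * x + b))
    return a, b, sigma

def detect_fault(x, y, a, b, sigma, threshold=4.0):
    """Flag a fault when the observed reading y deviates from the
    prediction a*x + b by more than `threshold` standard deviations."""
    residual = np.abs(y - (a * x + b))
    return residual > threshold * sigma

# Simulated normal operation: two sensors measuring the same quantity.
rng = np.random.default_rng(0)
wheel_speed = rng.normal(size=500)
laser_speed = 2.0 * wheel_speed + 0.5 + rng.normal(scale=0.05, size=500)

a, b, sigma = fit_correlation_model(wheel_speed, laser_speed)

# A consistent reading passes; a reading from a broken part (e.g. a loose
# wheel reporting the wrong speed) violates the learned correlation.
print(detect_fault(1.0, 2.5, a, b, sigma))  # consistent -> no fault
print(detect_fault(1.0, 5.0, a, b, sigma))  # inconsistent -> fault
```

A real robot would monitor many such pairwise (or multivariate) correlations at once; the pattern of which correlations break can then point to which part failed, such as a rotated camera versus a loose wheel.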