Robotics
Faculty Advisor, Ron Arkin, PhD

Learning is a complex and time-consuming process, especially when an agent relies solely on its own experiences. Humans, like many other species, use communication as a powerful way to share knowledge, and the same can hold for robots. Robots today, however, vary widely in perceptual and motor apparatus, from a simple LEGO Mindstorms robot with small wheels and primitive touch and light sensors to fully capable platforms that navigate and manipulate objects using sophisticated laser and vision sensing. Just as communication between a human and a robot can be difficult, two robots this different must first bridge their differences before they can communicate and share experiences. This project seeks to define the problem of how heterogeneous robots with widely different capabilities can share experiences gained in the world in order to speed up learning. This includes building representations of objects and object categories from raw sensor data, and identifying which features (e.g., color) the robots have in common so that those features can serve as a basis for communication. As an example application, consider a Roomba-like robot that a human has trained to recognize object labels (such as "couch") and that is later replaced by an upgraded model. How can the two robots identify their differences so that the teacher can share its knowledge and the learner need not acquire all of that experience on its own?
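The core idea above can be sketched in a few lines. The following is a minimal, hypothetical illustration (all names and feature sets are invented for this sketch, not part of the project): a "teacher" robot re-describes the object labels it learned from a human using only the features that both robots can perceive, so the "learner" can reuse that knowledge.

```python
# Toy sketch: transferring learned object labels between two robots
# with different sensor suites, using only their shared features.
# All feature names and objects here are hypothetical examples.

TEACHER_FEATURES = {"color", "size", "laser_shape"}  # rich sensing
LEARNER_FEATURES = {"color", "size", "touch"}        # simple sensing

def shared_features(a, b):
    """Features both robots can ground in their own sensors."""
    return a & b

def describe(percept, features):
    """Project a full percept onto a feature subset."""
    return {f: percept[f] for f in features if f in percept}

def transfer_labels(teacher_memory, shared):
    """Teacher re-describes each learned object using only shared features."""
    return [(label, describe(percept, shared)) for label, percept in teacher_memory]

# Labels the teacher learned from a human, grounded in its own rich percepts.
teacher_memory = [
    ("couch", {"color": "red", "size": "large", "laser_shape": "box"}),
    ("ball",  {"color": "blue", "size": "small", "laser_shape": "sphere"}),
]

shared = shared_features(TEACHER_FEATURES, LEARNER_FEATURES)
lessons = transfer_labels(teacher_memory, shared)
# The learner can now associate "couch" with {color, size} percepts alone,
# without repeating the human-training phase.
```

In this toy version the feature spaces align by name; the hard part of the actual research problem is discovering such correspondences when neither robot knows in advance how its percepts relate to the other's.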
References:
Jung, D. & Zelinsky, A. (2000), 'Grounded Symbolic Communication between Heterogeneous Cooperating Robots', Autonomous Robots 8(3), 269-292. Link: http://citeseer.ist.psu.edu/367408.html
Steels, L. & Kaplan, F. (1999), 'Bootstrapping Grounded Word Semantics', Cambridge University Press. Link: http://citeseer.ist.psu.edu/506532.html

