About Me
I am currently a senior research scientist at Georgia Tech Research Institute working in the Aerospace, Transportation and Advanced Systems Laboratory. Previously I was a graduate student studying under Dr. Ron Arkin as part of the Mobile Robot Lab within the College of Computing.
I grew up in North Olmsted, Ohio, a suburb of Cleveland. I began my academic career at Northwestern University, graduating with a Bachelor's degree in Psychology. After that, I worked briefly as an editorial assistant for a financial magazine. I returned to academia as a member of the research and development team at MIT's genome center, helping to sequence the human genome. It was there, building robotic liquid-handling systems, that I became interested in AI and robotics. I later worked with production robots at Speedline and commercial software at Symantec while concurrently completing a Master's degree in Computer Science at Boston University. After completing the degree I taught at BU for a year before matriculating at Georgia Tech. I have a wonderful family.
My research focuses on developing the theoretical underpinnings necessary for human-robot social relations. My goal is to build robots that can not only interact with humans, but are also capable of representing, reasoning about, and developing relationships with others. Toward this goal, my research has developed innovative methods that allow a robot to deceive and has examined how and when people trust robots.
My hobbies are skiing (my favorite), scuba diving, and hiking. I've scuba dived on the shipwrecks at Palawan in the Philippines. I am also part of a club of people who attempt to hike to the highest point of every state. I've climbed 37 so far.
Research Interests
My research borrows heavily from social psychology, behavioral economics, and artificial intelligence, focusing on higher cognitive aspects of human-robot socialization such as relationship development, modeling of one's interactive partner, and reasoning about trust and deception. I use theories and methods from these fields to create robots that are capable of social interaction with an ordinary person in a variety of environments. My work has focused on the development of a framework, based on social psychology and game theory, that allows a robot to computationally represent its social interactions with a human. This framework has, in turn, led to insights into higher social phenomena such as trust, deception, and stereotyping, as well as computational methods that allow a robot to reason about whether a situation demands trust or warrants deception. With respect to applications, my primary interest lies in the areas of healthcare and search and rescue. Our research on human-robot trust has, for example, focused on emergency evacuation scenarios in which a person must decide whether or not to follow a robot's evacuation directions. Overall, my research strives to positively influence both the development of interactive robots and the people who choose to use those robots.
My Google Scholar page can be found here: Alan R. Wagner
Statements
Research Projects
Human-Robot Trust
Abstract: Trust. The term itself conjures vague notions of loving relationships and lifelong familial bonds. But is trust really so indefinable? The phenomenon of trust has been seriously explored by numerous researchers for decades. Moreover, the notion of trust is not limited to interpersonal interaction. Rather, trust underlies the interactions of employers with their employees, banks with their customers, and governments with their citizens. In many ways trust is a precursor to a great deal of normal interpersonal interaction.
For interactions involving humans and robots, an understanding of trust is particularly important. Because robots are embodied, their actions can have serious consequences for the humans around them. A great deal of research is currently focused on bringing robots out of labs and into people's homes and workplaces. These robots will interact with humans, such as children and the elderly, who are unfamiliar with the limitations of a robot. It is therefore critical that human-robot interaction research explore the topic of trust. In contrast to much of the prior work on trust, the research presented here does not begin with a model for trust. Rather, we begin with a very simple idea: if it is true that outcome matrices serve as a representation for interaction, then should it not also be true that some outcome matrices include trust while others do not? In other words, some interpersonal interactions require trust, yet others do not. If an outcome matrix can be used to represent all interactions, then it should also represent those interactions which require trust. Our task then becomes one of determining what the conditions for trust are.
The goals of this project are to develop algorithms that allow a robot to recognize whether a situation demands trust on the part of the robot or the human, determine how much trust is required, and select the most trusted partner.
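The idea of reading trust off an outcome matrix can be illustrated with a small sketch. This is not the published algorithm, only a simplified stand-in: it assumes a 2x2 matrix of trustor outcomes and three hypothetical conditions (the trustor's outcome depends on the trustee, a risky action has an upside if the trustee cooperates, and a downside if the trustee does not).

```python
# Illustrative sketch (not the published algorithm): testing whether a 2x2
# outcome matrix describes a situation that demands trust.
# matrix[trustor_action][trustee_action] holds the trustor's outcome.

def demands_trust(matrix, risky=0, safe=1, cooperate=0, defect=1):
    """A situation demands the trustor's trust when:
    1. the trustor's outcome depends on the trustee's action choice,
    2. the risky action beats the safe action if the trustee cooperates,
    3. the risky action costs the trustor if the trustee defects.
    """
    depends = matrix[risky][cooperate] != matrix[risky][defect]
    upside = matrix[risky][cooperate] > matrix[safe][cooperate]
    downside = matrix[risky][defect] < matrix[safe][defect]
    return depends and upside and downside

# Following an evacuation robot (risky) versus exiting alone (safe):
guidance = [
    [10, -10],  # follow the robot: fast exit if guidance is good, danger if not
    [2, 2],     # ignore the robot: slow but predictable exit
]
print(demands_trust(guidance))  # True
```

With the evacuation matrix above the function reports that trust is required; a matrix in which the trustor's outcomes do not depend on the trustee would not.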
- Paul Robinette, Robert Allen, Wenchen Li, Ayanna M. Howard, and Alan R. Wagner. "Overtrust of Robots in Emergency Evacuation Scenarios." Proceedings of ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016). Christchurch, New Zealand, pp. 101-108 [pdf]
- Paul Robinette, Alan R. Wagner, and Ayanna M. Howard. "Investigating human-robot trust in emergency scenarios: methodological lessons learned." Forthcoming
- Alan R. Wagner and Paul Robinette (2015). "Towards Robots that Trust: Human Subject Validation of the Situational Conditions for Trust." Interaction Studies, 16(1), pp. 89-117, 2015, [pdf]
- Robinette, P., Wagner, A. R., and Howard, A (2014). "Assessment of Robot Guidance Modalities Conveying Instructions to Humans in Emergency Situations" Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 14). Edinburgh, UK, 2014. [pdf]
- Robinette, P., Wagner, A. R., and Howard, A. (2014), "Modeling Human-Robot Trust in Emergencies", AAAI Spring Symposium, Stanford University [pdf].
- Alan Wagner (2013). "Developing Robots that Recognize when they are being Trusted" AAAI Spring Symposium, Stanford University. [pdf]
- Paul Robinette, Alan Wagner, Ayanna Howard (2013). "Building and Maintaining Trust Between Humans and Guidance Robots in an Emergency" AAAI Spring Symposium, Stanford University. [pdf]
- Alan R. Wagner and Ronald C Arkin (2011). "Recognizing Situations that Demand Trust." Proceedings of the 20th International Symposium on Robot and Human Interactive Communication (RO-MAN 2011). Atlanta, GA. [pdf]
- Alan R. Wagner (2009). "The Role of Trust and Relationships in Human-Robot Social Interaction." [pdf]
- Emergency Evacuation Experiment Overview [youtube]
Publications
Videos
Robot Deception
Abstract: Deception has a long and important history with respect to the study of intelligent systems. Primatologists note that the use of deception serves as an important potential indicator of theory of mind. From a roboticist's perspective, the use and detection of deception is an important area of study, especially with respect to military domains. But what is deception? Bond and Robinson define deception as a false communication that tends to benefit the communicator. In this project we use both game theory and interdependence theory as tools for exploring the phenomenon of deception. More specifically, we use an interdependence theory framework and game-theoretic notation to develop algorithms that allow a robot or artificial agent to recognize situations that warrant deception and to select the best deceptive strategy given knowledge of the mark (the individual being deceived). We use both simulation and experiments involving real robots to test the hypothesis that the effectiveness of a deceiver's strategy is related to the amount of knowledge the deceiver has concerning the mark. Our methodological approach of moving from definition, to representation, to algorithm ensures general applicability of our results to robots, agents, or possibly humans. Moreover, our exploration of the phenomenon of deception suggests methods by which deception can be reduced. This project also considers the ethical ramifications of creating robots capable of deception.
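The recognition step described above can be sketched in simplified form. This is an illustration, not the published method: it assumes a two-player outcome matrix and treats a situation as warranting deception when the deceiver's outcome depends on the mark's action and the two parties prefer different joint actions.

```python
# Illustrative sketch (not the published method): does a situation warrant
# deception? matrix[deceiver_action][mark_action] -> (deceiver, mark) outcomes.

def best_joint_action(matrix, player):
    """Joint action (i, j) that maximizes the given player's outcome."""
    pairs = [(i, j) for i in range(len(matrix)) for j in range(len(matrix[0]))]
    return max(pairs, key=lambda ij: matrix[ij[0]][ij[1]][player])

def warrants_deception(matrix):
    """A situation warrants deception when the deceiver's outcome depends
    on the mark's action (dependence) and the two parties prefer different
    joint actions (conflict)."""
    dependence = any(len({outcomes[0] for outcomes in row}) > 1 for row in matrix)
    conflict = best_joint_action(matrix, 0) != best_joint_action(matrix, 1)
    return dependence and conflict

# Hide-and-seek style example: the deceiver gains when the mark searches
# the wrong location, the mark gains when it searches the right one.
hide_and_seek = [
    [(-5, 5), (5, -5)],  # deceiver hides left; mark searches left / right
    [(5, -5), (-5, 5)],  # deceiver hides right
]
print(warrants_deception(hide_and_seek))  # True
```

In this toy hide-and-seek matrix both conditions hold, so a false communication (signaling the wrong hiding place) could benefit the deceiver.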
- Romanian translation courtesy of azoft.
- Russian translation courtesy of Coupofy.
- Danish translation courtesy of Nastasya Zemina.
- Vietnamese translation courtesy of Ngoc Thao Nguyen.
- Alan R. Wagner (2014). "Lies and Deception: Robots that use Falsehood as a Social Strategy." In: Robots that Talk and Listen, J. Markowitz ed., De Gruyter, pgs. 207-229 [pdf]
- Alan R. Wagner, Ronald C. Arkin (2010). "Acting Deceptively: Providing Robots with the Capacity for Deception." International Journal of Social Robotics, 3, pp. 5-26. The final publication is available at www.springerlink.com. [pdf]
- Alan R. Wagner and Ronald C Arkin (2009). "Robot Deception: Recognizing when a Robot Should Deceive." Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2009). Daejeon, South Korea. [pdf]
There has been a lot of international interest in this topic. I owe many thanks to the individuals in other countries who have translated this work into other languages; Romanian, Russian, Danish, and Vietnamese translations are linked above. If anyone would like to send me a link to a translation in another language, I'll post it as soon as I can!
Robot Social Perception
Abstract: In order to act socially, robots must sense and perceive various aspects of the social world around them. For instance, recognition of a person's age can inform how the robot should interact and what types of information it should provide. Moreover, context plays a critical role in determining whether a particular action (such as raising one's hand before speaking) is required, optional, or ludicrous. This project explores methods for providing robots the ability to perceive the social attributes of the world around them. Recently, in joint work with Zsolt Kira, I have begun to look at how computational representations from deep learning can be used to provide high-level perceptual information to a robot in a manner that is robust to one's environment.
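One common way to use learned representations for this kind of perception is to compare a feature vector against stored category prototypes. The sketch below is only a toy stand-in for that idea, not our published pipeline; the feature vectors and category names are invented for illustration.

```python
# Illustrative sketch (assumed data, not the published pipeline): classify a
# scene by comparing a deep feature vector against stored category
# prototypes using cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(embedding, prototypes):
    """Return the category whose prototype embedding is most similar."""
    return max(prototypes, key=lambda name: cosine(prototypes[name], embedding))

# Toy 4-d stand-ins for deep features of known environment types.
prototypes = {
    "hospital": (0.9, 0.1, 0.2, 0.1),
    "office":   (0.1, 0.8, 0.1, 0.3),
}
print(classify((0.8, 0.2, 0.1, 0.2), prototypes))  # hospital
```

In practice the embeddings would come from a trained network rather than hand-set tuples, but the matching step has the same shape.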
- Jigar Doshi, Zsolt Kira, and Alan R. Wagner (2015) "From Deep Learning to Episodic Memories: Creating Categories of Visual Experiences" Proceedings from the Third Annual Conference on Advances in Cognitive Systems (ACS 2015). Atlanta, GA [pdf]
- Wagner, A. R., and Doshi, J (2013). "Who, how, where: Using Exemplars to Learn Social Concepts" Proceedings of the International Conference on Social Robotics (ICSR 13). Bristol, UK, 2013, pp 481-490 [pdf]
- A first-person streaming video for environment identification [youtube]
Using Stereotypes to Reason about Interaction
Abstract: Psychologists note that humans regularly use categories to simplify and speed the process of person perception (Macrae & Bodenhausen, 2000). Macrae and Bodenhausen suggest that categorical thinking influences a human's evaluations, impressions, and recollections of the target. The influence of categorical thinking on interpersonal expectations is commonly referred to as a stereotype. For better or for worse, stereotypes have a profound impact on interpersonal interaction (Bargh, Chen, & Burrows, 1996; Biernat & Kobrynowicz, 1997). Information processing models of human cognition suggest that the formation and use of stereotypes may be critical for quick assessment of new interactive partners (Bodenhausen, Macrae, & Garst, 1998). From the perspective of a roboticist, the question then becomes: can the use of stereotypes similarly speed up the process of partner modeling for a robot?
This question is potentially critical for robots operating in complex, dynamic social environments, such as search and rescue. In such environments the robot may not have time to learn a model of its interactive partner through successive interactions. Rather, the robot will likely need to bootstrap its modeling of the partner with information from prior, similar partners. Stereotypes serve this purpose.
The goal of this project is to explore the creation and use of stereotypes by a robot to bootstrap the process of learning about new interactive human partners. Moreover, we hope to learn about the type of information necessary for a robot to model a human partner and how stereotypes fail.
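The bootstrapping idea can be sketched in miniature. This is not the cluster-based system from the papers below, only a nearest-neighbor stand-in; the perceptual features and behavior labels are invented for illustration.

```python
# Illustrative sketch (assumed features, not the published system): predict a
# new partner's behavior from perceptually similar prior partners.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def stereotype_for(new_features, prior_partners):
    """Bootstrap a model of a new partner: return the behavior of the most
    perceptually similar prior partner (a 1-nearest-neighbor stand-in for
    the cluster-based method)."""
    features, behavior = min(prior_partners,
                             key=lambda p: distance(p[0], new_features))
    return behavior

# Features: (wears_helmet, wears_badge, carries_axe) as 0/1 indicators.
prior = [
    ((1, 0, 1), "fight_fire"),      # firefighter-like partners
    ((0, 1, 0), "direct_traffic"),  # police-like partners
]
print(stereotype_for((1, 0, 0), prior))  # fight_fire
```

A helmeted partner without a badge is matched to the firefighter-like cluster, giving the robot an initial behavior prediction before any direct interaction; the prediction fails exactly when appearance and behavior come apart, which is one focus of the project.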
- Alan R. Wagner (2015). "Robots that Stereotype: Creating and Using Categories of People for Human-Robot Interaction." The Journal of Human-Robot Interaction, 4(2), pp. 97-124 [pdf]
- Alan R. Wagner (2012). "Using Cluster-based Stereotyping to Foster Human-Robot Cooperation" Proceedings of IEEE International Conference on Intelligent Robots and Systems (IROS 2012). Vilamoura, Portugal, 2012. [pdf]
- Alan R. Wagner (2012). "The Impact of Stereotyping Errors on a Robot's Social Development" Proceedings of IEEE International Conference on Development and Learning (ICDL-EpiRob 2012). San Diego, CA, 2012. [pdf]
- Alan R. Wagner (2010). "Using Stereotypes to Understand One's Interactive Partner" Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), Extended Abstract. Toronto, Canada. [pdf]
- Overview of stereotype learning and usage work in presented during a demo. [video]
- Slides presenting a portion of this material. [video]
- A brief video depicting the robot learning the occupational stereotype of a firefighter within the bounds of a coordination game. [video]
- A brief video depicting the robot learning the occupational stereotype of a EMT within the bounds of a coordination game. [video]
- A brief video depicting the robot learning the occupational stereotype of a police officer within the bounds of a coordination game. [video]
- A brief video demonstrating the robot's use of stereotypes with observations of the person's actions to predict their appearance. [video]
- A brief video demonstrating the robot's use of stereotypes to determine which perceptual feature is most distinguishing. [video]
Representing Social Interaction
- Abstract: This project explores the challenge of computationally representing a robot or agent's interactions. A representation of interaction suitable for implementation on a robot must be computational in nature, and yet, if the robot's interactions will also involve humans, then this representation must have meaningful connections to social psychology. For many reasons, we settled on the outcome matrix (aka the normal-form game) as a suitable representation of interaction which is implementable on a robot. The outcome matrix depicted to the right explicitly represents important information about an interaction, such as who is interacting and how the selection of specific actions will impact the other individual. Moreover, this representation has an extensive history in the fields of game theory, economics, operations research, neuroscience, and psychology. It is therefore a strong candidate for use on a robot.
The goals of this project are to develop algorithms and software that allow human-robot interaction researchers to 1) create outcome matrices of any and every situation the robot encounters and 2) use outcome matrix notation to describe their experiments formally in a manner that is useful and repeatable by other scientists.
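As a concrete illustration, an outcome matrix reduces to a small data structure. The class below is an assumed minimal design, not the project's actual software: it records who is interacting, their available actions, and the outcome each individual receives for every joint action.

```python
# Illustrative sketch (assumed class design): a minimal outcome matrix
# (normal-form game) capturing who is interacting, their action choices,
# and how joint action selection affects each individual.

class OutcomeMatrix:
    def __init__(self, individuals, actions):
        self.individuals = individuals  # e.g. ("robot", "human")
        self.actions = actions          # mapping: individual -> action tuple
        self.outcomes = {}              # joint action -> outcome tuple

    def set_outcome(self, joint_action, outcomes):
        """Store one outcome per individual for a joint action."""
        self.outcomes[joint_action] = outcomes

    def outcome(self, individual, joint_action):
        """Outcome the named individual receives for a joint action."""
        return self.outcomes[joint_action][self.individuals.index(individual)]

# A simple coordination situation between a robot and a human.
m = OutcomeMatrix(("robot", "human"),
                  {"robot": ("left", "right"), "human": ("left", "right")})
m.set_outcome(("left", "left"), (5, 5))
m.set_outcome(("right", "right"), (5, 5))
m.set_outcome(("left", "right"), (0, 0))
m.set_outcome(("right", "left"), (0, 0))
print(m.outcome("human", ("left", "left")))  # 5
```

Representing interactions this way is what lets the trust and deception algorithms above operate over the same structure.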
- Alan R. Wagner (2009). "Creating and Using Matrix Representations of Social Interaction." Proceedings of the 4th International Conference on Human-Robot Interaction (HRI 2009). San Diego, CA. [pdf]
- Alan R. Wagner (2008). "A Representation for Interaction" Proceedings of the ICRA 2008 Workshop: Social Interaction with Intelligent Indoor Robots (SI3R). Pasadena, CA, USA. [pdf]
- Alan R. Wagner (2009). "The Role of Trust and Relationships in Human-Robot Social Interaction." [pdf]
All Publications
Dissertation
- Alan R. Wagner (2009). "The Role of Trust and Relationships in Human-Robot Social Interaction." [pdf]
Book Chapters
- Paul Robinette, Alan R. Wagner, and Ayanna M. Howard. "Investigating human-robot trust in emergency scenarios: methodological lessons learned." Forthcoming
- Alan R. Wagner (2014). "Lies and Deception: Robots that use Falsehood as a Social Strategy." In: Robots that Talk and Listen, J. Markowitz ed., De Gruyter, pgs. 207-229 [pdf]
Journal Articles
- Paul Robinette, A. Howard, and Alan R. Wagner (2016). "The Effect of Robot Performance on Human-Robot Trust in Time-Critical Situations" IEEE Transactions on Human-Machine Systems, forthcoming
- Alan R. Wagner (2015). "Robots that Stereotype: Creating and Using Categories of People for Human-Robot Interaction." The Journal of Human-Robot Interaction, 4(2), pp. 97-124 [pdf]
- Alan R. Wagner and Paul Robinette (2015). "Towards Robots that Trust: Human Subject Validation of the Situational Conditions for Trust." Interaction Studies, 16(1), pp. 89-117, 2015, [pdf]
- Ronald Arkin, Patrick Ulam, and Alan R. Wagner (2012). "Moral Decision-making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception." Proceedings of the IEEE [pdf]
- Alan R. Wagner and Ronald C. Arkin (2010). "Acting Deceptively: Providing Robots with the Capacity for Deception." International Journal of Social Robotics, 3, pp. 5-26. The final publication is available at www.springerlink.com. [pdf].
- Alan R. Wagner and Ronald C. Arkin (2008). "Analyzing Social Situations for Human-Robot Interaction." Interaction Studies, 10(2). [pdf]
Conference Papers
- Paul Robinette, Robert Allen, Wenchen Li, Ayanna M. Howard, and Alan R. Wagner. "Overtrust of Robots in Emergency Evacuation Scenarios." Proceedings of ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016). Christchurch, New Zealand, pp. 101-108 [pdf]
- Paul Robinette, Ayanna Howard, and Alan R. Wagner (2015) "Timing is Key For Robot Trust Repair" Seventh International Conference on Social Robotics (ICSR 2015). Paris, France. [pdf]
- Jigar Doshi, Zsolt Kira, and Alan R. Wagner (2015) "From Deep Learning to Episodic Memories: Creating Categories of Visual Experiences" Proceedings from the Third Annual Conference on Advances in Cognitive Systems (ACS 2015). Atlanta, GA [pdf]
- Robinette, P., Wagner, A. R., and Howard, A (2014). "Assessment of Robot Guidance Modalities Conveying Instructions to Humans in Emergency Situations" Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 14). Edinburgh, UK, 2014. [pdf]
- Wagner, A. R., and Doshi, J (2013). "Who, how, where: Using Exemplars to Learn Social Concepts" Proceedings of the International Conference on Social Robotics (ICSR 13). Bristol, UK, 2013, pp 481-490 [pdf]
- Willy Barnett, Wagner, A. R., and Kathy Keeling (2013). "Social Robots and Older Adults: Some Ethical Concerns for Researchers" Proceedings of Fifth International Conference on Internet Technologies and Applications (ITA 13). Wrexham, North Wales, UK, 2013
- Alan R. Wagner (2012). "Using Cluster-based Stereotyping to Foster Human-Robot Cooperation" Proceedings of IEEE International Conference on Intelligent Robots and Systems (IROS 2012). Vilamoura, Portugal, 2012. [pdf]
- Alan R. Wagner (2012). "The Impact of Stereotyping Errors on a Robot's Social Development" Proceedings of IEEE International Conference on Development and Learning (ICDL-EpiRob 2012). San Diego, CA, 2012. [pdf]
- Alan R. Wagner and Ronald C Arkin (2011). "Recognizing Situations that Demand Trust." Proceedings of the 20th International Symposium on Robot and Human Interactive Communication (RO-MAN 2011). Atlanta, GA. [pdf]
- Alan R. Wagner and Ronald C Arkin (2009). "Robot Deception: Recognizing when a Robot Should Deceive." Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2009). Daejeon, South Korea. [pdf]
- Alan R. Wagner (2009). "Creating and Using Matrix Representations of Social Interaction." Proceedings of the 4th International Conference on Human-Robot Interaction (HRI 2009). San Diego, CA. [pdf]
- Patrick Ulam, Yoichiro Endo, Alan R. Wagner, Ronald C. Arkin (2007). "Integrated Mission Specification and Task Allocation for Robot Teams-Design and Implementation." Proceedings of IEEE International Conference on Robotics and Automation (ICRA 2007). Rome, Italy. [pdf]
- Alan R. Wagner and Ronald C Arkin (2006). "A Framework for Situation-based Social Interaction." Proceedings of the 15th International Symposium on Robot and Human Interactive Communication (RO-MAN 2006). Hatfield, United Kingdom. [pdf]
- Alan R. Wagner, Yoichiro Endo, Patrick Ulam, Ronald C. Arkin (2006). Multi-Robot User Interface Modeling. In Distributed Autonomous Robotics Systems 7. M. Gini and R. Voyles (eds.). Tokyo, Japan, Springer-Verlag. [pdf]
- Alan R. Wagner and Ronald C Arkin (2004) "Multi-Robot Communication-Sensitive Reconnaissance." Proceedings of IEEE International Conference on Robotics and Automation (ICRA 2004). New Orleans, LA, USA. [pdf]
- Alan R. Wagner and Ronald C Arkin (2003) "Internalized Plans for Communication-Sensitive Robot Team Behaviors." Proceedings of IEEE International Conference on Intelligent Robots and Systems (IROS 2003). Las Vegas, NV, USA. [pdf]
Technical Reports
- Alan R. Wagner, Ronald C. Arkin (2010) "Acting Deceptively: Providing Robots with the Capacity for Deception" Technical report GIT-GVU-10-01, College of Computing, Georgia Institute of Technology [pdf]
- Patrick Ulam, Yoichiro Endo, Alan R. Wagner, Ronald C. Arkin (2007) "Integrated Mission Specification and Task Allocation for Robot Teams - Part 2: Testing and Evaluation." Technical report GIT-GVU-07-02, College of Computing, Georgia Institute of Technology [pdf]
Workshops (refereed)
- Zsolt Kira, Alan R. Wagner, Chris Kennedy, Jason Zutty, Grady Tuell (2015). "STAC: A New Fusion Model for Complex Scene Characterization and Semantic Mapping" SPIE Conference on Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, Baltimore, USA, 2015.
- Robinette, P., Wagner, A. R., and Howard, A. (2014), "Modeling Human-Robot Trust in Emergencies", AAAI Spring Symposium, Stanford University [pdf].
- Alan Wagner (2013). "Developing Robots that Recognize when they are being Trusted" AAAI Spring Symposium, Stanford University. [pdf]
- Paul Robinette, Alan Wagner, Ayanna Howard (2013). "Building and Maintaining Trust Between Humans and Guidance Robots in an Emergency" AAAI Spring Symposium, Stanford University. [pdf]
- Alan R. Wagner (2011). "Outcome Matrix based Phrase Selection" AAAI Fall Symposium, Washington DC. [pdf]
- Ronald C. Arkin, Alan R. Wagner, Brittany Duncan (2009). "Responsibility and Lethality for Unmanned Systems: Ethical Pre-mission Responsibility Advisement" Proceedings of the ICRA 2009 Workshop on RoboEthics, Kobe, Japan. [pdf]
- Alan R. Wagner (2008). "A Representation for Interaction" Proceedings of the ICRA 2008 Workshop: Social Interaction with Intelligent Indoor Robots (SI3R). Pasadena, CA, USA. [pdf]
Abstracts (refereed)
- Alan R. Wagner (2010). "Using Stereotypes to Understand One's Interactive Partner" Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), Extended Abstract. Toronto, Canada. [pdf]
CV
[pdf]
Software
[Software]