The challenge we are addressing in Cyberguide is how to build mobile applications that usefully leverage information about the user's context. Initially, we are concerned with only a small part of that context, specifically location and orientation. Cyberguide provides a position-aware handheld tour guide for directing visitors around the GVU Lab during our monthly open houses.
Visitors to a GVU open house are typically given a map of the various labs and an information packet describing all of the projects being demonstrated at various sites. In building Cyberguide, we wanted to support the tasks of the visitor to the GVU open house. Collapsing all of this paper-based information into a handheld intelligent tour guide that knows where the visitor is and what they are looking at, and that can answer typical visitor questions, provides a testbed for research questions on mobile, context-aware application development.
We have used Cyberguide on several occasions to date and have collected some usability data to aid in future designs. A screen dump from an initial prototype built on the Apple Newton MessagePad is shown in Figure 1.
Figure 1: Screenshot of Cyberguide prototype
We wanted to build useful applications that might take advantage of the hardware developed in the PARCTab and InfoPad projects. There are a number of commercially available and relatively inexpensive handheld units that would suffice for our purposes, such as the Newton, a Magic Cap machine or a pen-based PC.
For positioning, we considered the Active Badge system, but rejected it for reasons of cost and long-term objectives. The Active Badge system combines position detection with communication. For room-level granularity of position, this is reasonable since the communications range is on par with the position resolution. With Cyberguide, we chose to separate the wireless communications capabilities from the positioning system, so we could seek out more cost-effective solutions for both.
The map is the view the visitor uses to navigate, and visualizing and manipulating it dominates the user interface of Cyberguide. The map can be viewed at varying levels of detail and scrolled around. The visitor's location and orientation are indicated on the map (the arrowhead in Figure 1), and the various demonstrations are also marked (the stars in Figure 1).
Information on a demonstration is revealed by an explicit pen touch on the map or by wandering "close" to a demo. Touching the name of the demo moves the user in hypertext fashion to an information space component (not shown in Figure 1) that presents relevant information about the project and the people associated with that particular demonstration.
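A minimal sketch of this proximity trigger follows (in Python, for illustration only; the prototype itself runs on the Newton, and the two-metre radius, demo records, and function names here are our assumptions, not details of the deployed system):

import math

# Sketch only: the "close" radius and demo records are invented examples,
# not values from the actual Cyberguide prototype.
CLOSE_RADIUS = 2.0  # metres

demos = {
    "demo-42": {"name": "Cyberguide", "pos": (10.0, 4.5),
                "info_page": "pages/cyberguide"},
}

def nearby_demos(tourist_pos, demos, radius=CLOSE_RADIUS):
    """Return the demos within `radius` of the tourist's map position."""
    tx, ty = tourist_pos
    return [d for d in demos.values()
            if math.hypot(d["pos"][0] - tx, d["pos"][1] - ty) <= radius]

# As the positioning component updates the tourist's location, the map
# component reveals information for any demo that comes into range.
for demo in nearby_demos((9.0, 5.0), demos):
    print("Show info for", demo["name"], "->", demo["info_page"])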
The positioning component provides constantly updated information on the location and orientation of the tourist. Our current prototype implements indoor positioning via a collection of TV remote control units acting as beacons, each broadcasting a distinct location ID. When within range of a beacon, a custom IR transceiver unit (consisting of a separate IR sensor and a Motorola 68332 processor connected via serial port to the Newton) translates the ID into a map location and orientation. The additional processor unit allows for further customized extensions to the positioning system, such as an electronic compass. Optionally, we could use the built-in Newton IR transceiver coupled with individual Newton beacons. This option requires no additional hardware, but is less flexible.
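A minimal sketch of the ID translation (in Python; the beacon IDs, coordinates, and headings are invented, and the real translation runs on the attached 68332 unit) might look like this:

# Sketch only: beacon IDs and map coordinates are invented examples.
# Because each IR beacon has a limited, directional range, hearing a
# given beacon fixes both an approximate position and an orientation.
BEACON_TABLE = {
    0x17: ((10.0, 4.5), 90.0),   # hallway outside a lab, facing east
    0x18: ((12.5, 4.5), 270.0),  # same hallway, facing west
}

def resolve_beacon(beacon_id):
    """Translate a received beacon ID into (map position, heading in degrees)."""
    fix = BEACON_TABLE.get(beacon_id)
    if fix is None:
        return None  # not a known beacon; keep the last position fix
    return fix

print(resolve_beacon(0x17))  # ((10.0, 4.5), 90.0)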
We have designed an application-level protocol on top of AppleTalk to facilitate communication between the Newton and the Internet. This communication mechanism permits a user to send e-mail, print documents, and eventually communicate with other Cyberguide users. Once we support wireless Internet communication, we will greatly reduce the storage demands on the handheld unit and allow connectivity to vast information sources, such as the World-Wide Web.
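A rough sketch of how such an application-level request might be framed is shown below (in Python, with JSON standing in for the actual record format; the field names and the "send_email" service are illustrative assumptions, not the protocol we implemented):

import json

def make_request(service, payload, sender="newton-07"):
    """Frame a request for the wired gateway: 4-byte length prefix, then body.

    Sketch only: the fields, the service name, and the JSON encoding are
    stand-ins for illustration, not the deployed protocol.
    """
    body = json.dumps({"service": service, "from": sender,
                       "payload": payload}).encode("utf-8")
    return len(body).to_bytes(4, "big") + body

# The handheld hands this frame to the AppleTalk transport; a gateway
# machine on the wired network relays the request to the Internet.
frame = make_request("send_email",
                     {"to": "friend@example.com", "text": "Greetings from GVU!"})
print(len(frame), "bytes to transmit")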
Our user evaluations also showed that we need to provide greater support for helping tourists find places of interest and for guiding them along the right path. This capability existed in the original prototypes, but was hidden from the tourist.
We can track where the tourist has been and use that information to provide better services, such as an automatically updated log of the visit or advice on where to find demonstrations similar to ones already seen. In the limited confines of the GVU Lab this may not be so important, but in a large museum or zoo it would be invaluable, especially if coupled with information about how crowded certain areas are (or whether the exhibits are currently visible and active).
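A minimal sketch of such a visit log and "similar demonstrations" advice (in Python; the tag vocabulary and demo names are invented for illustration):

# Sketch only: demo names and tags are invented examples.
TAGS = {
    "Cyberguide": {"mobile", "context-aware"},
    "PARCTab demo": {"mobile", "ubiquitous"},
    "Virtual GVU": {"virtual-reality"},
}

visited = []  # ordered log of demos the tourist has seen

def record_visit(demo):
    """Append a demo to the visit log (a real system would timestamp it)."""
    visited.append(demo)

def suggest_similar(demo):
    """Recommend unvisited demos sharing at least one tag with `demo`."""
    want = TAGS.get(demo, set())
    return [d for d, t in TAGS.items()
            if d != demo and d not in visited and want & t]

record_visit("Cyberguide")
print(suggest_similar("Cyberguide"))  # ['PARCTab demo']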
Viewing a large, detailed map on a small screen is a difficult problem. Much of the current HCI visualization research focuses on information spaces rather than physical spaces. We currently support any number of discrete zoom levels for viewing a map, but we do not feel this is the most effective technique for maintaining context. The current platform for the prototypes makes such visualization research difficult, but we are already investigating other, more open platforms.
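For concreteness, discrete zooming reduces to projecting map coordinates onto the screen at one of a fixed set of scales; the sketch below (in Python, with an assumed scale set and an assumed Newton-sized screen) illustrates the idea:

# Sketch only: the scale set and screen size are assumptions for illustration.
ZOOM_SCALES = (4.0, 2.0, 1.0, 0.5)  # map metres per screen pixel, coarse to fine

def world_to_screen(pos, center, zoom_index, screen=(240, 336)):
    """Project a map coordinate to pixels, centered on `center` at one zoom level."""
    scale = ZOOM_SCALES[zoom_index]
    sx = (pos[0] - center[0]) / scale + screen[0] / 2
    sy = (pos[1] - center[1]) / scale + screen[1] / 2
    return round(sx), round(sy)

# A demo one metre east of the tourist, at the third (1 m/pixel) zoom level:
print(world_to_screen((10.0, 4.5), center=(9.0, 4.5), zoom_index=2))  # (121, 168)

The abrupt jumps between such fixed scales are precisely what makes maintaining context difficult.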
Information depicted on the map is dynamic. Demonstrations sometimes change location during an open house, yet our prototype could not be updated dynamically to reflect the change. We have seen a similar problem with on-board navigational systems for automobiles that contain a large but static map of roads. We are currently implementing two solutions to this problem. The first is to use the communications infrastructure to dynamically update the information base. The other approach, more difficult but more flexible, is to use machine vision to recognize a demonstration at a given location.
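A minimal sketch of the first solution (in Python; the message shape and demo identifiers are assumptions for illustration, not our wire format):

demos = {"demo-42": {"name": "Cyberguide", "pos": (10.0, 4.5)}}

def apply_update(demos, msg):
    """Apply an update message pushed over the communications infrastructure.

    Sketch only: the message fields are invented for illustration.
    """
    if msg["type"] == "moved":
        demos[msg["id"]]["pos"] = tuple(msg["new_pos"])
    elif msg["type"] == "removed":
        demos.pop(msg["id"], None)

# A demonstration relocates midway through the open house:
apply_update(demos, {"type": "moved", "id": "demo-42", "new_pos": [3.0, 8.0]})
print(demos["demo-42"]["pos"])  # (3.0, 8.0)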