How can we use rich hypertext representations before, during and after class? Can a student usefully access readings, lecture notes generated by the instructor, lecture notes taken by an individual student, lecture notes taken by all the students, and/or video of classroom lectures and interactions during a discussion? Can the student get the desired information efficiently? To what extent can the classroom interaction be extended beyond formal class hours by such a rich representation? What types of interaction can be facilitated if a rich representation of the course exists?
Current interactions often have a control token (a piece of chalk or pen) that indicates who can update a representation owned by the group, such as a blackboard. What happens when an electronic blackboard is distributed across large displays and the screens of students' personal interfaces, and each student can update the group representation at any time? How can group input be combined into meaningful action? What group input should be sought? What will happen to traditional (Socratic) methods of teacher-student conversation when those conversations are always interruptible by (variably long) periods of delay? How can (should?) the teacher maintain meaningful direction over the discussion among her students when anyone in the group might be free to take over the control token?
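To make this concrete, the following minimal sketch shows one possible single-token floor-control policy for such a shared electronic blackboard, in which a server grants and revokes a single token and the teacher may always preempt. The class and method names (SharedBoard, request_token, and so on) are our own illustrative assumptions, not part of any existing system.

```python
# Hypothetical sketch of a single-token floor-control policy for a shared
# electronic blackboard. All names here are illustrative assumptions.
from collections import deque

class SharedBoard:
    def __init__(self, teacher):
        self.teacher = teacher          # the teacher may always preempt
        self.holder = teacher           # current token holder
        self.waiting = deque()          # students queued for the token
        self.strokes = []               # the shared group representation

    def request_token(self, user):
        """Queue a request; grant immediately if the board is free."""
        if self.holder is None:
            self.holder = user
        elif user != self.holder and user not in self.waiting:
            self.waiting.append(user)

    def release_token(self, user):
        """Pass the token to the next waiting participant, if any."""
        if user == self.holder:
            self.holder = self.waiting.popleft() if self.waiting else None

    def preempt(self):
        """The teacher reclaims the floor to keep the discussion on track."""
        if self.holder != self.teacher:
            if self.holder is not None:
                self.waiting.appendleft(self.holder)
            self.holder = self.teacher

    def write(self, user, stroke):
        """Only the token holder may update the group representation."""
        if user == self.holder:
            self.strokes.append((user, stroke))
            return True
        return False
```

Variations on this policy (multiple tokens, time-limited holds, or no token at all) are exactly the design choices the questions above ask us to explore.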
We will also need electronic course materials. Abowd already has all the materials for an introductory course on Human-Computer Interaction available on the Web, as part of the teacher's packet for a book on HCI he coauthored for Prentice Hall. Through the multimedia courseware project, Guzdial, Stasko and Foley are producing other electronic repositories for various classes in the College of Computing. A recent project supervised by Abowd has produced a syllabus generation tool for Web-based course materials, which provides a more flexible interface to this repository of teaching materials. This summer, that tool will be rebuilt and significantly enhanced to handle other teaching materials, specifically indexed videotapes of lectures. We plan to test the HCI materials in the Winter of 1996, when Abowd next teaches a graduate introductory course on Human-Computer Interaction. We want to see how an active browser in the hands of every student will facilitate note-taking and communication among project groups. Since it will be a class on HCI, the experience will also teach the students a great deal about the effect an interface has on the work we perform. They can read about that in a book, but they will understand it better when they experience it first hand. This body of hypertext will be the beginning of an exploration of rich representations (question A) supporting asynchronous interaction. We will also examine how the availability of this rich representation helps or hinders the actual pedagogic style for such a lecture- and project-based course.
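The sketch below suggests the flavor of such a syllabus generation tool, assuming course materials are described in a simple structured form and rendered to a Web page with links into the repository. The data layout and function names are assumptions made for illustration, not the tool's actual design.

```python
# Hypothetical sketch of a syllabus generator: given a structured description
# of lectures and their materials, emit a simple HTML page with links.
from html import escape

def generate_syllabus(course_title, lectures):
    """lectures: list of dicts with 'week', 'topic', and 'materials'
    (a list of (label, url) pairs)."""
    rows = []
    for lec in lectures:
        links = ", ".join(
            f'<a href="{escape(url)}">{escape(label)}</a>'
            for label, url in lec["materials"]
        )
        rows.append(f"<li>Week {lec['week']}: {escape(lec['topic'])} ({links})</li>")
    return (
        f"<html><head><title>{escape(course_title)}</title></head>"
        f"<body><h1>{escape(course_title)}</h1><ol>{''.join(rows)}</ol>"
        f"</body></html>"
    )

if __name__ == "__main__":
    print(generate_syllabus("Introduction to HCI", [
        {"week": 1, "topic": "The human",
         "materials": [("Slides", "lectures/week1.html"),
                       ("Lecture video", "video/week1.mpg")]},
    ]))
```

Extending the tool to handle indexed videotapes of lectures amounts to adding video links, with time offsets, to the same structured description of each lecture.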
We will instrument the classroom with multiple video cameras that can track and film a lecturer, capture what is written on a board, and also film student questions and comments. Building on Colin Potts's Mercury project, we will implement sufficient voice recognition to create a transcript of the lecture, along with illustrations based on what was drawn or written on the board. This transcript would index into the video stream, so that selecting a point in the transcript could select the corresponding video sequence (and vice versa). We also intend to link the transcript with previous lectures and readings, to produce an "instant" multimedia textbook. These resources will be incorporated in our explorations of question A.
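A minimal sketch of the kind of time-aligned index this implies is given below: each transcript segment records the span of video it covers, so selecting text can seek the video and seeking the video can highlight the transcript. The names are illustrative assumptions, not an existing interface.

```python
# Hypothetical sketch of a time-aligned transcript index linking recognized
# speech segments to spans of the lecture video.
import bisect

class TranscriptIndex:
    def __init__(self):
        self.starts = []     # segment start times in seconds, kept sorted
        self.segments = []   # (start, end, text) tuples in the same order

    def add_segment(self, start, end, text):
        """Record a recognized utterance covering [start, end) in the video."""
        i = bisect.bisect(self.starts, start)
        self.starts.insert(i, start)
        self.segments.insert(i, (start, end, text))

    def video_time_for(self, segment_index):
        """Transcript -> video: where to seek when a segment is selected."""
        return self.segments[segment_index][0]

    def segment_for(self, video_time):
        """Video -> transcript: which segment is being spoken at this time."""
        i = bisect.bisect_right(self.starts, video_time) - 1
        if i >= 0:
            start, end, text = self.segments[i]
            if start <= video_time < end:
                return i, text
        return None
```

Links from transcript segments to previous lectures and readings would then be ordinary hypertext anchors attached to the same segments.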
As part of the instrumented classroom, students will use their personal interfaces to take notes that are synchronized to the multimedia lecture record described above. These notes could also be linked, in a content-based way, to previous lectures, other courses the student has taken, and readings. To what extent should these individual representations be shared, or merged into a common group representation? How should the group representations be incorporated into the individual representation? These issues are relevant to question A.
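One simple way to think about such synchronized notes is sketched below: each note carries the lecture time at which it was written, so it can be replayed against the transcript-video index and merged with other students' notes into a group view. The record layout and the merge policy (simple time ordering) are assumptions for illustration only.

```python
# Hypothetical sketch of timestamped student notes synchronized to the
# lecture record, with a trivial merge into a group representation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Note:
    author: str
    lecture_time: float     # seconds into the lecture when the note was taken
    text: str
    links: List[str] = field(default_factory=list)  # e.g. readings, earlier lectures

def merge_notes(*notebooks):
    """Combine several students' notebooks into one group representation,
    ordered by the point in the lecture each note refers to."""
    combined = [note for notebook in notebooks for note in notebook]
    return sorted(combined, key=lambda n: n.lecture_time)
```

Whether the merged view should be anonymous, attributed, or filtered is precisely the sharing question raised above.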
Abowd has recently started a long-term interaction with the Satellite Communications Division of Motorola, investigating software architectures for global information systems that will integrate wireless paging and communications technology with Internet-like information infrastructures.
Atkeson and Abowd are organizing a workshop sponsored by the NSF to explore the overlap between robotics, sensing, and ubiquitous/embedded/mobile computing.
Abowd and Atkeson have organized a discussion group on campus to focus on the development of future computing environments, with particular interest in education, intelligent mobile devices and extensions of Web technology. This group has already led to stronger connections with the FutureNet project and to advanced interdisciplinary design activities with the College of Architecture and the Manufacturing Research Center. A recent grant from the College of Computing has funded a project this summer involving intelligent mobile devices, and the results of that work will enable us to provide the right kind of device to students in Abowd's HCI class this winter.
Harpold is developing undergraduate and graduate courses in digital fiction that could draw on this digital classroom environment to explore fictional texts through collaborative reading and writing (asynchronously and synchronously). Students will weave their critical contributions into a corpus (centered on canonical digital fictions and interactive games) that evolves to reflect the shifting intentions of the course over multiple successive quarters.
We plan to apply to ARPA, ONR, and other military funding agencies interested in more effective military training.
Harpold is working with a group of colleagues in LCC who specialize in performance theory, drama and film to develop curricula and research projects that focus on the opportunities that asynchronous, distributed environments present for the study of aesthetic performance in multiple media.
We intend to pursue equipment donations from IBM (notebook computers), Motorola (PDAs and communication equipment), Apple (PDAs), Hewlett-Packard (digital tablets) and other relevant companies.
We would like to fund a Graduate Research Assistant 1/2 time for the period July 1, 1995 to June 30, 1996. In the College of Computing, this would cost $16,000 for salary and $1670 for computing charges, a total of $17,670 for direct costs.