Once the lecture is completed, we enter the post-production phase. The purpose of post-production is to support the student and teacher in reviewing material across all lectures for a given course. Because we aim at large-scale content generation, the creation or augmentation of review content must be automated. Audio and video records of what happened in the classroom should be automatically linked to the appropriate points in the prepared material, and the teacher's public annotations and the student's private notes should likewise be automatically integrated with the other streams of information to facilitate later review. All of this integration of media streams is enabled by the information captured during the live recording phase.
Recall that the ClassPad application generates a log of when the teacher or student advances from one slide to the next and when an annotation is made. When reconstructing the annotated views for later review, these logged events are used to tie the static information (prepared slides with student/teacher annotations) to the audio or video stream associated with that class. Figure 2 shows an example of an automatically generated Web presentation, with audio-enhanced links, from a single lecture in the HCI class. The top frame shows thumbnail sketches of all slides from the lecture, and the selected thumbnail image is magnified in the lower right frame. The lower left frame is divided into three main sections: a keywords section shows words associated with the file to facilitate content-based search; an audio section lists automatically generated audio links indicating the times in the lecture when that slide was visited; and a search link provides access to a form for simple keyword search across all lecture notes. When an audio link is selected, an audio client is launched and begins playing the recorded lecture from that point. We built our own streaming, indexable audio server and client players for this purpose.
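As a concrete illustration of this reconstruction, the following sketch shows how logged slide-transition events could be grouped by slide and turned into the audio links shown in Figure 2. The log format, the file name, and the URL fragment understood by the audio client are all assumptions made for the purposes of the sketch; the actual ClassPad log and player interface may differ.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    seconds: int  # offset from the start of the lecture recording
    slide: int    # slide number displayed at that moment

def parse_log(lines):
    """Parse lines like '137 goto 4' (a hypothetical format) into LogEvents."""
    events = []
    for line in lines:
        stamp, verb, slide = line.split()
        if verb == "goto":
            events.append(LogEvent(int(stamp), int(slide)))
    return events

def visits_per_slide(events):
    """Group visit times by slide, so each slide's page can list every
    time in the lecture at which that slide was displayed."""
    visits = {}
    for event in events:
        visits.setdefault(event.slide, []).append(event.seconds)
    return visits

def audio_links(slide, visits, audio_url="lecture.au"):
    """Emit one HTML anchor per visit; the fragment carries the start
    offset for the streaming audio client (the URL scheme is an assumption)."""
    return [f'<a href="{audio_url}#t={t}">{t // 60}:{t % 60:02d}</a>'
            for t in visits.get(slide, [])]
```

One consequence of grouping visits by slide is that a slide revisited during discussion naturally acquires several audio links, one per visit, which matches the audio section described above.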
The static nature of slides in the presentation teaching style makes it easy to automatically generate audio links. For other teaching styles, it is not always a simple matter to attach the audio links to parts of the prepared material (see the discussion of the registration problem in Section 4.2). On the left side of Figure 3 is another example of an automatically generated Web page containing audio links, produced from the discussion-style FCE seminar, in which the Apple MessagePad served as the note-taking device. The Web-accessible notes show the prepared outline augmented with notes inserted at the right location. Selecting a note launches the audio player at the point in the discussion at which the note was initially written. It is possible to hide and reveal these annotations, so that the original discussion outline can be viewed alone, if desired.
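A minimal sketch of how such timestamped notes might be merged into the prepared outline appears below. We assume, hypothetically, that each note records the outline item it was attached to and the time at which writing began; neither format is specified here, and the file name is invented for illustration.

```python
from collections import defaultdict

def merge_notes(outline_items, notes, audio_url="seminar.au"):
    """outline_items: list of (item_id, text) pairs in outline order.
    notes: list of (item_id, seconds, text) triples, one per handwritten
    note.  Returns HTML in which each outline item is followed by its
    notes, each linked to the audio at the moment the note was started."""
    by_item = defaultdict(list)
    for item_id, seconds, text in notes:
        by_item[item_id].append((seconds, text))
    html = ["<ul>"]
    for item_id, text in outline_items:
        html.append(f"<li>{text}")
        for seconds, note in sorted(by_item[item_id]):
            # The class attribute lets a hide/reveal control toggle all
            # notes at once, leaving the bare outline visible.
            html.append(f'<div class="note">'
                        f'<a href="{audio_url}#t={seconds}">{note}</a></div>')
        html.append("</li>")
    html.append("</ul>")
    return "\n".join(html)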
We did not have tools to automatically generate audio- or video-enhanced review material for the public notes-style AI course. Instead, audio and video links were generated manually from the videotaped lecture, and the analog video was digitized into a single audio file and into segments of QuickTime video. On the right side of Figure 3 is an example of a lecture with audio links (marked with an ``A'') and video links (marked with a ``V'') added by hand. An interesting research question is how recorded information from the lecture (e.g., gestures gleaned from the video recording, or segmentation of the audio) can be processed to determine when audio links should be created and how they can be meaningfully attached to the material [2].