Lecture Browser


Camera Layout in Philips 101

The goal of this project is to support the digitization, storage, and on-line browsing of the lectures given in Philips 101. The room will support three cameras in the positions shown in the diagram above. Camera 2 will capture the blackboard/projection screen and the podium, providing a wide-angle view. Camera 3 will focus on the lecturer, using either the built-in tracking software or software developed by the CS department. Camera 1 will look out on the audience. Using three movable cameras will capture much more of a sense of "room presence" than a single fixed camera would. Each camera will be connected to a PC in a back room. The PCs will digitize the incoming audio and video, upload it to a web server, and control the cameras through their serial interface. An additional PC at the podium will be loaded with software used to control all three cameras; this PC will serve as the central "control panel."
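
As a rough sketch of how the podium PC might drive the three camera PCs, the fragment below uses the Tcl-DP RPC primitives (dp_MakeRPCClient, dp_RPC) to fan a command out to each machine. The host names, the port number, and the camera_preset command are assumptions, and the script would run inside a Tcl-DP interpreter such as dpwish.

    # Hypothetical fan-out from the control-panel PC to the three camera PCs.
    # Host names, the port, and the remote camera_preset command are assumptions;
    # dp_MakeRPCClient and dp_RPC are the Tcl-DP RPC calls.
    set cameraHosts {cam1-pc cam2-pc cam3-pc}   ;# hypothetical host names
    set port 4545                               ;# hypothetical control port

    foreach host $cameraHosts {
        set conn($host) [dp_MakeRPCClient $host $port]
    }

    # Ask every camera PC to recall a stored preset, e.g. "podium" or "audience".
    proc recallPreset {name} {
        global conn cameraHosts
        foreach host $cameraHosts {
            dp_RPC $conn($host) camera_preset $name
        }
    }

    recallPreset podium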

Hardware

The cameras are remote-controlled Sony EVI-D31s, driven through Sony's VISCA protocol, and support features such as pan/tilt/zoom, auto focus, backlight compensation, and object tracking. The camera video is digitized by MPEGator cards from Darim Vision Co., Ltd. The controlling PCs are all Pentium Pros running Windows 95.
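
To give a flavor of the serial control, the following minimal Tcl sketch sends a VISCA pan/tilt command to a camera. The COM port and the speed bytes are assumptions, while the packet layout (address byte 0x81 for the first camera, command body, 0xFF terminator) follows the published VISCA command set.

    # Minimal sketch: drive camera 1 (VISCA address 0x81) over a serial port.
    # The COM port name and the speed bytes are assumptions.
    set com [open COM2: r+]
    fconfigure $com -mode 9600,n,8,1 -translation binary -buffering none

    # Pan/TiltDrive: 8x 01 06 01 VV WW 0p 0q FF, where VV/WW are pan/tilt
    # speeds and p/q pick the directions (03 = stop).  Here: tilt up.
    puts -nonewline $com \
        [binary format c* {0x81 0x01 0x06 0x01 0x08 0x08 0x03 0x01 0xFF}]

    after 1000   ;# let the camera move for a second

    # Stop: direction bytes 03 03 mean "pan stop, tilt stop".
    puts -nonewline $com \
        [binary format c* {0x81 0x01 0x06 0x01 0x08 0x08 0x03 0x03 0xFF}]
    close $com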

Software

The Sony VISCA cameras accept a command language and can be controlled both by software and by a hand-held remote control. The control software has been written entirely in Tcl-DP/Tk. The video capture on the MPEGator cards is also amenable to software control; the start/stop-capture commands are sent from a Tcl/Tk user interface via a DLL written in C.
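
A capture user interface along those lines might look like the sketch below. The DLL file name and the mpegStart/mpegStop commands it would register are assumptions; load is simply the standard Tcl mechanism for binding a C extension into the interpreter.

    # Hypothetical capture control panel.  mpegator.dll and the mpegStart /
    # mpegStop commands it registers are assumptions.
    package require Tk
    load mpegator.dll

    # Append a time-stamped entry to the lecture log used for post-processing.
    proc logEvent {what} {
        set f [open lecture.log a]
        puts $f "[clock seconds] $what"
        close $f
    }

    button .start -text "Start capture" -command {
        mpegStart lecture.mpg    ;# assumed command: begin digitizing to a file
        logEvent capture-start
    }
    button .stop -text "Stop capture" -command {
        mpegStop                 ;# assumed command: finish the MPEG file
        logEvent capture-stop
    }
    pack .start .stop -side left -padx 4 -pady 4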

Once the lecture concludes, the MPEG files are shipped from each PC to a web server, where they are automatically post-processed. The post-processing follows predetermined rules and is parameterized by the log data recorded continuously as the lecture proceeds. The splicing, rearranging, etc. of the video streams will be done with the help of Rivl and CMT.
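
As a purely hypothetical illustration of that parameterization, the sketch below reads a time-stamped event log and turns it into a list of splice points that a Rivl/CMT script could then act on; the log format and the camera-selection rules are invented for the example.

    # Hypothetical post-processing parameterization: read the time-stamped
    # event log and decide which camera's stream to splice in at each point.
    proc readLog {file} {
        set events {}
        set f [open $file r]
        while {[gets $f line] >= 0} {
            lappend events $line      ;# each line: "<seconds> <event-name>"
        }
        close $f
        return $events
    }

    proc splicePlan {events} {
        # Assumed rule of thumb: cut to the audience camera on questions,
        # to the wide shot on slide changes, and to the lecturer otherwise.
        set plan {}
        foreach e $events {
            lassign $e t what
            switch -- $what {
                question     { lappend plan [list $t camera1] }
                slide-change { lappend plan [list $t camera2] }
                default      { lappend plan [list $t camera3] }
            }
        }
        return $plan
    }

    puts [splicePlan [readLog lecture.log]]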

Once the final video is ready, it can be played back through a web browser, synchronized with high-quality images of the lecture slides (if the images are available), so that as the lecture is viewed, the slide images flip in concert in a separate window on the web page. Conversely, if the user selects a particular slide, the video is automatically fast-forwarded or rewound to the point in the lecture where that slide first appeared. The web interface will look something like this:


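The synchronization itself only needs a small piece of bookkeeping: a table mapping each slide to the time it first appears, consulted both when flipping slides during playback and when seeking the video after a slide is selected. The Tcl sketch below illustrates this; the slide times and the videoSeek command are hypothetical.

    # Hypothetical slide/video synchronization table.  The times and the
    # videoSeek command are assumptions.
    set slideTimes {
        1    0
        2   95
        3  240
        4  610
    }   ;# slide number -> seconds into the lecture when it first appears

    # A slide was selected in the browser: seek the video to its first appearance.
    proc gotoSlide {n} {
        global slideTimes
        videoSeek [dict get $slideTimes $n]
    }

    # The playback position changed: return the slide currently on screen.
    proc slideAt {t} {
        global slideTimes
        set current 1
        dict for {n start} $slideTimes {
            if {$start <= $t} { set current $n }
        }
        return $current
    }
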
This research is supported by DARPA (contract N00014-95-1-0799), Intel, Xerox, Microsoft, and Kodak.
