Kavita Bala received her Ph.D. in computer science from the Massachusetts Institute of Technology (MIT). After her doctorate, she worked as a post-doctoral researcher in the Program of Computer Graphics at Cornell. She joined CS as an assistant professor in fall 2002.

Bala's research is in the area of computer graphics; her interests include algorithms and systems for interactive rendering, image-based modeling and rendering, and augmented reality. Technology increasingly permits the acquisition of complex data sets, but rendering those data sets remains a challenge. Bala's focus is on scalable algorithms for rendering complex scenes with both high fidelity and interactive performance, work that applies to both synthetic and augmented-reality rendering. She has developed compact, high-dimensional representations and algorithms for interactive rendering of complex dynamic scenes with bounded approximation error, as well as hybrid hardware-and-software algorithms for fast, high-fidelity image generation.
“Adaptive Shadow Maps”. In Proceedings of SIGGRAPH 2001 (August, 2001). (With R. Fernando, S. Fernandez, and D. Greenberg)
“Interactive Ray-traced Scene Editing Using Ray Segment Trees”. Tenth Eurographics Workshop on Rendering. (June, 1999). (With J. Dorsey and S. Teller)
“Radiance Interpolants for Accelerated Bounded-error Ray Tracing”. ACM Transactions on Graphics 18(3) (July, 1999): 213–256. (With J. Dorsey and S. Teller)
Tarleton Gillespie received his bachelor's degree in English from Amherst College in 1994, and his master's in communication (1997) and Ph.D. (2002) from the University of California at San Diego.
His work focuses on the cultural and institutional arrangements surrounding media technologies, considering how power and practice are woven into their use and into cultural notions of their value. In particular, he is interested in the way that law and technology sometimes do battle but more often are brought together to regulate human activity.
His research uses recent disputes over copyright and the Internet to analyze the historical contest over the nature of authorship, law, and technology. He is interested in the history of copyright, which he argues has borrowed romanticized notions of authorship and traditionally neutral notions of technology to rationalize and naturalize one system of distributing creative work: a corporate-driven commercial market system.
His theoretical purpose is to reject the deterministic tone of most claims about media and cultural expression and to replace it with an understanding of technology as a complex material artifact, one that may nevertheless be articulated in ways that seem to support one ideological agenda or another. He finds this argument especially important to make now, precisely because the decisions made today will set the standards by which the Internet is developed and regulated in the future.
More broadly, Gillespie's interests range from the First Amendment and new technologies, to animation and children's media, to the Napster debate and the appropriation of commercial culture. He has taught several courses on law and technology, and on children and media, at the University of California at San Diego.
“The Stories Digital Tools Tell”. In New Media: Theses on Convergence, Media, and Digital Reproduction, (J. Caldwell and A. Everett, eds.) Routledge (forthcoming, February, 2003).
“Recognizable Ambiguity: Cartoon Imagery and American Childhood in Animaniacs”. In Symbolic Childhood (D. Cook, ed.). Peter Lang Publishing. (June, 2002). (With C. Mukerji)
Steve Marschner obtained a Sc.B. degree in mathematics and computer science from Brown University in 1993, and his Ph.D. from Cornell in 1998. He held research positions at Hewlett–Packard Labs, Microsoft Research, and Stanford University before joining the CS faculty in 2002.
Marschner’s research interests are in the field of computer graphics, focusing on realistic rendering, digital photography, and high-resolution geometric modeling. Recent projects include using photographs to measure the appearance of materials, particularly human skin; a new model for efficiently simulating translucent materials, which has been widely implemented by the film-effects industry; and ongoing work on processing very high resolution geometric data for the Digital Michelangelo Project at Stanford. The overall goal of his work is to increase the richness and realism of computer-generated images by using a mixture of computer vision and computer graphics techniques to import complexity from the real world.
“A Practical Model for Subsurface Light Transport”. In Proceedings of SIGGRAPH 2001 (August, 2001). (With H. Jensen, M. Levoy, and P. Hanrahan)
“Image-based BRDF Measurement Including Human Skin”. In Proceedings of 10th Eurographics Workshop on Rendering (June, 1999): 139–152. (With S. Westin, E. Lafortune, K. Torrance, and D. Greenberg)
“An Evaluation of Reconstruction Filters for Volume Rendering”. In Proceedings of Visualization ’94 (October, 1994): 100–107. (With R. Lobb)