Cardie and homeland security grant

Cornell News; Bill Steele

Researchers from Cornell University, the University of Pittsburgh, and the University of Utah have launched a project to train computers to scan text and determine whether its contents are fact or fiction. The Department of Homeland Security established the three-university consortium as one of four exploring sophisticated techniques for information analysis and security-related computational technologies. "Lots of work has been done on extracting factual information--the who, what, where, when," said Cornell computer science professor Claire Cardie. "We're interested in seeing how we would extract information about opinions." The research aims to bridge the gap between distinctly human, intuitive intelligence and more literal machine intelligence by giving meaning to sentences through novel machine-learning algorithms. Cardie says her team is also working to rate the sources a writer might cite. "We're making sure that any information is tagged with confidence. If it's low confidence, it's not useful information," she said.
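Cardie's remark about confidence tagging can be illustrated with a minimal sketch. The data structure, field names, and threshold below are hypothetical, not drawn from the actual system; the sketch only shows the general idea of attaching a confidence score to each extracted item and discarding low-confidence ones.

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    text: str          # the extracted statement or opinion
    source: str        # where it was found
    confidence: float  # 0.0 (no trust) to 1.0 (full trust)

def usable(items, threshold=0.5):
    """Keep only extractions tagged with sufficient confidence.

    The 0.5 cutoff is an illustrative choice, not a value from the project.
    """
    return [e for e in items if e.confidence >= threshold]

items = [
    Extraction("Product X is unreliable", "news article", 0.9),
    Extraction("Company Y will merge", "anonymous forum post", 0.2),
]

# Only the high-confidence extraction survives the filter.
print([e.text for e in usable(items)])
```

The point of the filter is exactly what Cardie describes: information below the confidence cutoff is treated as not useful and never reaches downstream analysis.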

http://www.news.cornell.edu/stories/Sept06/Cardie.homeland.ws.html

Date Posted: 12/06/2006

Bala's "Advanced Global Illumination" is released in 2nd edition

Kavita Bala, with co-authors Phil Dutre and Philippe Bekaert, has released the 2nd edition of their graduate textbook Advanced Global Illumination (A K Peters). This book provides the reader with a fundamental understanding of a broad class of rendering algorithms for realistic image synthesis.

Date Posted: 12/03/2006

Claire Cardie is co-PI on an NSF grant

Claire Cardie is co-PI on an NSF grant, "The Dynamics of Digital Deception in Computer Mediated Environments", together with PI Jeff Hancock (assistant professor, communications) and co-PI Mats Rooth (professor, linguistics).

In a Cornell Chronicle article, Hancock said, "By using their [Cardie and Rooth's] expertise in natural language processing and computational linguistics, we will see if we can determine if the very language of deceptive messages is different from that in messages which are not deceptive. We should have ample opportunity to look at lies because usually people tell one to two lies a day, and these lies range from the trivial to the very serious, including deception between friends and family, in the workplace, and in security and intelligence contexts."

http://www.news.cornell.edu/stories/Nov06/SS.Hancock.html

Date Posted: 11/15/2006
