Cornell CS undergraduates Qian Huang, Isay Katsman, Zeqi Gu, and Horace He—members of the Cornell Undergraduate Vision and Learning (CUVL) club—under the direction of CS Professor Serge Belongie and Ser-Nam Lim (of Facebook AI), presented their research "Enhancing Adversarial Example Transferability with an Intermediate Level Attack" at the International Conference on Computer Vision (ICCV 2019) in Seoul, South Korea, as reported earlier.
Following the presentation of research that benefited directly from Facebook support, the tech giant feted the one-year-old group with another substantial grant. Read more about their successes in the Cornell Chronicle story “CS undergrads’ research sets sights on image hackers.” As reported by Melanie Lefkowitz:
[CS Professor Serge] Belongie is working with Facebook on a project to combat “deepfakes”—faked audio and video created using artificial intelligence.
The [Cornell Undergraduate Vision and Learning (CUVL)] club used the money to purchase graphics processing units—costly, powerful processors that are necessary for training most machine learning models and generally available only to graduate students and faculty. On central processing units, the general-purpose chips found in most computers, the thousands or millions of iterations needed to train machine learning algorithms could take weeks or months.
“If you have a good idea but you can’t verify it because it’s going to take a year to do the computation, then you can’t really do the research,” Huang said. “So now we have the tools we need to do the work.”
Facebook has since donated another [round of funding] to the club, which will pay for eight more graphics processing units, the students said.
In their paper, the students tackled the problem of adversarial examples—tiny tweaks to an image that are imperceptible to the human eye but that cause a neural network tasked with classifying images to misclassify them. Adversarial examples created by hackers or others with malicious intent could potentially disorient autonomous cars, for example, or subvert image recognition systems.
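The core idea—that a perturbation far too small to notice can flip a model's prediction—can be sketched with a toy example. The snippet below is an illustrative assumption using a simple linear classifier and a sign-of-the-gradient perturbation (in the spirit of the classic "fast gradient sign" construction); it is not the Intermediate Level Attack from the students' paper.

```python
import numpy as np

# Toy illustration of an adversarial example (NOT the paper's method):
# a linear classifier in high dimension, where each "pixel" is nudged
# by a tiny amount eps, yet the predicted class flips.
rng = np.random.default_rng(0)
d = 10_000                      # think of a 100x100 "image"

w = rng.normal(size=d)          # classifier weights
x = rng.normal(size=d)          # a clean input; sign(w @ x) is the label

score = w @ x

# For a linear model the gradient of the score w.r.t. x is just w.
# Choose eps barely large enough to flip the sign of the score:
# it works out to roughly |score| / sum(|w|), which is tiny in high dimension.
eps = (abs(score) + 1.0) / np.abs(w).sum()

# Nudge every coordinate by at most eps, against the current prediction.
x_adv = x - eps * np.sign(score) * np.sign(w)

adv_score = w @ x_adv
print(f"eps per pixel: {eps:.4f}")
print(f"clean score:  {score:+.2f}  ->  adversarial score: {adv_score:+.2f}")
```

Because the per-coordinate change `eps` shrinks as the input grows larger (roughly like 1/√d here), the perturbed input is numerically almost indistinguishable from the original even though the classifier's decision reverses.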
Read more at the Cornell Chronicle.
Read earlier story on CUVL.