Artificial Intelligence Seminar

Spring 2017
Friday 12:00-1:15
Gates Hall 122

http://www.cs.cornell.edu/courses/CS7790/2017sp/

 

The AI seminar will meet weekly for lectures by graduate students, faculty, and researchers emphasizing work-in-progress and recent results in AI research. Lunch will be served starting at noon, with the talks running between 12:15 and 1:15. The new format is designed to allow AI chit-chat before the talks begin. Also, we're trying to make some of the presentations less formal so that students and faculty will feel comfortable using the seminar to give presentations about work in progress or practice talks for conferences.

If you or others would like to be added to or removed from this announcement list, please contact Jessie White at jsw332@cornell.edu.

 

February 3rd, 2017

Speaker: Kilian Weinberger, Cornell

Host: Ross Knepper

Title: Deep Learning with Dense Connectivity

Abstract: Although half a century has passed since Frank Rosenblatt's original work on multi-layer perceptrons, modern neural networks are still surprisingly similar to his original ideas. In this talk I will question one of the most fundamental design choices of neural networks in the context of new learning scenarios. As networks have become much deeper than had been possible, or even imagined, in the 1950s, it is no longer clear that the layer-by-layer connectivity pattern is a well-suited architecture. In the first part of the talk I will show that randomly removing layers during training can speed up the training process, make it more robust, and ultimately lead to better generalization. We refer to this process as learning with stochastic depth, as the effective depth of the network varies for each minibatch. In the second part of the talk I will propose an alternative connectivity pattern, Dense Connectivity, which is inspired by the insights we obtained training with stochastic depth. Dense connectivity leads to substantial reductions in parameter counts and significant improvements in generalization.
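The stochastic-depth idea described in the abstract can be sketched in a few lines. This is an illustrative toy on scalars with hypothetical names, not the speaker's implementation, which operates on residual blocks of a deep convolutional network:

```python
import random

def stochastic_depth_forward(x, layers, survival_prob=0.8, training=True, rng=None):
    """Forward pass through residual blocks, each randomly dropped in training.

    `layers` is a list of callables f; each block computes x + f(x).
    At test time every block is kept, with f(x) scaled by the survival
    probability so that expected activations match training.
    """
    rng = rng or random.Random()
    for f in layers:
        if training:
            if rng.random() < survival_prob:
                x = x + f(x)  # block survives for this minibatch
            # otherwise only the identity shortcut is used, so the
            # effective depth of the network shrinks for this minibatch
        else:
            x = x + survival_prob * f(x)  # deterministic, rescaled at test time
    return x
```

Because a different random subset of blocks is active for each minibatch, the effective depth varies during training, which is the property the abstract highlights.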

“The AI-Seminar is sponsored by Yahoo!”


February 10th, 2017

Speaker: Ross Knepper, Cornell

Host:

Title: A Framework for Robots Using Implicit Communication

Abstract: Implicit communication is defined as any exchange of information that leverages context for its interpretation. Until recently, robots have largely ignored context, and they have therefore been extremely literal in their communicative behaviors. By contrast, much of human communication is implicit. Robots that are ignorant of implicit communication create confusion and sow distrust by failing to understand how their actions will be interpreted by people. In this talk, I compare several domains that have studied implicit communication, including implicature from linguistics and legible motion from robotics. I give the conditions under which implicit communication is possible, and I present a mathematical framework by which robots can generate and understand implicit communication.

Bio:


February 17th, 2017

Speakers: Paul Upchurch, Jacob Gardner & Geoff Pleiss

Host: Ross Knepper

Title: Deep Feature Interpolation: Changing Image Content With Neural Networks

Abstract: Photo-realistic editing of images is a challenging task; producing a CG movie, for example, requires hundreds of skilled artists and programmers. Recent advances in computer vision and graphics have enabled certain types of editing (for example, artistic style editing) in an automated and data-driven way by leveraging deep convolutional feature spaces. In this talk, we describe our recent work focused on changing the content of an image with a data-driven, model-free approach. This method is easy to implement and general, allowing us to change facial expressions, add facial hair, and fill in missing regions of photos. When combined with a technique for automatically aligning regions of faces, this method can be used to perform high-resolution face editing that dramatically outperforms existing machine learning techniques for data-driven content editing, and even outperforms professional artwork in some cases.
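The core operation behind this kind of content editing, shifting an image's deep feature vector along a direction computed from two labeled sets (say, faces without and with facial hair), can be sketched as follows. This is a hedged toy with hypothetical helper names, not the speakers' pipeline, which works in convolutional feature space and maps the result back to pixels:

```python
def attribute_direction(source_feats, target_feats):
    """Difference of the mean feature vectors of two labeled sets.

    E.g. source_feats = features of images without the attribute,
    target_feats = features of images with it.
    """
    mu_s = [sum(col) / len(source_feats) for col in zip(*source_feats)]
    mu_t = [sum(col) / len(target_feats) for col in zip(*target_feats)]
    return [t - s for s, t in zip(mu_s, mu_t)]

def edit_features(feat, direction, alpha=1.0):
    """Shift one image's features along the attribute direction."""
    return [f + alpha * d for f, d in zip(feat, direction)]
```

In the actual method the edited features would then be inverted back into an image; alpha controls the strength of the edit.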


 

February 24th, 2017

 


Speaker: Albert Gordo, Xerox

Host: Sarah Tan

Title: Learning deep image representations for visual search.

Abstract: In this talk I will discuss two of our recent papers on learning deep representations for image search in an end-to-end manner. In the first paper [ECCV'16 / IJCV] we address the problem of instance-level retrieval, where the goal is to retrieve images that contain instances of the same object as the query, and where, so far, deep learning had obtained only underwhelming results. We show that a network architecture tailored for retrieval, together with a siamese training procedure and high-quality training data (which can be obtained with an automatic cleaning procedure), allows us to learn models that significantly outperform the current state of the art on standard benchmarks, including both deep and traditional methods. In the second paper [CVPR'17, under review] we move to the task of semantic retrieval, where the goal is to retrieve images that contain the same semantics as the query image, and where we leverage human captions at training time to learn a visual embedding that preserves this semantic similarity. As a by-product of the learning procedure, our model allows us to perform multimodal queries (image + text modifiers) and to provide visual explanations by highlighting the regions of the images that contributed the most to their matching.


March 3rd, 2017

Speaker: Erik Andersen, Cornell

Title: Automatic Instructional Scaffolding through Cognitive Modeling

Abstract: A key challenge in education is designing engaging instructional content that can be tailored to the needs of each student. Our research group is on a quest to do this automatically by building cognitive models of the knowledge we want to teach, and leveraging these models to generate, sequence, and optimize learning materials. I will present recent work applying these techniques to diverse educational topics such as foreign language, programming, math, and information security. I will also show how we can deploy and evaluate these approaches through video games and online tools with thousands of users.


March 10th, 2017

Speaker: Andrew Wilson, Cornell

Host: Kilian Weinberger

Title:  Deep Learning with Uncertainty

Abstract: In this talk, we approach model construction from a probabilistic perspective. First, we introduce a scalable Gaussian process framework capable of learning expressive kernel functions on large datasets. We then develop this framework into an approach for deep kernel learning, with non-parametric capacity, inductive biases given by deep architectures, full predictive distributions, and automatic complexity calibration. We will consider applications in image inpainting, crime prediction, epidemiology, counterfactuals, autonomous vehicles, astronomy, and human learning, including very recent state-of-the-art results.

Bio: Andrew Gordon Wilson joined ORIE at Cornell University in August 2016 as an assistant professor.  Previously, he was a research fellow in the machine learning department at CMU with Eric Xing and Alex Smola.  He completed his PhD in machine learning with Zoubin Ghahramani at the University of Cambridge.  Andrew specializes in kernel methods, deep learning, and probabilistic modelling.


March 17th, 2017

Speaker: Rahmtin Rotabi, Cornell

Host: Cristian Danescu-Niculescu-Mizil

Talk 1: Competition and Selection Among Conventions 

Abstract: In many domains, a latent competition among different conventions determines which one will come to dominate. These competitions happen in political rhetoric, terminology in technical contexts, and similar settings. In analyzing the dynamics of conventions over time, however, even with detailed on-line data, one encounters two significant challenges. First, as conventions evolve, the underlying substance of their meaning tends to change as well; and such substantive changes confound investigations of social effects. Second, the selection of a convention takes place through the complex interactions of individuals within a community, and contention between the users of competing conventions plays a key role in the convention's evolution. 

In this work we study a setting in which we can cleanly track the competition among conventions while controlling for changes in substance and taking into account the interactions of individuals. Our underlying data comes from authoring conventions in the source files of articles on the e-print arXiv, covering 25 years and over a million papers. 

__________________________________________


Talk 2: Detecting Strong Ties Using Network Motifs 

Abstract: Detecting strong ties among users in social and information networks is a fundamental operation that can improve performance on a multitude of personalization and ranking tasks. There are a variety of ways a tie can be deemed "strong", and in this work we use a data-driven (or supervised) approach by assuming that we are provided a sample set of edges labeled as strong ties in the network. Such labeled edges are often readily obtained from the social network, as users often participate in multiple overlapping networks via features such as following and messaging. These networks may vary greatly in size, density, and the information they carry: for instance, a heavily-used dense network (such as the network of followers) commonly overlaps with a secondary sparser network composed of strong ties (such as a network of email or phone contacts). This setting leads to a natural strong tie detection task: given a small set of labeled strong tie edges, how well can one detect unlabeled strong ties in the remainder of the network?

In this talk we will approach this problem using small network motifs, and we show that under extreme sparsity our method manages to detect strong ties with high precision given only this network information.


March 24th, 2017

Speakers: Jack Hessel & Liye Fu

Host: Cristian Danescu-Niculescu-Mizil

Liye Fu's talk:

Title: When confidence and competence collide: Effects on online decision-making discussions

Abstract: Group discussions are a way for individuals to exchange ideas and arguments in order to reach better decisions than they could on their own. One of the premises of productive discussions is that better solutions will prevail, and that the idea selection process is mediated by the (relative) competence of the individuals involved. However, since people may not know their actual competence on a new task, their behavior is influenced by their self-estimated competence — that is, their confidence — which can be misaligned with their actual competence. 

Our goal in this work is to understand the effects of confidence-competence misalignment on the dynamics and outcomes of discussions. To this end, we design a large scale natural setting, in the form of an online team-based geography game, that allows us to disentangle confidence from competence and thus separate their effects. 

We find that in task-oriented discussions, the more-confident individuals have a larger impact on the group’s decisions even when these individuals are at the same level of competence as their teammates. Furthermore, this unjustified role of confidence in the decision-making process often leads teams to under-perform. We explore this phenomenon by investigating the effects of confidence on conversational dynamics. For example, we take up the question: do more-confident people introduce more ideas than the less-confident, or do they introduce the same number of ideas but their ideas get more uptake? Moreover, we show that the language people use is more predictive of a person’s confidence level than of their actual competence. This also suggests potential practical applications, given that in many settings, true competence cannot be assessed before the task is completed, whereas the conversation can be tracked during the course of the problem-solving process.

This is joint work with Cristian Danescu-Niculescu-Mizil and Lillian Lee.

Jack Hessel's Talk:

Title: Cats and Captions vs. Creators and the Clock: Comparing Multimodal Content to Context in Predicting Relative Popularity

Abstract: The content of today's social media is becoming richer and richer, increasingly mixing text, images, videos, and audio. It is an intriguing research question to model the interplay between these different modes in attracting user attention and engagement. But in order to pursue this study of multimodal content, we must also account for context: timing effects, community preferences, and social factors (e.g., which authors are already popular) also affect the amount of feedback and reaction that social-media posts receive. In this work, we separate out the influence of these non-content factors in several ways. First, we focus on ranking pairs of submissions posted to the same community in quick succession (e.g., within 30 seconds); this framing encourages models to focus on time-agnostic and community-specific content features. Within that setting, we determine the relative performance of author vs. content features. We find that victory usually belongs to "cats and captions," as visual and textual features together tend to outperform identity-based features. Moreover, our experiments show that when considered in isolation, simple unigram text features and deep neural network visual features yield the highest accuracy individually, and that the combination of the two modalities generally leads to the best accuracies overall.

This is joint work with David Mimno and Lillian Lee.


March 31st, 2017

Speaker: Xun Huang, Cornell

Host: Serge Belongie

Title: Generative Image Modeling

Abstract: While deep neural networks have made tremendous progress in visual recognition, it remains an open question whether they can exhibit creativity as humans do. In this talk I will present two of our recent papers that make progress towards computer vision models that are able to create novel images. The first problem I will talk about is known as style transfer: creating artworks that combine the content of one image with the style of another. The original approach of Gatys et al. is flexible enough to encode arbitrary styles, but is too slow to be practical. Later advancements have greatly improved the speed, but at the cost of being restricted to a single style or a small set of fixed styles. We solve this fundamental speed-flexibility dilemma for the first time with a simple adaptive instance normalization (AdaIN) layer [ICCV'17, under review]. Our style transfer method is nearly three orders of magnitude faster than the original, opening the door to real-time applications without sacrificing any flexibility. Next I will focus on the problem of unconstrained image generation, in which the neural network has to generate images from scratch without any "hint" from humans. I will introduce our stacked generative adversarial networks [CVPR'17], which greatly improve image generation quality with an intuitive stacked architecture.
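The AdaIN layer mentioned in the abstract has a simple closed form: it renormalizes the content features to have the mean and standard deviation of the style features. A minimal sketch on flat feature lists (per channel in the real network; this is an illustration, not the paper's code):

```python
import statistics

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization over one feature channel.

    Output = sigma(style) * (content - mu(content)) / sigma(content) + mu(style),
    i.e. the content features are shifted and scaled to match the style
    statistics, which is what lets a single network handle arbitrary styles.
    """
    mu_c, sd_c = statistics.fmean(content), statistics.pstdev(content)
    mu_s, sd_s = statistics.fmean(style), statistics.pstdev(style)
    return [sd_s * (x - mu_c) / (sd_c + eps) + mu_s for x in content]
```

Because the operation is just a per-channel affine transform, it adds negligible cost at run time, which is how the method keeps feed-forward speed while remaining flexible.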

Bio: Xun Huang is a first-year PhD student at Cornell University, advised by Prof. Serge Belongie. His research interests include machine learning (especially deep learning) and its applications to computer vision. Before coming to Cornell, he received his bachelor's degree from Beihang University in China in 2016 and worked closely with Prof. Zhuowen Tu at UCSD. He has published first-author papers at all of the top conferences in computer vision (ICCV, CVPR, ECCV).


April 7th, 2017

SPRING BREAK - NO SEMINAR


April 14th, 2017

ACSU LUNCH - NO SEMINAR


April 21st, 2017

Speaker: Stefanos Nikolaidis

Host:

Title:

Abstract:

Bio:


April 28th, 2017

Speaker: Timnit Gebru

Host:

Title: Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US

Abstract: Targeted socio-economic policies require an accurate understanding of a country’s demographic makeup. To that end, the United States spends more than 1 billion dollars a year gathering census data such as race, gender, education, occupation, and unemployment rates. Compared to the traditional method of collecting surveys across many years, which is costly and labor-intensive, data-driven machine learning approaches are cheaper and faster, with the potential to detect trends in close to real time. In this work, we leverage the ubiquity of Google Street View images and develop a computer vision pipeline to predict income, per capita carbon emission, crime rates, and other city attributes from a single source of publicly available visual data. We first detect cars in 50 million images across 200 of the largest US cities and train a model to determine demographic attributes using the detected cars. To facilitate our work, we have collected the largest and most challenging fine-grained dataset reported to date, consisting of over 2600 classes of cars comprised of images from Google Street View and other web sources, classified by car experts to account for even the most subtle visual differences. We use this data to construct the largest-scale fine-grained detection system reported to date. Our prediction results correlate well with ground-truth income (r=0.82), race, education, voting, sources investigating crime rates, income segregation, per capita carbon emission, and other market research. Finally, we learn interesting relationships between cars and neighborhoods, allowing us to perform the first large-scale sociological analysis of cities using computer vision techniques.

Bio: I am a PhD student in the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. My main research interest lies in data mining large-scale publicly available images to gain sociological insight, and working on computer vision problems that arise as a result. Some of these include fine-grained image recognition, scalable annotation of images, and domain adaptation. Prior to joining Fei-Fei's lab I worked at Apple designing circuits and signal processing algorithms for various Apple products including the first iPad. I also spent an obligatory year as an entrepreneur (as all Stanford undergrads seem to do). My research is supported by the NSF GRFP fellowship and currently the Stanford DARE fellowship.


May 5th, 2017

Speaker:

Title:

Abstract:

 


 

See also the AI graduate study brochure.

Please contact any of the faculty below if you'd like to give a talk this semester. We especially encourage graduate students to sign up!

Sponsored by


CS7790, Spring '17

 

Back to CS course websites