The Structure of Information Networks

Computer Science 6850
Cornell University
Fall 2008

  • Instructor: Jon Kleinberg

  • Time: MWF 1:25-2:15 pm.

  • Place: 109 Upson Hall.

  • Office Hours: Friday, 11:00 am - 12 noon, 5134 Upson Hall.

  • http://www.cs.cornell.edu/courses/cs6850/2008fa/

    Overview

    The past decade has seen a convergence of social and technological networks, with systems such as the World Wide Web characterized by the interplay between rich information content, the millions of individuals and organizations who create it, and the technology that supports it. This course covers recent research on the structure and analysis of such networks, and on models that abstract their basic properties. Topics include combinatorial and probabilistic techniques for link analysis, centralized and decentralized search algorithms, network models based on random graphs, and connections with work in the social sciences.

    The course prerequisites include introductory-level background in algorithms, graphs, probability, and linear algebra.

    The work for the course will consist primarily of two problem sets, a short reaction paper, and a more substantial project.

    Handouts

    Course Outline

    (1) Complex Networks and the Web

    Several things laid the groundwork for the material in this course, but two stand out in particular: the increasing availability of network data across technological, social, and biological domains; and the rise of the Web as a central object of study in computer science. We begin by surveying this background material, previewing some of the network properties we'll be studying and their consequences for our understanding of large-scale social and information networks.

    (2) Small-World Properties in Networks

    A major goal of the course is to illustrate how networks across a variety of domains exhibit common structure at a qualitative level. One area in which this arises is in the study of "small-world properties" in networks: many large networks have short paths between most pairs of nodes, even though they are highly clustered at a local level, and they are searchable in the sense that one can navigate to specified target nodes without global knowledge. These properties turn out to provide insight into the structure of large-scale social networks, and, in a different direction, to have applications to the design of decentralized peer-to-peer systems.
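
    As a concrete illustration, here is a minimal sketch (in Python) of greedy decentralized search on a two-dimensional grid in which each node is given one random long-range contact, in the spirit of the small-world models covered in this unit. The grid size, the exponent r = 2, and the single long-range contact per node are illustrative choices.

        import random

        def lattice_dist(u, v):
            # Manhattan distance between grid points u = (i, j) and v = (k, l).
            return abs(u[0] - v[0]) + abs(u[1] - v[1])

        def build_long_range_links(n, r=2.0):
            # One long-range contact per node, chosen with probability
            # proportional to d(u, v)^(-r); r = 2 is the searchable exponent
            # for a two-dimensional grid.
            nodes = [(i, j) for i in range(n) for j in range(n)]
            contacts = {}
            for u in nodes:
                others = [v for v in nodes if v != u]
                weights = [lattice_dist(u, v) ** (-r) for v in others]
                contacts[u] = random.choices(others, weights=weights, k=1)[0]
            return contacts

        def greedy_search(source, target, n, contacts):
            # At each step, forward the message to the neighbor (grid or
            # long-range) that is closest to the target in lattice distance.
            current, steps = source, 0
            while current != target:
                i, j = current
                options = [(i + di, j + dj)
                           for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= i + di < n and 0 <= j + dj < n]
                options.append(contacts[current])
                current = min(options, key=lambda v: lattice_dist(v, target))
                steps += 1
            return steps

        n = 20
        contacts = build_long_range_links(n)
        print(greedy_search((0, 0), (n - 1, n - 1), n, contacts))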

    (3) Decentralized Search in Peer-to-Peer Networks

    Decentralized search has been applied to the problem of sharing files in a peer-to-peer network without a global index. Each host in the system holds a subset of the content, and requests must be routed to the appropriate host in a decentralized fashion. As in the case of the small-world problem, the goal is a network that is easily searchable; but how should a distributed protocol shape the network topology so as to attain this goal?
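
    The following is a minimal sketch of greedy key-based routing on a ring of hosts, loosely in the spirit of structured peer-to-peer overlays; the identifier space, the random long-range "fingers," and the host count are illustrative assumptions rather than any particular system's protocol.

        import random

        ID_SPACE = 2 ** 16  # illustrative circular identifier space

        def ring_dist(a, b):
            # Clockwise distance from identifier a to identifier b on the ring.
            return (b - a) % ID_SPACE

        def build_overlay(num_hosts, fingers_per_host=4):
            # Each host receives a random identifier and knows its ring
            # successor plus a few random long-range fingers.
            ids = sorted(random.sample(range(ID_SPACE), num_hosts))
            links = {}
            for idx, h in enumerate(ids):
                succ = ids[(idx + 1) % len(ids)]
                fingers = random.sample(ids, fingers_per_host)
                links[h] = (set(fingers) | {succ}) - {h}
            return ids, links

        def route(start, target, links):
            # Greedy routing: forward the request to the known host with the
            # smallest clockwise distance to the target.  The successor link
            # guarantees progress, so the request always reaches the target.
            current, hops = start, 0
            while current != target:
                current = min(links[current], key=lambda h: ring_dist(h, target))
                hops += 1
            return hops

        ids, links = build_overlay(200)
        print(route(ids[0], ids[100], links))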

    (4) Cascading Behavior in Networks

    We can think of a network as a large circulatory system, through which information continuously flows. This diffusion of information can happen rapidly or slowly; it can be disastrous -- as in a panic or cascading failure -- or beneficial -- as in the spread of an innovation. Work in several areas has proposed models for such processes, and investigated when a network is more or less susceptible to their spread. This type of diffusion or cascade process can also be used as a design principle for network protocols. This leads to the idea of epidemic algorithms, also called gossip-based algorithms, in which information is propagated through a collection of distributed computing hosts, typically using some form of randomization.
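
    To make this concrete, here is a minimal sketch of the independent cascade model, one standard way of modeling such diffusion: when a node first becomes active it gets a single chance to activate each still-inactive neighbor, succeeding with some probability p. The toy graph (a ring with random chords) and the value of p are illustrative choices.

        import random

        def independent_cascade(neighbors, seeds, p):
            # Independent cascade: when a node first becomes active it gets
            # one chance to activate each still-inactive neighbor, succeeding
            # independently with probability p.  Returns the final active set.
            active = set(seeds)
            frontier = list(seeds)
            while frontier:
                next_frontier = []
                for u in frontier:
                    for v in neighbors[u]:
                        if v not in active and random.random() < p:
                            active.add(v)
                            next_frontier.append(v)
                frontier = next_frontier
            return active

        # Toy network: a ring of 100 nodes with 50 random chords added.
        n = 100
        neighbors = {u: {(u - 1) % n, (u + 1) % n} for u in range(n)}
        for _ in range(50):
            a, b = random.randrange(n), random.randrange(n)
            if a != b:
                neighbors[a].add(b)
                neighbors[b].add(a)

        print(len(independent_cascade(neighbors, seeds={0}, p=0.3)))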

    (5) Power-Law Distributions

    The degree of a node in a network is the number of neighbors it has. For many large networks -- including the Web, the Internet, collaboration networks, and semantic networks -- the fraction of nodes with very high degrees is much larger than one would expect based on "standard" models of random graphs. The particular form of the distribution -- the fraction of nodes with degree d decays like d to some fixed power -- is called a power law. What processes are capable of generating such power laws, and why should they be ubiquitous in large networks? The investigation of these questions suggests that power laws are just one reflection of the local and global processes driving the evolution of these networks.
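
    To make the generative question concrete, here is a minimal sketch of a preferential-attachment growth process, one standard mechanism known to produce power-law degree distributions: each new node links to m existing nodes chosen with probability proportional to their current degrees. The graph size and the value of m are illustrative.

        import random
        from collections import Counter

        def preferential_attachment(n, m=3):
            # Grow a graph one node at a time; each new node attaches to m
            # distinct existing nodes chosen with probability proportional to
            # their current degree (sampling from the list of edge endpoints
            # implements "proportional to degree").
            endpoints, edges = [], []
            for new in range(m, n):
                if new == m:
                    targets = list(range(m))
                else:
                    targets = set()
                    while len(targets) < m:
                        targets.add(random.choice(endpoints))
                    targets = list(targets)
                for t in targets:
                    edges.append((new, t))
                    endpoints.extend([new, t])
            return edges

        degree = Counter()
        for u, v in preferential_attachment(20000):
            degree[u] += 1
            degree[v] += 1

        # The fraction of nodes with degree d falls off roughly as a power of d.
        counts = Counter(degree.values())
        for d in sorted(counts)[:10]:
            print(d, counts[d] / len(degree))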

    (6) Economic Models for Behavior in Networks

    In order to model the interaction of agents in a large network, it often makes sense to assume that their behavior is strategic -- that each agent operates so as to optimize his/her/its own self-interest. The study of such systems involves issues at the interface of algorithms and game theory. A central definition here is that of a Nash equilibrium -- a state of the network from which no agent has an incentive to deviate -- and recent work has studied how well a system operates when it is in a Nash equilibrium.
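
    As a small worked example, here is a sketch of best-response dynamics in a toy game where each agent routes one unit of traffic on one of two parallel links and experiences delay equal to its link's load divided by the link's speed; the speeds and number of agents are illustrative. The dynamics stop exactly when the assignment is a Nash equilibrium, i.e., when no single agent can lower its own delay by switching links.

        def best_response_dynamics(num_agents=10, speeds=(1.0, 2.0)):
            # Each agent uses one of two parallel links; its delay is the load
            # on its link divided by the link's speed.  Repeatedly let agents
            # switch to a cheaper link; when no agent can improve on its own,
            # the assignment is a Nash equilibrium.
            choice = [0] * num_agents          # everyone starts on link 0

            def delay(link, load):
                return load / speeds[link]

            improved = True
            while improved:
                improved = False
                for agent in range(num_agents):
                    cur, other = choice[agent], 1 - choice[agent]
                    load = [choice.count(0), choice.count(1)]
                    cost_now = delay(cur, load[cur])
                    cost_switch = delay(other, load[other] + 1)
                    if cost_switch < cost_now - 1e-12:
                        choice[agent] = other
                        improved = True
            return choice

        final = best_response_dynamics()
        print(final.count(0), "agents on link 0,", final.count(1), "on link 1")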

    (7) Link Analysis for Web Search

    Link structure can be a powerful source of information about the underlying content in the network. In the context of the Web, we can try to identify high-quality information resources from the way in which other pages link to them; this idea has reflections in the analysis of citation data to find influential journals, and in the analysis of social networks to find important members of a community. From a methodological point of view, current approaches to link analysis on the Web make extensive use of methods based on eigenvalues and eigenvectors.
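
    One such eigenvector-based method is the hubs-and-authorities computation, sketched below in minimal form; the toy link structure is an illustrative assumption. Repeatedly applying the adjacency matrix and its transpose drives the two score vectors toward the principal eigenvectors of A A^T and A^T A.

        import numpy as np

        def hits(adj, iterations=50):
            # Hubs-and-authorities iteration on a directed adjacency matrix A,
            # where A[i, j] = 1 if page i links to page j.  Repeatedly setting
            # a <- A^T h and h <- A a (with normalization) converges to the
            # principal eigenvectors of A^T A and A A^T respectively.
            n = adj.shape[0]
            hubs = np.ones(n)
            auths = np.ones(n)
            for _ in range(iterations):
                auths = adj.T @ hubs
                auths /= np.linalg.norm(auths)
                hubs = adj @ auths
                hubs /= np.linalg.norm(hubs)
            return hubs, auths

        # Toy link structure: pages 0-2 all point to pages 3 and 4.
        A = np.zeros((5, 5))
        A[0:3, 3] = 1
        A[0:3, 4] = 1
        hubs, auths = hits(A)
        print("hub scores:      ", hubs.round(3))
        print("authority scores:", auths.round(3))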

    (8) Spectral Analysis

    The previous discussion of link analysis provides one glimpse into the power of spectral analysis for networks: using information obtained from the eigenvalues and eigenvectors of their adjacency matrices. This is also a powerful technique in data analysis more generally. However, the connection between spectral parameters and the more combinatorial properties of networks and datasets is a subtle issue, and while many results have been established about this connection, it is still not fully understood.
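
    As one concrete instance, the sketch below partitions a small graph by the sign pattern of the second eigenvector of its Laplacian (the Fiedler vector), a standard spectral heuristic for finding a sparse cut; the toy graph of two cliques joined by a bridge is an illustrative assumption.

        import numpy as np

        def spectral_bisection(adj):
            # Split a graph into two groups using the eigenvector of the graph
            # Laplacian L = D - A associated with its second-smallest
            # eigenvalue (the Fiedler vector); the signs of its entries give
            # the two sides of the cut.
            degrees = adj.sum(axis=1)
            laplacian = np.diag(degrees) - adj
            eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending order
            fiedler = eigvecs[:, 1]
            return fiedler >= 0

        # Toy graph: two 4-node cliques joined by a single bridge edge.
        n = 8
        A = np.zeros((n, n))
        for group in (range(0, 4), range(4, 8)):
            for i in group:
                for j in group:
                    if i != j:
                        A[i, j] = 1
        A[3, 4] = A[4, 3] = 1

        print(spectral_bisection(A))   # one clique True, the other False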

    (9) The Time Axis

    Information networks are highly dynamic, but it is often hard to form a good picture of how they are evolving along their "time axis." Temporal change spans many orders of magnitude, from the second-by-second dynamics of usage data to the year-by-year shifts in the global structure.

    (10) Clustering, Classification, and Community Structures

    Clustering is one of the oldest and most well-established problems in data analysis; in the context of networks, it can be used to search for densely connected communities. A number of techniques have been applied to this problem, including combinatorial and spectral methods. A task closely related to clustering is the problem of classifying the nodes of a network using a known set of labels. For example, suppose we wanted to classify Web pages into topic categories. Automated text analysis can give us an estimate of the topic of each page; but we also suspect that pages have some tendency to be similar to neighboring pages in the link structure. How should we combine these two sources of evidence? A number of probabilistic frameworks are useful for this task, including the formalism of Markov random fields, which -- for quite different applications -- has been extensively studied in computer vision.
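
    The sketch below illustrates one simple way to combine the two sources of evidence: a relaxation-style iteration in which each page's topic score is a weighted average of its text-based prior and the current scores of its neighbors. The page names, priors, and weight are hypothetical, and this is a lightweight stand-in for, not an implementation of, the Markov random field formalism mentioned above.

        def relaxation_label(neighbors, text_prior, weight=0.5, iterations=20):
            # Each page's score for a topic is a weighted combination of its
            # text-based prior and the average current score of its neighbors
            # in the link structure.
            belief = dict(text_prior)
            for _ in range(iterations):
                new_belief = {}
                for page, prior in text_prior.items():
                    nbrs = neighbors[page]
                    nbr_avg = (sum(belief[v] for v in nbrs) / len(nbrs)
                               if nbrs else prior)
                    new_belief[page] = weight * prior + (1 - weight) * nbr_avg
                belief = new_belief
            return belief

        # Hypothetical example: page "d" has ambiguous text (prior 0.5) but
        # links only to pages whose text looks strongly on-topic.
        neighbors = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"},
                     "d": {"a", "c"}, "e": set()}
        text_prior = {"a": 0.9, "b": 0.8, "c": 0.85, "d": 0.5, "e": 0.1}
        scores = relaxation_label(neighbors, text_prior)
        print({p: round(s, 2) for p, s in scores.items()})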

    Network Datasets

    There are a number of interesting network datasets available on the Web; they form a valuable resource for trying out algorithms and models across a range of settings.

    Lectures

    Here is a list of the main topics for each lecture. The topics are based on a mixture of content from papers in the outline above; a paper (in many cases a survey) providing the conceptual starting point for each lecture is given as part of the list.