Notes for instructors teaching from my slide set:
First: A meta-comment. I teach this course in two ways and am developing two slide sets. As an instructor working from my materials, you'll need to decide which approach would work better for you. Most instructors would probably be best off opting for the "other" slide set, which should be posted to this site by midsummer of 2005.
They differ in the following respect: this set deviates from the order of material in the book and pulls in material not covered there, while the other slide set follows the book quite closely. Although the two sets share much content, and many slides are in fact identical, the other set's order of presentation runs pretty much chapter by chapter, section by section through the book, skipping some topics that the book covers in more detail but including little material that is missing from the book entirely. References for the extra material in this set can be found on the "topics" page.
This slide set, to restate my point, corresponds to a somewhat tougher course (from the student's perspective, and from the instructor's too!). Not only is the lecturer expected to bring his or her own extra perspective (beyond the book) to the table, but some lectures are also tough in pacing: they require skipping quickly over lots of material in order to save time to drill down on one topic or another that demands more intellectual work. Sometimes they do this in ways that you may find unreasonable. You'll find that when I give lectures, I sometimes flash a slide with text on it, say very little about it, and move on. I do this because some slides just sum up what we just said; they are there for student notes, not to be read line by line. If you DO try to read such a slide line by line, the content will feel repetitious and you'll run out of time.
An example: when tackling the question of consistency in Lecture 09 (have a look), the slide set spends perhaps five slides on the Air Force JBI system. The reason is to give the students two insights. First, big systems are VERY big: they have all sorts of legacy components that are basically just bolted in with adapters of some form, and for these reasons they can be almost incomprehensible. Second, organizations like the Air Force have nonetheless started to buy into SOA/SoS concepts very seriously and are building such systems. Indeed, for students entering the workforce today, these are the kinds of settings in which the most job opportunities are emerging.
Another example: the first five slide sets spend a lot of time discussing the architecture of a typical large data center, like you might find at Amazon.com. But you can't find research papers on such systems or even good-quality "best practice" papers about them. I've learned by talking to people who build and operate them, like Amazon's Werner Vogels (CTO) and Jacob Gabrielson (who previously held a similar title). But nobody seems to write this kind of thing down. I have no idea if other lecturers will be able to present this material, but for me it works well.
If you do present this material, expect to be faced with slides literally buried in acronyms. Those Air Force JBI slides are full of obscure military subsystem names; I don't have any idea what most of them are. Clearly, if you try to present that sort of slide slowly you'll face big problems, unless you do it the way I do here at Cornell: by emphasizing that the slide on the screen is a mile-high glimpse of your life next year if you take a job working on a big data center -- whether at Citigroup, or eBay, or Lockheed Martin. Big "systems of systems" are the story these days. The people building them rarely know precisely how the subsystems work and often don't even know what some subsystems do.
One consequence is that to cover these early lectures in 75 minutes you really need to blast through the first 30 slides or so (for example, the JBI slides I just mentioned get about 30 seconds each when I cover that part of the slide set). Otherwise you won't have time for the real core of the lecture, which relates to the idea that we need to pin down what consistency means, that it often means replication, and that some forms of consistency just can't be achieved.
Why do I use this slide set, given the challenges of timing, pace, and content? Part of the reason is that I wrote the book, and teaching directly from it in the identical order is rather dull for me; the students feel as if I am reading it to them. A second reason is that to keep the length sane (and hence the price down), I had to work within a limited page budget. As a consequence the book omits some topics, like checkpointing and rollback, simply for lack of pages. Scalable architecture for cluster computing in data centers is a topic omitted for lack of a mature perspective: this is happening, but it is too early to know how to teach the best-practice solutions, so the book leaves the topic out for now. Yet these topics are still worth at least touching upon, and they create a nice context for me as a lecturer to give examples of things students might need to do -- motivation for learning how to do them.
I can pull this off; after all, these are my slides. But you may find it harder to work with this slide set than with the other one, since the other slides parallel the book so closely. If you feel slightly uncertain about the topic, please use the other set of slides! Alternatively, consider starting with my slides but then replacing some of the material with material of your own that can serve similar purposes but will feel very natural to you.
Not only the content but also the ordering of topics in this slide set can pose challenges. For example: I find that if I do a linear development of replication, I need to devote five weeks or so to the topic. I used to do this but students complained that the subject felt remote from industry practice. Yet the material matters. So, to avoid this loss of connection to real systems, I made a decision to interleave real-world material with deeper topics, as a series of top-to-bottom traversals of the Web Services concept.
Thus I start at the top and dive down to TCP (Lecture 2). Then we start back at the top and dive down to naming and discovery issues, leading to the suggestion (by Balakrishnan) that a new set of standards may be needed in Web Services. Next, we do it again with a focus on performance optimization and tuning (a topic not covered adequately in the book, due to length concerns). Then back to the top and down to consistency issues. Then on to the transactional model, one of the main "stories" currently available for people who want trustworthy, consistent systems. Then agreement protocols, notably 2PC. A brief detour shows how these ideas can be used in real systems. Then back down to replication, and finally a return to real systems to show how replication can be used.
For me, this works -- in fact it works very well. The key to it is that not only do I know the overall story ("wherever you start, you'll hit some tough problems that go beyond the edge of CORBA or Web Services... and we can learn solutions and learn to apply them in your applications when you need to do so") but also, I share the story with the students repeatedly, to help them understand what we are doing: exploring a very rich problem/solution space a chunk at a time.
But if you plan to use this slide set, you need to feel good about that story too. You need to point out to the students that you are making this traversal -- from practical need, down to fundamental mechanism, and back up to see it applied in a practical setting -- again and again. Otherwise some students will lose their sense of orientation, find the course confusing, and complain that you jump around (they don't complain when I do it, because they understand what I am doing). You'll see that many of my slide sets contain a "recap", and some do so repeatedly; this is why.
So this slide set may not work for everyone. If you feel concerned and prefer to stick to the development ordering seen in the textbook, use the second set.
As noted, that second set should be up by midsummer of 2005.
In this first lecture, your goal is simply to touch upon the major themes of the course. When teaching I tend to go rapidly but lightly over the Web Services material, and to emphasize that CORBA is an equally important service-oriented architecture (SOA); both are used to create complex "systems of systems" (SoS) and data centers.
Lecture 2 presents some of the major pieces of the Web Services architecture. In my Cornell course I just don't have time to delve deeply into these or to cover CORBA even at the same superficial level. It also means that Lecture 1 doesn't need to be at all detailed about Web Services or CORBA.
Thus in Lecture 1, your emphasis needs to be on the fundamental question posed in the second half of the material, where I describe two ways of building an air traffic control system and ask what issues one confronts.
IBM invented a whole new way of programming: componentized, with a scheme for fault-tolerance based on component replication. But the methodology was a failure, perhaps because programmers found it unnatural, perhaps because the multicast was very, very slow, or perhaps because it required deterministic components. We'll touch on that issue again in Lecture 5. In a later echo of the same error, CORBA replicated objects in its fault-tolerance specification... and the industry ultimately didn't accept that standard. Component replication has simply been a failure.
In contrast, the French system replicates data -- not entire components. A normal-looking event-styled program can "join" one or more groups associated with some replicated object, like "status of airplanes in section D-6"; group members receive the data as of when they join (state transfer) and then see subsequent updates (atomic multicast). This was a success: a natural programming model, and also a high-performance multicast.
It would be good to spend time on this point. It may not be obvious to a student that we can replicate a whole program (the state machine approach) or just a variable (the process group or publish-subscribe approach), so that programs that are not "replicas" of one another nonetheless own replicas of that specific variable. This is an easy idea once you explain it, but it does require explanation. It represents a non-trivial fork in the road: State Machines, Paxos, and Consensus all picked the component replication approach, and yet we know it was unsuccessful in the most dramatic attempt to use it to date -- $6B was lost on the project. Moreover, you end up with unreasonably slow solutions.
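If you want to make the contrast concrete on the board, a toy sketch can help. The following Python fragment is purely illustrative -- ProcessGroup, Member, and their methods are invented names, not any real group communication API -- but it captures the replicated-data style: two programs that are not replicas of one another join a group for a single shared variable, receive its state at join time, and then see updates in one common order.

```python
# Toy sketch of the "replicated data" model. Independent programs join a
# group associated with one replicated variable: a joiner first receives
# the current state (state transfer), then every member applies later
# updates in the same order (standing in for atomic multicast).
# All class and method names here are invented for illustration.

class ProcessGroup:
    """In-process stand-in for a group communication layer."""
    def __init__(self, name, initial_state):
        self.name = name
        self.state = initial_state
        self.members = []

    def join(self, member):
        # State transfer: the joiner starts from the state as of the join.
        member.replica = dict(self.state)
        self.members.append(member)

    def multicast_update(self, key, value):
        # Every member applies the update in one agreed order, so all
        # replicas of this one variable stay identical.
        self.state[key] = value
        for m in self.members:
            m.replica[key] = value

class Member:
    def __init__(self, role):
        self.role = role      # members need not be replicas of each other...
        self.replica = None   # ...they merely share this one variable

group = ProcessGroup("status of airplanes in section D-6", {"AF123": "on course"})
display = Member("radar display")   # two quite different programs
logger = Member("flight logger")
group.join(display)
group.join(logger)
group.multicast_update("AF123", "rerouted")
assert display.replica == logger.replica == {"AF123": "rerouted"}
```

The point of the sketch is the fork in the road itself: the display and the logger are different programs, yet they hold identical replicas of the one variable they both joined.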
Replicated data, in contrast, can be blindingly fast (literally tens or hundreds of thousands of updates per second, if the code is heavily tuned). And the models mentioned, like publish-subscribe or process groups, turn out to be remarkably natural and easy to use. This is a big deal, and projects that went down this path have been very successful. Not in the sense of transactions -- you don't find these mechanisms everywhere you look -- but they are common in commercial platforms, and systems like IBM WebSphere are adopting them.
But notice that not just any replication scheme will suffice. The remainder of the lecture talks about the consistency needs we encounter in these systems, focusing on a primary-backup server scenario (also a form of replication) in which a split-brain problem can arise: the two servers behave inconsistently. Clearly we must find ways to make replication and fault-tolerance immune to such problems!
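For instructors who want the split-brain hazard pinned down concretely, here is a minimal toy sketch (all names are invented; no real failure-detection or replication machinery is involved). It shows the essential trap: the backup cannot distinguish a crashed primary from a partitioned one, so a wrong guess leaves two servers both acting as primary, with divergent state.

```python
# Toy illustration of split brain in primary-backup replication.
# A backup that mistakes a network partition for a primary crash
# promotes itself, so two primaries accept writes independently.
# All names here are invented for illustration.

class Server:
    def __init__(self, name):
        self.name = name
        self.is_primary = False
        self.value = 0

    def handle_write(self, value):
        # Only a server that believes it is primary accepts writes.
        if self.is_primary:
            self.value = value
            return True
        return False

primary = Server("A")
backup = Server("B")
primary.is_primary = True

# A partition cuts the link between A and B. B's timeout-based failure
# detector cannot tell "A crashed" from "link down", so B promotes
# itself -- and now two primaries exist.
backup.is_primary = True

# Clients on opposite sides of the partition each reach "the" primary:
primary.handle_write(1)
backup.handle_write(2)

# The two halves of the system now disagree: split brain.
assert primary.value != backup.value
```

This is exactly the inconsistency the lecture's air traffic control scenario warns about; the cure, developed later in the course, is replication machinery with agreement on membership rather than ad hoc timeouts.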
At Cornell, I deliver this lecture in about 65 minutes. Our lecture slots are 75 minutes and I use the last 10 to allow students to form groups for work on the course project.