This is another lecture where the meat is at the tail end and you need to go quickly for the first half or you just won't get there.

Start with a rapid recap: what we're doing is a series of top-to-bottom traversals of SOA architectures, with the aim of understanding fundamental questions not (yet) treated in these standards.  As we identify them, we're putting them to the side, but we'll revisit them in coming lectures.  I do this fast, maybe 45 seconds per slide at the front end.

So far we've identified several such questions: naming and discovery, replication, load-balancing.

But making a system "trustworthy" requires more than just a set of mechanisms.  It demands a new mindset.

So we take a glimpse at an ambitious US Air Force vision for the network of the future -- the Joint Battlespace Infosphere, or JBI.  (Don't try to make sense of the slides; they aren't intended for this use and can't be explained in detail; few people could possibly guess what half the acronyms mean.  The key is that each six-letter acronym is actually some massive and massively complex system, and these have been grafted together using Web Services as glue, or at least this is the goal.)

Note the tremendous similarity to the SOA concepts.  Almost as if AFRL was thinking "what if we use Web Services in military settings?".  Guess what?  They did!

But notice, second, that these will be BIG systems with BIG components in them.  It would be far too easy (and totally unrealistic) to say "we need a way to replicate such and such a service and then the system will be trustworthy".   To arrive at a trustworthy solution we need to think about the properties that a trustworthy system would exhibit!

In fact, we'll build new services using new ideas and somehow glue them to old applications over which we have little control; these will be hard to understand, poorly documented, and may have annoying behavior, like running batch-style under MVS on an old IBM 370...  We're forced to wrap them with new code that sort of insulates the Web Services client from the peculiarities of the subsystem and superimposes some forms of trust on the underlying substrate.
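The wrapping idea can be sketched as a simple adapter.  This is my own illustration, not anything from the slides: all of the names (LegacyBatchSystem, OrderService) are hypothetical, and the "legacy" class is just a stand-in for a batch subsystem we cannot change.

```python
# Sketch of the "wrapper" idea: an adapter that hides a legacy batch-style
# subsystem behind a clean, service-like interface.  All names here are
# illustrative, not from any real system.

class LegacyBatchSystem:
    """Stand-in for an old subsystem we cannot modify: it only accepts
    whole batches of jobs and returns raw batch-ordered records."""
    def run_batch(self, jobs):
        # Pretend this kicks off an overnight batch run on the mainframe.
        return [f"OK:{job}" for job in jobs]

class OrderService:
    """Wrapper that insulates Web Services clients from the legacy
    peculiarities and superimposes a little extra trust: input validation
    up front, per-request results instead of raw batch output."""
    def __init__(self, legacy):
        self._legacy = legacy
        self._pending = []

    def submit(self, order_id):
        # Validate before anything reaches the legacy code.
        if not order_id:
            raise ValueError("order id required")
        self._pending.append(order_id)

    def flush(self):
        # Translate the batch-oriented result back into per-order results.
        results = self._legacy.run_batch(self._pending)
        out = {oid: r for oid, r in zip(self._pending, results)}
        self._pending = []
        return out

svc = OrderService(LegacyBatchSystem())
svc.submit("order-17")
svc.submit("order-18")
print(svc.flush())  # {'order-17': 'OK:order-17', 'order-18': 'OK:order-18'}
```

The point for students is only the shape: the client sees submit/flush and never learns that a batch system lurks underneath.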

The lecture expands on this set of basic ideas and narrows into a focus on consistency.  I spend 25 minutes on the first half of the lecture, so I'm really going quickly on the first 30 slides or so!  You may want to trim or revise that part of the slide set to accomplish the same thing in your own style and words.

In the second part of the lecture, we first see that to pose such questions we need a model.  Two are offered; we'll mostly use the asynchronous one.  Make sure students understand that we're asking "how should a system be described?", not "how should it be implemented?".

In these models we can also ask about fault models.

But while it is easy to combine them and say, e.g., "I want to build an asynchronous service tolerant of crash failures and message loss," it may not be possible to solve every imaginable problem in the resulting model.

So we pose a question: Sam and Jill want to eat outside. We pose it in a way that happens to require learning a common-knowledge fact. This can't be done: over a lossy channel, the last message of any finite protocol might be lost, so no protocol can depend on that last message -- and the same argument then applies to the message before it, an induction that kills every candidate protocol.  The couple will probably come to a bad end.

Yet with a different formal goal we might have succeeded.  For example, by accepting some small risk that confusion will occur and they won't meet up.  Consider a telephone call: why does it work?  What happens with a cell phone if you lose your connection just as you are agreeing on something?
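A toy simulation makes the weakened goal concrete.  This is my own sketch, not from the slides; the names, the loss probability, and the retry counts are all made up.  It shows why "accept a small risk" is a workable target: each retransmission round shrinks the chance that Sam and Jill end up confused, geometrically, yet the risk never reaches zero.

```python
import random

# Sketch: Sam proposes "eat outside" over a lossy channel; Jill acks.
# Every message is independently lost with probability LOSS.  With more
# retries the chance of ending up confused shrinks geometrically -- but
# never hits zero, which is exactly the small risk we agree to accept.
LOSS = 0.3
random.seed(42)

def one_attempt(retries):
    """Return (sam_believes, jill_believes) once Sam gets an ack or gives up."""
    jill_believes = False
    for _ in range(retries):
        if random.random() >= LOSS:          # this copy of the proposal arrives
            jill_believes = True
            if random.random() >= LOSS:      # Jill's ack makes it back to Sam
                return (True, True)          # both sure; Sam stops retransmitting
    return (False, jill_believes)            # Sam gave up, still unsure

def confusion_rate(retries, trials=100_000):
    """Fraction of runs where exactly one of the two thinks they agreed."""
    confused = sum(1 for _ in range(trials)
                   if len(set(one_attempt(retries))) == 2)
    return confused / trials

for r in (1, 3, 6):
    # The printed rate drops sharply as the retry count grows.
    print(f"{r} tries: ~{confusion_rate(r):.3f} chance they end up confused")
```

This is also a decent way to answer the telephone question in class: a phone call "works" because people keep talking (retransmitting) until both sides are confident enough, and the dropped-cell-connection scenario is precisely the lost-last-ack case.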

Take-aways?
1. Big systems get "properties" from new code grafted onto old code.
2. That new code needs to behave consistently, or the big system will experience annoying outages and disruptions.
3. Consistency is an easy word to toss around, but if we want code to achieve a property we need to formalize that property.
4. People have done this in more than one way/model.
5. Our usual model is the asynchronous one, although we sometimes layer time back in.
6. In this model, some problems can be posed but simply can't be solved.  So we need to be careful what we ask for, or we just won't get it.