Comments on Architectural Considerations for a New Generation of Protocols

David D. Clark and David L. Tennenhouse. Comments on Architectural Considerations for a New Generation of Protocols. In Proceedings of the 1990 SIGCOMM Symposium on Communications Architectures and Protocols, pp. 200-208, September 1990.

Notes by Snorri Gylfason, March 26, 1998


Background

A protocol is a set of rules and formats that two or more peers agree upon to enable meaningful communication between them. Normally we have a stack of communication protocols, each providing some specific service.
In the literature it is customary to use the ISO-OSI model to describe the classical protocol layering. It has seven layers: physical, data link, network, transport, session, presentation, and application.

Goals and motivation

Future networks will have considerably greater capacity, and there is some concern that existing protocols will become a bottleneck. There has been no consensus on the real source of protocol overhead; this paper tries to analyze the problem.
With new technology and applications we will see new traffic patterns. Future protocols must be flexible enough to allow this to happen.

There is good reason to design and present a protocol architecture using a layered model, but we must be careful not to require the same layering in the implementation. Strict layering in an implementation can result in poor performance.

Control vs. Data Manipulation

We can classify the functions of a protocol into two classes: data manipulation and transfer control. Data manipulation functions operate on the data of a packet - for example copying between application address spaces, error detection, encryption, etc. Transfer control functions operate on the meta-data of a packet - for example relaying packets, flow control, multiplexing, framing, etc.

Even though transfer control may include some time-consuming operations (like table lookups), it is usually much cheaper than data manipulation, which must typically touch every byte of the data. Since data manipulation usually costs more, we should concentrate on reducing the overhead there.
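The cost asymmetry can be sketched in a few lines of Python. The packet format here (a 2-byte sequence number and 2-byte length header) is my own assumption for illustration, not a format from the paper: the control function reads only the fixed-size header, while the checksum must visit every payload byte.

```python
import struct

# Toy packet format (an assumption for illustration, not from the paper):
# 2-byte sequence number, 2-byte length, then the payload.
HEADER = struct.Struct("!HH")

def transfer_control(packet: bytes):
    """Control function: reads only the fixed-size header,
    so its cost is independent of the payload size."""
    seq, length = HEADER.unpack_from(packet)
    return seq, length

def checksum(packet: bytes) -> int:
    """Data-manipulation function: must touch every byte."""
    total = 0
    for b in packet:
        total = (total + b) & 0xFFFF
    return total

pkt = HEADER.pack(7, 5) + b"hello"
print(transfer_control(pkt))   # (7, 5)
print(checksum(pkt))
```

As the payload grows, only the second function's cost grows with it, which is the paper's argument for attacking data-manipulation overhead first.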

Recall that data manipulation takes place mainly in the presentation layer.

Presentation Processing

In this chapter the authors point out that presentation conversion must be done in the context of the application. Sometimes the application can handle packet loss or mis-ordering of packets better than the transport layer can. For instance, in a file-transfer application mis-ordering does not matter as long as the application knows about it, and lost packets should not delay the packets that follow.

The problem is the presentation layer in between the transport layer and the application. Suppose, for example, we are transferring a file and one packet is lost. If the data in the packets is encrypted, we might not be able to decrypt the following packets until we receive the lost packet. The authors introduce a solution to this: Application Data Units (ADUs), logical units of data on which the application can perform presentation conversion independently of other ADUs.
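A minimal sketch of the ADU idea, with the framing and the single-byte XOR "cipher" being my own toy assumptions rather than anything from the paper: each ADU carries enough context (sequence number and byte offset) for the application to process it on its own, so a lost or reordered ADU does not stall the others.

```python
KEY = 0x5A  # toy single-byte XOR "cipher", purely illustrative

def make_adu(seq: int, offset: int, data: bytes) -> dict:
    """Frame one Application Data Unit; encryption is per-ADU,
    so decrypting one ADU never depends on a previous one."""
    return {"seq": seq, "offset": offset,
            "payload": bytes(b ^ KEY for b in data)}

def process_adu(adu: dict, out: bytearray) -> None:
    """Decrypt and place one ADU independently, in any arrival order."""
    data = bytes(b ^ KEY for b in adu["payload"])
    out[adu["offset"]:adu["offset"] + len(data)] = data

file_data = b"abcdefgh"
adus = [make_adu(i, i * 4, file_data[i * 4:(i + 1) * 4])
        for i in range(2)]
result = bytearray(len(file_data))
for adu in reversed(adus):      # ADUs delivered out of order
    process_adu(adu, result)
print(bytes(result))            # b'abcdefgh'
```

The point of the design is that presentation conversion (here, decryption) is scoped to one ADU, so the application can consume whatever arrives instead of waiting for a retransmission.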

Integrated Layer Processing

Layering has proven useful in architectural modeling, to isolate necessary functionality and message semantics. A naive implementation preserves the same layering, in the same order, which sometimes results in poor performance. Unfortunately, existing protocol suites often impose ordering constraints that limit the possibilities in the implementation.
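The contrast between a layered implementation and an integrated one can be sketched as follows. This is my own toy example, not code from the paper, and in Python the single loop is not actually faster; it only illustrates the structural idea of touching each byte once instead of once per layer.

```python
def layered(src: bytes):
    """Naive layered implementation: one pass over the data
    per data-manipulation function."""
    copy = bytearray(src)          # pass 1: copy the data
    csum = sum(copy) & 0xFFFF      # pass 2: compute a checksum
    return bytes(copy), csum

def integrated(src: bytes):
    """ILP-style implementation: copy and checksum folded into
    a single pass, touching each byte once."""
    copy = bytearray(len(src))
    csum = 0
    for i, b in enumerate(src):
        copy[i] = b
        csum = (csum + b) & 0xFFFF
    return bytes(copy), csum

data = b"integrated layer processing"
assert layered(data) == integrated(data)
```

On real hardware the integrated loop wins mainly by avoiding repeated trips through memory, which is exactly the kind of reordering that rigid layer boundaries can rule out.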

Discussion points