Horus Objects




We noted the need to standardize the abstractions used by protocol modules, as well as their interfaces. Besides other objects that are outside the scope of this paper, Horus provides four classes of objects: endpoints, groups, messages, and threads. None of these, not even the group and message objects, are distributed objects; that is, each contains only state local to the process (or processor) that owns it. (Of course, they may be used to implement distributed objects.)

The endpoint object models the communicating entity. An endpoint has an address, and can send and receive messages. As we will see later, messages are not addressed to endpoints, but to groups. The endpoint address is used for membership purposes. A process may have multiple endpoints, each with its own stack of protocols.

Although a single layer may be used concurrently by many groups and many endpoints in the same process, each instance has its own state. The group object maintains this state on a per-endpoint basis. Associated with each group object are the group address to which messages are sent; a view, which is a list of endpoint addresses that represent the members of the group; and such additional information as may be needed by the layers stacked by the member that owns the endpoint. Locking mechanisms, described below, protect the group object against concurrent access, for example when threads in an application issue concurrent sends to the same group object. Since a group object is purely local, Horus allows different endpoints to have different views of the same group. Note that we use the term ``group'' to mean the set of members that communicate using a common group address, whereas the ``group object'' is a data structure local to each member, associated with that member's communication endpoint.
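The local object model might be sketched as follows. This is only an illustration of the relationships just described; the class and field names are our own, not the actual Horus interface:

```python
# Illustrative sketch of the local object model; all names are assumptions,
# not the Horus API.
class Endpoint:
    """A communicating entity with an address and its own protocol stack."""
    def __init__(self, address, stack):
        self.address = address   # used for membership, not for addressing messages
        self.stack = stack       # this endpoint's own stack of protocol layers

class Group:
    """Per-endpoint local state for one group: not a distributed object."""
    def __init__(self, group_address, view):
        self.group_address = group_address  # messages are sent to this address
        self.view = list(view)              # endpoint addresses of the members
        self.layer_state = {}               # per-layer state for this endpoint

# Because a group object is purely local, two endpoints may hold group
# objects for the same group address and yet see different views.
a = Group("grp:1", ["ep:A", "ep:B"])
b = Group("grp:1", ["ep:A", "ep:B", "ep:C"])
```

Here the two group objects share a group address but disagree on membership, which is exactly the situation Horus permits.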

The message object is a local storage structure optimized for its purpose. Its interface includes operations to push and pop protocol headers, much like a stack. This should be expected, because headers are added as message objects travel down the protocol stack in the case of sending, and are removed as they travel up in the case of delivery. The message object that is sent is different from the message object that is delivered, although, in most cases, they will contain the same data. A message object can contain pointers to data located in the address space of the application, the operating system, or even a device interface; this permits Horus to pass messages up and down a stack with no copying of the data that the message will actually transport.
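The stack-like header interface can be sketched as follows; a Python list of segments stands in for the pointers into application or kernel buffers that let real Horus messages avoid copying:

```python
# Minimal sketch of a message object with stack-like header operations.
# Names are illustrative, not the Horus interface.
class Message:
    def __init__(self, body_segments):
        self.headers = []           # header stack; the top is the end of the list
        self.body = body_segments   # references to data, never copied

    def push_header(self, header):  # used while travelling down the stack (send)
        self.headers.append(header)

    def pop_header(self):           # used while travelling up the stack (delivery)
        return self.headers.pop()

m = Message([b"payload"])
m.push_header(b"total-order")       # layer names are made up for the example
m.push_header(b"transport")
# On delivery, headers come off in the reverse of the order they were pushed:
assert m.pop_header() == b"transport"
assert m.pop_header() == b"total-order"
```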

All of the objects discussed so far maintain state only. Horus also provides thread objects, which perform computations. Horus threads are not bound to a particular endpoint, group, or message object, although a thread will often deal with at most one of each. A process typically contains multiple threads, which come into existence in a variety of ways: for example, a thread can be created explicitly by another thread, or may be created by Horus to handle an arriving message or some other event, such as a timer expiration. Threads execute concurrently and pre-emptively, using mutual exclusion to protect critical regions. Thread priorities are supported, but they raise problems such as starvation and priority inversion, and their use is discouraged.

The threaded architecture of Horus enhances performance (through increased concurrency) and simplicity (through increased code modularity). However, locking is also a source of bugs in layers developed by inexperienced thread users. This has led us to offer two very simple alternatives to standard critical sections. The first of these treats a layer as a monitor, allowing only one thread at a time to be active for each group object. The second is based on event counters, and provides a way to order threads according to an integer sequencing value: each upcall is assigned a sequence number, and threads are provided with mutual exclusion zones that will be entered in sequence order.
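The event-counter alternative can be sketched with a condition variable: each upcall carries a sequence number, and the mutual-exclusion zone admits threads strictly in sequence order. The code below is a sketch under that assumption, not the Horus implementation:

```python
import threading

class SequencedZone:
    """Admit threads to a mutual-exclusion zone in sequence order (a sketch)."""
    def __init__(self):
        self._next = 0
        self._cond = threading.Condition()

    def enter(self, seqno):
        # Block until all earlier sequence numbers have passed through the zone.
        with self._cond:
            while seqno != self._next:
                self._cond.wait()

    def leave(self):
        with self._cond:
            self._next += 1
            self._cond.notify_all()

# Upcalls that arrive out of order still enter the zone in sequence order.
zone, order = SequencedZone(), []

def upcall(seqno):
    zone.enter(seqno)
    order.append(seqno)   # the protected region
    zone.leave()

threads = [threading.Thread(target=upcall, args=(s,)) for s in (2, 0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert order == [0, 1, 2]
```

Note that, as in the monitor alternative, the layer author never takes explicit locks; the ordering discipline is imposed by the zone itself.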

We have also explored a non-threaded approach based on an event queue model. This model associates a queue of invocation parameters with each entry point to a layer. Rather than invoking a layer through a procedure call, a new event is placed on that layer's event queue. Each layer is then implemented with a single scheduling thread per endpoint, which is responsible for selecting (scheduling) an event to dequeue, and then for executing the required code. We find that this leads to much simpler code and reduced storage overhead (the stacks used by threads are much smaller).
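A minimal sketch of this event queue model, with invented names and with the per-endpoint scheduler reduced to a simple loop, might look like this:

```python
# Sketch of the event-queue model: each entry point to a layer has a queue of
# invocation parameters, and one scheduling loop per endpoint dequeues events
# and executes the required code. All names are assumptions, not the Horus API.
from collections import deque

class Layer:
    def __init__(self):
        self.queues = {"send": deque(), "deliver": deque()}
        self.log = []   # records what the layer actually executed

    def post(self, entry, params):
        # Instead of a procedure call into the layer, enqueue an event.
        self.queues[entry].append(params)

    def run(self):
        # The per-endpoint scheduling thread: pick an entry point with a
        # pending event, dequeue it, and run the corresponding code (here,
        # the "code" merely records the invocation).
        while any(self.queues.values()):
            for entry, q in self.queues.items():
                if q:
                    self.log.append((entry, q.popleft()))

layer = Layer()
layer.post("send", "m1")
layer.post("deliver", "m2")
layer.run()
```

Because only the single scheduling loop touches the layer's state, no per-event thread stacks are needed and no locking is required within the layer.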






Robbert van Renesse
Mon May 15 12:16:43 EDT 1995