
   
Using deadlock detection with the RequestCorrelator

The RequestCorrelator class allows giving higher preference to circular requests that would cause deadlock, thus breaking potential deadlock situations. Circular requests are chains of synchronous requests in which a sender occurs more than once. For example, a synchronous request sent from P to Q which, upon reception, sends a synchronous request to R, which in turn sends a synchronous request to P, forms the call chain P $\rightarrow$ Q $\rightarrow$ R $\rightarrow$ P. When P receives the request, it cannot be processed because P sent the original request and is waiting for Q's response, which is waiting for R's response, which in turn is waiting for P's response. The circularity causes a deadlock. This is shown in Figure 5.3.


  
Figure 5.3: Circular deadlock situation

Triggered by the reception of another message (or an event) m1, P sends m2 to Q, which sends m3 to R, which sends m4 to P, causing a deadlock. However, if m1 were preempted by m4, that is, if m1 were suspended temporarily and m4 processed instead, then m4 would return (shown by the dotted lines), causing m3, m2 and m1 to return in turn, thus avoiding deadlock. Two problems are involved in this scheme: first, we have to detect when to prioritize a request, and second, we have to make sure that the ordering guarantees given by the stack are not violated by this prioritization.

The first problem is solved by tagging synchronous requests with their call chain (also called call stack): when a synchronous request is sent, the sender's address is pushed onto a call chain stack in the header generated by the RequestCorrelator, and the header is added to the message. In the example above, when m2 is received by Q, its call chain is P. Q then adds its own address to the call chain when sending m3, so that m3's call chain is P $\rightarrow$ Q, and so on. When P finally receives m4, m4's call chain is P $\rightarrow$ Q $\rightarrow$ R. On every reception of a synchronous request, the receiver checks whether its own address is already contained in the call chain. If so, the request is prioritized, that is, added to the head of the request queue; otherwise it is just added to the tail. Thus, when P receives m4, m1 is suspended, m4 is processed and a response returned; the processing of m1 then resumes. This scheme allows direct or indirect circularity in synchronous call chains to be detected, and deadlock to be avoided by selective prioritization of requests.
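The prioritization step can be sketched as follows. This is an illustrative model only, not the actual RequestCorrelator code: the class, method and field names below are invented for the example. The core idea is that a request whose call chain already contains the local address is moved to the head of the request queue, so it is processed before the suspended request that (indirectly) caused it.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only: names are hypothetical, not the JavaGroups API.
public class CallChainQueue {

    static class Request {
        final List<String> callChain; // addresses pushed by each sender in turn
        final String payload;
        Request(List<String> callChain, String payload) {
            this.callChain = callChain;
            this.payload = payload;
        }
    }

    private final String localAddress;
    private final Deque<Request> queue = new ArrayDeque<>();

    CallChainQueue(String localAddress) {
        this.localAddress = localAddress;
    }

    // A request whose call chain already contains our own address would
    // deadlock if it had to wait behind our pending request, so it jumps
    // to the head of the queue; all other requests are enqueued FIFO.
    void receive(Request req) {
        if (req.callChain.contains(localAddress))
            queue.addFirst(req);   // potential circular request: prioritize
        else
            queue.addLast(req);    // normal request: add to the tail
    }

    Request next() {
        return queue.pollFirst();
    }

    public static void main(String[] args) {
        CallChainQueue p = new CallChainQueue("P");
        // An unrelated request arrives first, then m4 with call chain
        // P -> Q -> R, i.e. a request that P itself indirectly caused.
        p.receive(new Request(List.of("X"), "m5"));
        p.receive(new Request(List.of("P", "Q", "R"), "m4"));
        System.out.println(p.next().payload); // prints m4
    }
}
```

Note that only the membership test on the call chain is needed to detect both direct and indirect circularity; the chain itself grows by one address per hop, so the check stays cheap for realistic chain lengths.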

The second problem requires examination of ordering properties. Ordering properties such as FIFO or total order guarantee that, if m1 was sent before m4, then m1 is delivered before m4. In our case there are two solutions. First, the definition of 'delivery', which in its strongest form requires a request to return in order to complete delivery, could be weakened to one which excludes returning from the request. The choice of definition is up to the application: some applications need strong ordering, whereas for others the weaker definition is sufficient. Second, a stronger case can be made when considering call chains: a call chain maintains causality for the requests contained in it. Since the processing of m1 at P indirectly caused m4, m4 can without harm interrupt m1 at P and return before it. This can be compared to a purely local execution: processing of m1 starts before processing of m4, but m4 'passes' m1 and returns first, causing the stack to unwind. Note that the causality maintained in call chains is not a potential one, but a real one: a call chain is only created when, during the processing of a synchronous request, another synchronous request is sent to a different member, thereby directly relating the previous request to the next one.

The RequestCorrelator can be created with or without deadlock detection (the default is without): there is a constructor that allows deadlock detection to be set, or method SetDeadlockDetection can be used instead. When deadlock detection is off, no call chains are added to synchronous requests, so neither processing cycles nor memory is wasted. When it is on, only synchronous requests are tagged with call chains by the RequestCorrelator, and prioritization may be used for requests causing potential deadlocks.
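In code, enabling deadlock detection might look like the following usage fragment. The constructor arguments shown are placeholders, and the exact signatures (including the casing of the setter) should be checked against the JavaGroups API documentation for your release.

```java
// Usage sketch only -- constructor arguments are placeholders; check the
// JavaGroups API docs for the exact signatures in your release.
RequestCorrelator corr = new RequestCorrelator(transport, request_handler);
corr.setDeadlockDetection(true); // off by default; when off, no call chains are tagged
```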



1999-12-13