Notes on

"Using Threads in Interactive Systems: A Case Study"

by Alexander Castro

Objective of Paper:

 Identify common paradigms and common mistakes when programming with threads, based on the results of examining two large research and commercial systems (Cedar and GVX).

Outline of Paper

  1. Introduction
  2. Thread Model
  3. Dynamic Thread Behavior
  4. Thread Paradigms
      1. Defer Work
      2. Pumps
      3. Sleeps and One-Shots
      4. Deadlock Avoiders
      5. Task Rebirth
      6. Serializers
      7. Concurrency
      8. Encapsulated Forks
  5. Thread Use Issues
      1. Easy Uses
      2. Hard Uses
      3. Common Mistakes
      4. Fork Failures
      5. Robustness
  6. Thread Implementation Issues
  7. Conclusion

Methods of Analysis:

    1. Macroscopic thread statistics
    2. Microsecond spacing between thread events
    3. Reading implementation code

Methods 1 and 2 made use of tools developed by the authors for thread inspection (dynamic analysis). Method 3 involved reading lines of code from the two thread-based systems (static analysis).

Thread Model:

Multiple, lightweight, pre-emptive threads sharing an address space, synchronized with monitors and condition variables (CVs).
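A minimal sketch of the monitor-plus-CV model in Python (the `EventCount` class and counter example are illustrative, not taken from Cedar or GVX): a mutex protects the shared state, and a condition variable lets a thread block inside the monitor until another thread changes the state and notifies.

```python
import threading

class EventCount:
    """A minimal monitor: a mutex protects the data, and a condition
    variable lets a thread wait until the protected state changes."""

    def __init__(self):
        self.lock = threading.Lock()
        self.nonzero = threading.Condition(self.lock)
        self.count = 0

    def increment(self):
        with self.lock:            # enter the monitor
            self.count += 1
            self.nonzero.notify()  # wake one waiting thread

    def decrement(self):
        with self.lock:
            while self.count == 0:   # re-check the condition after every wakeup
                self.nonzero.wait()  # releases the lock while blocked
            self.count -= 1
```

The `while` (rather than `if`) around `wait()` is the standard discipline for Mesa-style monitors, which is what Cedar and most modern thread libraries provide.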

Dynamic Thread Behavior

The results of the dynamic analysis led to the following conclusion: there exist a number of consistent patterns of thread use. This was followed by a static analysis, which revealed 10 paradigms of thread use (much of the paper was focused on the results of this static analysis).

10 Paradigms of Thread Use:

    1. Defer Work
    2. General Pumps
    3. Slack Processes
    4. Sleepers
    5. One-Shots
    6. Deadlock Avoidance
    7. Rejuvenation
    8. Serializers
    9. Encapsulated Fork
    10. Exploiting Parallelism

Thread Paradigms

1. Defer Work

The most common use of threads. Used to reduce latency seen by clients by forking a thread to do work not required for the procedure’s return value. Work can also be delayed until the system is under less load (an introduction to some basic uses of threads appears in [Birrell91]).

 Examples of work deferrers:

    1. forking to print a document
    2. forking to send a mail message
    3. forking to create a new window
    4. forking to update the contents of a window

Some threads are so critical to system responsiveness that they fork to defer almost any work beyond noticing what work needs to be done [acting like interrupt handlers]. The deferred work is forked at lower priority, allowing the high-priority critical thread to respond to the next event. An example of such a critical thread is the keyboard-and-mouse watching process (the notifier).
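A sketch of the pattern, assuming a hypothetical `handle_request` whose client needs only the return value, not the slow side work:

```python
import threading, time

def handle_request(log):
    """Return the answer immediately; fork the slow, non-essential work
    (printing, mailing, repainting) so the client sees low latency."""
    def deferred_work():
        time.sleep(0.1)                # stand-in for the slow work
        log.append("document printed")
    worker = threading.Thread(target=deferred_work)
    worker.start()
    return "ok", worker                # caller gets its result before the work finishes
```

The caller receives `"ok"` right away; the forked thread completes the printing in the background.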

2. Pumps

Pipeline components. Pick up input from one place, do some work on it, return it as output somewhere else.

Bounded buffers (which occur in Cedar and GVX for connecting threads together) and external devices are two common sources and sinks [accessed with system calls (read, write) and shared memory (raw screen IO and memory shared with an external X server)].

Although Birrell suggests creating pipelines to exploit parallelism on a multiprocessor, pumps are most commonly used in Cedar and GVX to simplify program structure.
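A sketch of a pump stage, with Python's `queue.Queue` standing in for the Cedar/GVX bounded buffers (the doubling "work" and the `None` shutdown sentinel are placeholders of my own):

```python
import queue, threading

def pump(source, sink):
    """Pipeline stage: pick up input from one place, do some work on it,
    and put the result somewhere else. A None sentinel shuts the stage down."""
    while True:
        item = source.get()
        if item is None:
            sink.put(None)     # pass the shutdown downstream
            return
        sink.put(item * 2)     # the per-item "work"
```

Chaining stages by sharing queues gives the pipeline structure; as the notes above say, the payoff in these systems was structural simplicity more than parallel speedup.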

3. Sleepers & One-Shots

Sleepers are processes that repeatedly wait for a triggering event (such as a timeout) and then execute. Examples include:

  1. Call this procedure every K seconds
  2. Blink the cursor in M milliseconds
  3. Check for network connection timeout every T seconds
  4. Triggering events can also be external input or callbacks from other activities

Sleepers usually don’t do much work before sleeping again (e.g. the cache manager; see paper, page 99, for details).

One-shots are sleeper processes that sleep for a while, run once, then go away (e.g. guarded buttons in Cedar; see paper for details).

4. Deadlock Avoiders

Cedar often forks to avoid violating lock order constraints. Examples include:

  1. When adjusting the boundary between two windows, forking to repaint the windows.
  2. Forking callbacks from a service module to a client module.
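A sketch of the boundary-adjustment case, with two illustrative locks of my own naming: forking the repaint means `lock_b` is never acquired while `lock_a` is held, so no lock-ordering constraint is violated.

```python
import threading

lock_a = threading.Lock()        # e.g. the window-layout lock
lock_b = threading.Lock()        # e.g. a lock the repaint code needs
repainted = threading.Event()

def repaint():
    with lock_b:                 # acquired in a fresh thread, not under lock_a
        repainted.set()

def adjust_boundary():
    with lock_a:
        # Calling repaint() inline would take lock_b while holding lock_a,
        # creating a lock-ordering hazard; fork the repaint instead.
        threading.Thread(target=repaint).start()
```

The forked thread acquires its lock with a clean slate, which is exactly what makes the avoider simple even when the surrounding locking scheme is not.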

5. Task Rejuvenation

Sometimes threads fail, and recovery is impossible within the thread itself. Task rejuvenation threads are forked for cleanup and recovery.

Task rejuvenation is a controversial paradigm. Its ability to mask underlying design problems suggests that it be used with caution.

* The same can be true of any number of coding conventions (e.g. checking HRESULT)?
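A sketch of the pattern, assuming a hypothetical flaky task: when the worker thread dies with an exception, a replacement thread is forked to retry, since recovery inside the failed thread itself is impossible.

```python
import threading

def run_with_rebirth(task, retries=3):
    """Run task in a thread; if it raises, fork a fresh thread to try again.
    Note the caveat above: this can mask the underlying design problem."""
    outcome = []
    finished = threading.Event()

    def body(remaining):
        try:
            outcome.append(task())
            finished.set()
        except Exception:
            if remaining > 0:
                # recovery happens in a *new* thread, not the failed one
                threading.Thread(target=body, args=(remaining - 1,)).start()
            else:
                finished.set()   # give up after exhausting the retries

    threading.Thread(target=body, args=(retries,)).start()
    return outcome, finished
```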

6. Serializers

A serializer is a Queue + Thread that processes work in that queue. An example of a serializer is in the window system where inputs can arrive from a number of different sources. A single thread handles the queue in order to preserve input ordering. This method is used in the Macintosh, Windows, and X programming models.
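A sketch of a serializer, queue plus a single draining thread, with a hypothetical event handler supplied by the caller:

```python
import queue, threading

class Serializer:
    """A queue plus one thread that drains it: events arriving from many
    sources are handled one at a time, in arrival order."""

    def __init__(self, handler):
        self.events = queue.Queue()
        self.handler = handler
        threading.Thread(target=self._drain, daemon=True).start()

    def put(self, event):
        self.events.put(event)       # any thread may enqueue an event

    def _drain(self):
        while True:                  # the single thread preserves ordering
            self.handler(self.events.get())
            self.events.task_done()

    def wait_idle(self):
        self.events.join()           # block until all queued events are handled
```

Because only one thread ever calls `handler`, no event is processed out of order and the handler needs no locking of its own.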

7. Concurrency Exploiters

Concurrency exploiters are threads created specifically to make use of multiple processors. Not much to talk about here.

* What are some of the issues here? Are they the same as when making use of uniprocessors?

8. Encapsulated Forks

Make thread paradigms easier to use by encapsulating them in code modules.

  1. DelayedFork (expresses one-shot paradigm)
  2. PeriodicalFork (DelayedFork that repeats at fixed intervals)
  3. MBQueue (Menu/Button Queue, encapsulates serializer paradigm)

Note: In this study, threads that may switch paradigms are counted only once.

 

Issues in Thread Use 

Must weigh the costs of threads versus the benefits of structural simplicity and concurrency.

The costs of using threads:

  1. The cost of creating a thread.
  2. The virtual memory occupied by the thread stack (which may be inefficient if there is little thread state [Draves91]).

1. Easy thread uses

Some thread uses are simpler, and therefore occur often. These include sleepers, one-shots, work deferrers, and pumps (without critical timing constraints). Deadlock avoiders are themselves simple, but the locking schemes they are involved in may be very complex.

2. Hard Thread Uses

  1. Little information is available on making use of concurrency exploiters.
  2. Slack processes, and pumps with timing constraints.

3. Using a strict priority scheduler is not always desirable (see paper for details, page 104).

  1. The time-slice quantum can have significant effects on the performance of an interactive thread system, so careful thought should be given to choosing the time-slice.

4. When a fork fails

A fork may fail due to lack of resources. The question here is: how does one recover from a resource allocation failure? One possible solution is to catch the error and recover. In practice, this technique proves too complex, and it does not adequately recover after catching the error.

Another technique is to wait in the thread until enough resources are available for allocation. However, this results in delays in response (or complete unresponsiveness) that are not explained to the user.
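A sketch of the second technique (the helper name and backoff policy are mine; `RuntimeError` is what CPython raises when a new thread cannot be started): retry the fork with backoff until it succeeds. The code also shows the objection, because the caller blocks, unresponsive, for the whole wait.

```python
import threading, time

def fork_with_retry(target, attempts=5, backoff=0.01):
    """Keep trying to fork until resources are available. While we sit
    in this loop, the caller is unresponsive -- the unexplained delay
    this technique suffers from."""
    for attempt in range(attempts):
        try:
            worker = threading.Thread(target=target)
            worker.start()
            return worker
        except RuntimeError:                     # thread could not be started
            time.sleep(backoff * (2 ** attempt)) # wait, hope resources free up
    raise RuntimeError("fork failed after %d attempts" % attempts)
```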

Robustness in a changing environment:

  1. Timeout values calibrated to processor speeds are vulnerable to processor obsolescence. One possible area of future research is dynamically tuning application timeout values based on end-to-end system performance.
  2. Thread correctness dependent on strong memory ordering?

Issues in thread implementation:

  1. The spurious lock conflict described by Birrell results in useless trips through the scheduler made by the notifyee’s processor [Birrell91]. This problem was observed on uniprocessors as well, occurring when the waiting thread has a higher priority than the notifying thread.
  2. Common mistakes: reading the code of the two systems studied revealed problems with both correctness and performance.

* People are probably used to Hoare’s original monitors.