The Readers/Writers problem is a classic mutual-exclusion synchronization problem with many known solutions. It arises when multiple readers and writers access the same shared data structure, and the writers require mutual exclusion to keep the data consistent. An extremely simple solution is to treat readers and writers identically and lock the shared data structure on every access. This is a poor solution, since readers do not require mutual exclusion and can access the shared data concurrently. From Silberschatz and Galvin, we know that a good solution to this problem must satisfy mutual exclusion of the writers, progress, and bounded waiting. Yet, in certain environments, the bounded-waiting requirement may be relaxed in order to maximize throughput (i.e., prefer readers) or to ensure the most up-to-date data is available (i.e., prefer writers).
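The "simple but poor" solution above can be sketched as follows. This is a hedged illustration, not the assignment's actual code: the grid size, function names, and use of a single pthread mutex for both readers and writers are all assumptions made for the example.

```c
#include <pthread.h>

#define N 5                     /* illustrative grid size */
static int grid[N][N];
static pthread_mutex_t grid_lock = PTHREAD_MUTEX_INITIALIZER;

/* A "reader": sums every cell. Under this naive scheme readers take the
 * same exclusive lock as writers, so concurrent reads are serialized
 * needlessly -- exactly the weakness described above. */
long read_grid_sum(void) {
    long sum = 0;
    pthread_mutex_lock(&grid_lock);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += grid[i][j];
    pthread_mutex_unlock(&grid_lock);
    return sum;
}

/* A "writer": swaps two cells under the same lock. Note that a swap
 * never changes the grid's total sum, which is the invariant readers
 * check. */
void swap_cells(int r1, int c1, int r2, int c2) {
    pthread_mutex_lock(&grid_lock);
    int t = grid[r1][c1];
    grid[r1][c1] = grid[r2][c2];
    grid[r2][c2] = t;
    pthread_mutex_unlock(&grid_lock);
}
```

Because swapping preserves the sum, any reader that observes a changed sum has caught a consistency violation, which is why locking the writers matters even in this crude version.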
This assignment is an extension of the last assignment. The code base is very similar, but a few updates and changes have been made. The grid is still user-definable of size n x n, and the only operations on the grid are to swap two elements and to read the grid. Every thread is either a reader executing readGrid(int sleepTime) or a writer executing writeGrid(int sleepTime). The writers swap elements, and the readers read the grid and verify that the sum is unchanged. The parameter sleepTime is the number of seconds the reader or writer will sleep while accessing the grid.
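The shape of the two thread bodies might look something like the sketch below. The locking calls are placeholders (your chosen granularity supplies the real ones), and all names other than readGrid, writeGrid, and sleepTime are illustrative assumptions.

```c
#include <unistd.h>   /* sleep() */

/* Placeholder hooks; a real implementation substitutes the entry/exit
 * protocol of whichever granularity is selected. */
void reader_enter_stub(void) {}
void reader_exit_stub(void)  {}
void writer_enter_stub(void) {}
void writer_exit_stub(void)  {}

int  sum_grid_stub(void)           { return 0; }   /* stand-in for the real read */
void swap_two_elements_stub(void)  {}              /* stand-in for the real swap */

/* Reader: read the grid, verify the sum, sleeping sleepTime seconds
 * while holding read access. */
void readGrid(int sleepTime) {
    reader_enter_stub();
    int sum = sum_grid_stub();
    if (sleepTime > 0)
        sleep((unsigned)sleepTime);     /* simulate a long read */
    /* ... verify sum is unchanged ... */
    (void)sum;
    reader_exit_stub();
}

/* Writer: swap two elements, sleeping sleepTime seconds while holding
 * exclusive access. */
void writeGrid(int sleepTime) {
    writer_enter_stub();
    swap_two_elements_stub();
    if (sleepTime > 0)
        sleep((unsigned)sleepTime);     /* simulate a long write */
    writer_exit_stub();
}
```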
There are now new granularities: -none, -exclusive, -preferReaders, -preferWriters, and -fair.
The type of thread (reader or writer), the lock duration, and the creation order of the readers and writers are defined in a control file which is specified at the command line. The format of the file is very specific. Each line consists of three fields: R or W (the thread type), the sleepTime passed to readGrid or writeGrid, and the number of seconds to wait after creating this thread before creating the next one. For example:

R 10 0
W 3 6
W 2 1
R 0 5
This creates a reader and a writer immediately. The reader sleeps for 10 seconds inside readGrid(int), and the writer sleeps for 3 seconds inside writeGrid(int). Six seconds after the first writer is created, the second writer is created; it will sleep for 2 seconds inside writeGrid(int). Finally, 1 second after the second writer is created, the second reader is created, and it will not sleep in readGrid(int).
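A control-file line in the format above can be parsed with a few lines of C. This is a hedged sketch: the struct, field names, and function name are illustrative assumptions, not part of the given code base.

```c
#include <stdio.h>

/* One line of the control file: "<R|W> <sleepTime> <delayBeforeNext>". */
typedef struct {
    char type;        /* 'R' for reader, 'W' for writer */
    int  sleep_time;  /* seconds to sleep inside readGrid/writeGrid */
    int  delay;       /* seconds to wait before creating the next thread */
} ThreadSpec;

/* Returns 1 on a well-formed line, 0 otherwise. */
int parse_control_line(const char *line, ThreadSpec *spec) {
    if (sscanf(line, " %c %d %d",
               &spec->type, &spec->sleep_time, &spec->delay) != 3)
        return 0;
    return spec->type == 'R' || spec->type == 'W';
}
```

The driver would call this once per line, create the thread with the parsed sleepTime, then sleep for the parsed delay before reading the next line.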
The program is invoked as:

readwrite gridsize control_filename -granularity

For example:

readwrite 10 PREFR -fair
The only working granularity in the given code is -none. It provides a floor for performance measurements. The -exclusive granularity provides a ceiling for performance measurements. You will need to implement all of the granularities and create three control files, each demonstrating a case where -preferReaders, -preferWriters, or -fair, respectively, performs best in terms of minimum execution time. Depending on your actual implementation of each algorithm, this may not be possible in every case. If it is not, argue why not in your write-up. The files must create at least 5 readers and 5 writers.
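As one example of the granularities to be implemented, a -fair policy could be approximated with a "turnstile" mutex that every thread, reader or writer, passes through in arrival order, in addition to the usual reader count and writer flag. This is only a sketch of one possible design under stated assumptions; the type and function names are illustrative, and teams on Windows would use the corresponding Win32 primitives instead.

```c
#include <pthread.h>

typedef struct {
    pthread_mutex_t turnstile;  /* serializes arrivals: gives rough FIFO fairness */
    pthread_mutex_t m;          /* protects readers/writer state */
    pthread_cond_t  cv;
    int readers;                /* number of active readers */
    int writer;                 /* 1 while a writer holds the grid */
} FairRW;

#define FAIRRW_INITIALIZER \
    { PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, \
      PTHREAD_COND_INITIALIZER, 0, 0 }

void reader_enter(FairRW *rw) {
    pthread_mutex_lock(&rw->turnstile);    /* queue behind any waiting writer */
    pthread_mutex_unlock(&rw->turnstile);
    pthread_mutex_lock(&rw->m);
    while (rw->writer)
        pthread_cond_wait(&rw->cv, &rw->m);
    rw->readers++;
    pthread_mutex_unlock(&rw->m);
}

void reader_exit(FairRW *rw) {
    pthread_mutex_lock(&rw->m);
    if (--rw->readers == 0)
        pthread_cond_broadcast(&rw->cv);   /* wake a waiting writer */
    pthread_mutex_unlock(&rw->m);
}

void writer_enter(FairRW *rw) {
    /* Hold the turnstile while waiting, so later arrivals (readers or
     * writers) queue behind this writer instead of overtaking it. */
    pthread_mutex_lock(&rw->turnstile);
    pthread_mutex_lock(&rw->m);
    while (rw->readers > 0 || rw->writer)
        pthread_cond_wait(&rw->cv, &rw->m);
    rw->writer = 1;
    pthread_mutex_unlock(&rw->m);
    pthread_mutex_unlock(&rw->turnstile);  /* release once we own the grid */
}

void writer_exit(FairRW *rw) {
    pthread_mutex_lock(&rw->m);
    rw->writer = 0;
    pthread_cond_broadcast(&rw->cv);       /* wake waiting readers/writers */
    pthread_mutex_unlock(&rw->m);
}
```

Dropping the turnstile from reader_enter yields a reader-preference variant, while keeping writers queued on it yields writer preference, so the same skeleton can serve all three policies.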
NOTE: Name your final control files PREFR, PREFW, and FAIR
NOTE: Your grid should be a reasonable size (at least 5x5) for these tests.
Submit a single gzipped tar file containing the following files:
You may do this assignment individually or in groups of two. If you work alone, you should use the pthread interface. If you work in a team, you should write a single program that can be compiled to run either on Linux or on Windows. In either case, you should program in C. In Windows, you should use the Win32 API, not pthreads :-) (Less portable, I know, but the point is to give you a different experience with Windows primitives like CreateThread, WaitForSingleObject, WaitForMultipleObjects, etc.)
There will be a contest to see whose Readers/Writers implementation is the fastest.
Compute average waiting time as well as total execution time for all runs.
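The waiting-time bookkeeping can be as simple as recording three timestamps per thread and averaging start minus arrival. The struct and function names below are illustrative assumptions, not prescribed by the assignment.

```c
/* Per-thread timestamps, in seconds since the run started. */
typedef struct {
    double arrival;  /* when the thread was created */
    double start;    /* when it first entered the grid */
    double end;      /* when it left the grid */
} ThreadTimes;

/* Average waiting time = mean of (start - arrival) over all threads. */
double average_wait(const ThreadTimes *t, int n) {
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += t[i].start - t[i].arrival;
    return n > 0 ? total / n : 0.0;
}
```

Total execution time is then just the latest end timestamp minus the earliest arrival across all threads.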
Make an animation or graph that shows the degree of concurrency of the reader and writer threads. For example, you could put time on the x axis and thread number on the y axis. For each thread, you could show its arrival time, start time, and end time. startTime - arrivalTime shows its waiting time. If you take any slice through the graph, you can see the degree of concurrency by counting how many threads are currently between startTime and endTime.