Midterm Solutions

 

1a)

            A priority-based scheduling algorithm can starve a low-priority process: as long as a continuous stream of high-priority processes keeps arriving (each one scheduled as the currently running high-priority process terminates), the low-priority process is never scheduled.  One way to solve this problem is aging, which gradually raises the priority of a waiting process as a function of its waiting time.
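As a concrete illustration, the toy simulation below shows how aging rescues a low-priority process from a stream of freshly arriving high-priority processes.  The numeric priorities, the aging rate, and the one-tick-per-process model are illustrative assumptions, not values from the question.

```python
# Toy simulation of priority scheduling with aging.  Smaller number =
# higher priority; the priorities and aging rate are made-up values.

def run(ticks, aging_rate):
    ready = [{"name": "low", "priority": 10}]    # the process at risk of starving
    schedule = []
    for t in range(ticks):
        # A fresh high-priority process arrives every tick.
        ready.append({"name": f"hi{t}", "priority": 5})
        ready.sort(key=lambda p: p["priority"])  # stable sort: ties keep order
        chosen = ready.pop(0)                    # runs to completion this tick
        schedule.append(chosen["name"])
        for p in ready:                          # every process that waited
            p["priority"] -= aging_rate          # gets a priority boost
    return schedule

assert "low" not in run(10, aging_rate=0)   # without aging: starvation
assert "low" in run(10, aging_rate=1)       # with aging: low eventually runs
```

With aging disabled the newcomers always win the tie for the CPU; with aging the waiting process's priority improves each tick until it beats a fresh arrival.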

 

1b)       The given algorithm does not satisfy mutual exclusion.  Consider the following scenario:

                        Process 0 arrives. Since flag[1] is false, it enters the critical section.

                        Process 1 arrives before process 0 leaves the critical section. Since it sets turn = 1, the while test fails and process 1 also enters the critical section.

            Thus both process 0 and process 1 are in the critical section, violating mutual exclusion.  However, note that it is NOT TRUE that every process that arrives always gets into the critical section without waiting.  For example, suppose process 0 arrives and sets flag[0] to true and turn to 0.  Before it continues, process 1 arrives, sets flag[1] to true and turn to 1, and enters the critical section.  Now when process 0 continues, it has to wait at the while loop because turn is 1.
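The statement of the algorithm is not reproduced in this solution; the sketch below assumes the variant implied by the discussion above (entry protocol flag[i] = true; turn = i; then spin while flag[j] && turn != i).  Single-stepping both processes by hand, written as Python generators, reproduces the bad interleaving:

```python
# Reconstructed (assumed) entry protocol, single-stepped deterministically.
flag = [False, False]
turn = 0
in_cs = [False, False]

def process(i):
    global turn
    j = 1 - i
    flag[i] = True
    yield                       # scheduling point
    turn = i
    yield
    while flag[j] and turn != i:
        yield                   # busy-wait
    in_cs[i] = True             # critical section
    yield

p0, p1 = process(0), process(1)
# P0 runs its whole entry protocol alone: flag[1] is false, so it enters.
next(p0); next(p0); next(p0)
# P1 then arrives before P0 leaves; it sets turn = 1, so the while test
# fails for it as well and it also enters.
next(p1); next(p1); next(p1)

assert in_cs == [True, True]    # both in the critical section at once
```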

 

            The given algorithm satisfies progress.  If only one process, say process i, is in the system, then flag[j] is false and process i does not have to wait.  If both processes are in the system, then turn equals either 0 or 1, so one of the two can always enter the critical section.  Thus the case of both processes waiting at the while loop forever never arises.

 

            The given algorithm does not satisfy bounded waiting in a single-processor system.  Suppose process 0 is waiting for process 1 to finish.  It is true that when process 1 completes it sets flag[1] to false.  But before process 0 is scheduled again, process 1 could be scheduled again; since it can re-enter the critical section, it can deny process 0 its chance an unbounded number of times.

 

1c)       The following is a monitor-based solution to the producer-consumer problem.

 

            Monitor PandC {
                        Condition Variables: full, empty;
                        Buffer buffer;

                        Deposit (item) {
                                    if (buffer.isfull()) {
                                                full.wait();
                                    }
                                    buffer.insert(item);
                                    empty.signal();
                        } /* end of Deposit */

                        Retrieve (item) {
                                    if (buffer.isempty()) {
                                                empty.wait();
                                    }
                                    buffer.remove(item);
                                    full.signal();
                        } /* end of Retrieve */
            }

 

 

Define pc: Monitor PandC;

Producer does:
       while (true) {
                  produce a new item;
                  pc.Deposit(item);
       }

Consumer does:
       while (true) {
                  pc.Retrieve(item);
                  consume the item retrieved;
       }
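For comparison, here is a rendering of the same monitor using Python's threading primitives.  Python's condition variables have Mesa (signal-and-continue) semantics rather than the Hoare semantics usually assumed for monitors, so the monitor's if becomes a while here; the buffer capacity and item counts are illustrative.

```python
import threading
from collections import deque

class PandC:
    """Producer-consumer monitor using Mesa-style condition variables."""
    def __init__(self, capacity):
        self.lock = threading.Lock()            # the monitor lock
        self.full = threading.Condition(self.lock)
        self.empty = threading.Condition(self.lock)
        self.buffer = deque()
        self.capacity = capacity

    def deposit(self, item):
        with self.lock:
            while len(self.buffer) == self.capacity:
                self.full.wait()                # buffer full: wait
            self.buffer.append(item)
            self.empty.notify()                 # buffer is no longer empty

    def retrieve(self):
        with self.lock:
            while not self.buffer:
                self.empty.wait()               # buffer empty: wait
            item = self.buffer.popleft()
            self.full.notify()                  # buffer is no longer full
            return item

pc = PandC(capacity=2)
consumed = []

def producer():
    for i in range(5):
        pc.deposit(i)

def consumer():
    for _ in range(5):
        consumed.append(pc.retrieve())

t1, t2 = threading.Thread(target=producer), threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
assert consumed == [0, 1, 2, 3, 4]              # FIFO order preserved
```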

 

2a)

            Consider two accounts #1 and #2, and assume that one client is trying to transfer from #1 to #2 while another transfers from #2 to #1.  It is possible that client 1 locks #1 and waits for #2, while client 2 locks #2 and waits for #1.  This causes a deadlock.

            A simple way to avoid this problem is to make the clients always lock the lower-numbered account before the higher-numbered one.  This invalidates circular wait, one of the four necessary conditions for deadlock, so a deadlock cycle as shown above can never arise.  Since each process locks a lower-numbered account before waiting for a higher-numbered one, at some point in any supposed deadlock cycle a process would have to wait for a lower-numbered account while holding a higher-numbered one, which can never happen.
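A minimal sketch of the ordered-locking rule, using Python locks and made-up balances:

```python
import threading

# Illustrative example: two accounts, one lock per account.
accounts = {1: 100, 2: 100}
locks = {1: threading.Lock(), 2: threading.Lock()}

def transfer(src, dst, amount):
    # Always acquire the lower-numbered account's lock first, regardless
    # of transfer direction -- this rules out circular wait.
    first, second = sorted((src, dst))
    with locks[first]:
        with locks[second]:
            accounts[src] -= amount
            accounts[dst] += amount

t1 = threading.Thread(target=transfer, args=(1, 2, 30))
t2 = threading.Thread(target=transfer, args=(2, 1, 10))
t1.start(); t2.start(); t1.join(); t2.join()

assert accounts[1] + accounts[2] == 200   # money conserved, no deadlock
assert accounts == {1: 80, 2: 120}
```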

 

2b)

            Since only one process can access an account at a time, this is similar to the shared-resource situation with a single instance of each resource.  Thus, we can build a wait-for graph, and a cycle in that graph indicates a deadlock.
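The cycle check can be sketched as a depth-first search over the wait-for graph; the process names and edges below are illustrative:

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {process: [processes it
    waits for]}.  With single-instance resources, a cycle means deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True             # back edge: cycle found
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in list(wait_for))

# The 2a) scenario: each client waits for the lock the other one holds.
assert has_cycle({"c1": ["c2"], "c2": ["c1"]})
assert not has_cycle({"c1": ["c2"], "c2": []})
```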

 

2c)

            If a system is in an unsafe state, deadlock is not bound to happen.  A safe state is one in which deadlock can never happen; an unsafe state is one in which there is a possibility of deadlock.  For example, suppose a system is in an unsafe state.  All processes currently holding resources could release them; subsequently, each process could ask for all of its resources at once and release them after use.  If this happens for one process after another, no deadlock arises, even though the system was in an unsafe (but not yet deadlocked) state.

            The system will continue to be in a safe state.  Because the system is initially in a safe state, there is a safe sequence.  When the new process is accepted, we can construct a new safe sequence by placing the new process at the beginning of the old safe sequence: we can give the new process the resources it needs, so that it can finish; it will release all its resources upon completion.  We are then back to the situation we were in before the new process was accepted, so the old safe sequence still works.

            The system will continue to be in a safe state.  Again, we can construct a new safe sequence, this time by appending the new process to the end of the old safe sequence.  That is, we refrain from giving any resources to the new process until all other processes have finished and returned their resources (this will happen because the system was in a safe state).  Then we can assign resources to the new process, which can run to completion.
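The safety check underlying these arguments (essentially the banker's algorithm restricted to a single resource type) can be sketched as follows; all numbers are made-up examples:

```python
def find_safe_sequence(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is
    unsafe.  Single resource type for simplicity; need[i] is the remaining
    resources process i may still request."""
    work = available
    finished = [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        progressed = False
        for i in range(len(allocation)):
            if not finished[i] and need[i] <= work:
                work += allocation[i]   # process i finishes and releases all
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                 # unsafe: no process can finish
    return sequence

# Safe: P1 (need 1) can finish first, freeing enough for P0, then P2.
assert find_safe_sequence(2, [3, 1, 5], [3, 1, 6]) == [1, 0, 2]
# Unsafe: no process's remaining need fits within what is available.
assert find_safe_sequence(1, [3, 2, 5], [4, 2, 6]) is None
```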

 

3a)

            Pages are fixed-size partitions while segments are variable-sized partitions.  Consequently, a segmentation scheme suffers external fragmentation but no internal fragmentation, whereas paging systems have no external fragmentation but about half a page of internal fragmentation per process on average.  Since segments give a well-defined logical address space, protection is more naturally defined in segmentation.  For example, a big array could be placed in a segment of its own, so accesses beyond its current size can be detected.  In paging systems, two logical entities such as code and data could be part of the same page, making it difficult to define meaningful protection parameters.

            One hybrid scheme involves paging the segments; the details are discussed in the textbook.  This scheme removes the external fragmentation problem of segments while, at the same time, enabling meaningful protection definitions.  However, internal fragmentation is now increased.

            One of the problems of this hybrid method is the extra memory access involved in consulting both the segment table entry and the page table entry.  Using a TLB (Translation Look-aside Buffer) can alleviate this to a certain extent.
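To make the two-lookup cost concrete, here is an illustrative translation routine for a paged-segmentation scheme; the page size, segment limits, and frame numbers are invented for the example:

```python
# Illustrative address translation for a paged-segmentation scheme.
# Page size, segment limits, and frame numbers below are made up.
PAGE_SIZE = 4096

segment_table = {
    0: {"limit": 3 * PAGE_SIZE, "page_table": [7, 2, 9]},  # e.g. code segment
    1: {"limit": 2 * PAGE_SIZE, "page_table": [4, 5]},     # e.g. data segment
}

def translate(segment, offset):
    entry = segment_table[segment]          # lookup 1: the segment table
    if offset >= entry["limit"]:
        raise MemoryError("offset beyond segment limit")   # protection check
    page, page_offset = divmod(offset, PAGE_SIZE)
    frame = entry["page_table"][page]       # lookup 2: that segment's page table
    return frame * PAGE_SIZE + page_offset

# Offset 5000 in segment 0 falls in page 1, which maps to frame 2.
assert translate(0, 5000) == 2 * PAGE_SIZE + (5000 - PAGE_SIZE)

fault_detected = False
try:
    translate(1, 2 * PAGE_SIZE)             # one byte past the data segment
except MemoryError:
    fault_detected = True
assert fault_detected
```

The two table lookups per reference are the extra memory accesses mentioned above; a TLB would cache the (segment, page) to frame mapping and skip both lookups on a hit.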

 

3b)

            The entire page table (i.e., the contents of the set of dedicated registers) has to be stored in the PCB.  Thus, each context switch involves saving the contents of the dedicated registers into the PCB of the old process and loading the page table of the new process into the registers.  In the other scheme, only the base address of the page table needs to be stored in the PCB and copied during a context switch, which makes context switches faster.

            Naturally, the register-based scheme would give faster memory access, because an extra indirection (i.e., an extra memory access) could be avoided.

            Every context switch would involve flushing or resetting the TLB, so that the next process's page table entries are not confused with the old ones.