Homework #1    Solutions

 

1)

            The job of the long-term scheduler is to select, from the pool of waiting processes, the set of processes to be loaded into memory; this controls the degree of multiprogramming.  The selection criterion is to load just enough processes to avoid thrashing while still keeping as many processes as possible in memory, and to maintain a balance between CPU-bound and I/O-bound processes.  The long-term scheduler is invoked infrequently: typically when a process arrives or finishes, when the page-fault frequency falls outside its upper or lower limit, or when there are too many blocked processes.

            The job of the short-term scheduler is to choose among the processes that are loaded in memory and ready to execute, and to allocate CPU time to them.  The selection criterion is to optimize CPU utilization while maintaining acceptable response times, turnaround times, and so on.  The short-term scheduler is invoked at the expiry of each quantum or whenever a process blocks or terminates.  The different goals of the two schedulers require them to employ different scheduling strategies.

 

2a)       FCFS Scheduling:

 

Gantt chart:  P1 [0-8] | P2 [8-11] | P3 [11-12] | P4 [12-19]

Process   Arrival Time   Finish Time   Turn-around Time   Waiting Time
P1             0              8               8                 0
P2             2             11               9                 6   (8-2)
P3             3             12               9                 8   (11-3)
P4             4             19              15                 8   (12-4)

Average Turn-around Time:  10.25
Average Waiting Time:      5.5
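The FCFS numbers can be reproduced with a short simulation.  The arrival and burst times (P1: 0/8, P2: 2/3, P3: 3/1, P4: 4/7) are inferred from the tables, since the original problem statement is not reproduced here.

```python
# FCFS simulation sketch; arrival/burst times inferred from the tables above.
processes = [("P1", 0, 8), ("P2", 2, 3), ("P3", 3, 1), ("P4", 4, 7)]

def fcfs(procs):
    """Return {name: (finish, turnaround, waiting)} under FCFS."""
    time, stats = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        start = max(time, arrival)          # CPU idles until arrival if needed
        finish = start + burst
        stats[name] = (finish, finish - arrival, start - arrival)
        time = finish
    return stats

stats = fcfs(processes)
avg_tat = sum(s[1] for s in stats.values()) / len(stats)
avg_wait = sum(s[2] for s in stats.values()) / len(stats)
print(stats, avg_tat, avg_wait)   # averages come out to 10.25 and 5.5
```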

 

2b)       SJF Non-preemptive Scheduling:

 

Gantt chart:  P1 [0-8] | P3 [8-9] | P2 [9-12] | P4 [12-19]

Process   Arrival Time   Finish Time   Turn-around Time   Waiting Time
P1             0              8               8                 0
P2             2             12              10                 7   (9-2)
P3             3              9               6                 5   (8-3)
P4             4             19              15                 8   (12-4)

Average Turn-around Time:  9.75
Average Waiting Time:      5
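The non-preemptive SJF schedule can be checked the same way: at each decision point the shortest burst among the processes that have already arrived runs to completion.  Arrival/burst times are again inferred from the tables.

```python
# Non-preemptive SJF sketch; arrival/burst times inferred from the tables above.
processes = [("P1", 0, 8), ("P2", 2, 3), ("P3", 3, 1), ("P4", 4, 7)]

def sjf(procs):
    """Return {name: (finish, turnaround, waiting)} under non-preemptive SJF."""
    pending = sorted(procs, key=lambda p: p[1])   # by arrival time
    time, stats = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = pending[0][1]                  # idle until next arrival
            continue
        job = min(ready, key=lambda p: p[2])      # shortest burst among ready
        pending.remove(job)
        name, arrival, burst = job
        time += burst                             # runs to completion
        stats[name] = (time, time - arrival, time - arrival - burst)
    return stats

print(sjf(processes))   # P3 finishes at 9, P2 at 12; averages 9.75 and 5
```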

 

2c)       SJF Preemptive Scheduling:

 

Gantt chart:  P1 [0-2] | P2 [2-3] | P3 [3-4] | P2 [4-6] | P1 [6-12] | P4 [12-19]

Process   Arrival Time   Finish Time   Turn-around Time   Waiting Time
P1             0             12              12                 4   (6-2)
P2             2              6               4                 1   (4-3)
P3             3              4               1                 0
P4             4             19              15                 8   (12-4)

Average Turn-around Time:  8
Average Waiting Time:      3.25
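Preemptive SJF (shortest-remaining-time-first) is easiest to simulate one time unit at a time, re-evaluating the remaining times after every unit.  Arrival/burst times are again inferred from the tables.

```python
# Preemptive SJF (shortest-remaining-time-first) sketch, simulated one time
# unit at a time; arrival/burst times inferred from the tables above.
processes = [("P1", 0, 8), ("P2", 2, 3), ("P3", 3, 1), ("P4", 4, 7)]

def srtf(procs):
    """Return {name: (finish, turnaround, waiting)} under preemptive SJF."""
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    time, stats = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                    # CPU idle until next arrival
            continue
        n = min(ready, key=lambda x: remaining[x])   # shortest remaining time
        remaining[n] -= 1                # run the winner for one time unit
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            stats[n] = (time, time - arrival[n], time - arrival[n] - burst[n])
    return stats

print(srtf(processes))   # P2 preempts P1 at t=2, P3 preempts P2 at t=3
```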

 

2d)       Round Robin Scheduling with Quantum Size 2:

 

Gantt chart:  P1 [0-2] | P2 [2-4] | P3 [4-5] | P4 [5-7] | P1 [7-9] | P2 [9-10] | P4 [10-12] | P1 [12-14] | P4 [14-16] | P1 [16-18] | P4 [18-19]

Process   Arrival Time   Finish Time   Turn-around Time   Waiting Time
P1             0             18              18                10   (7-2)+(12-9)+(16-14)
P2             2             10               8                 5   (9-4)
P3             3              5               2                 1   (4-3)
P4             4             19              15                 8   (5-4)+(10-7)+(14-12)+(18-16)

Average Turn-around Time:  10.75
Average Waiting Time:      6

 

2e)       Round Robin Scheduling with Quantum Size 4:

 

Gantt chart:  P1 [0-4] | P2 [4-7] | P3 [7-8] | P4 [8-12] | P1 [12-16] | P4 [16-19]

Process   Arrival Time   Finish Time   Turn-around Time   Waiting Time
P1             0             16              16                 8   (12-4)
P2             2              7               5                 2   (4-2)
P3             3              8               5                 4   (7-3)
P4             4             19              15                 8   (8-4)+(16-12)

Average Turn-around Time:  10.25
Average Waiting Time:      5.5
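Both round-robin runs can be verified with one simulation.  Note that the hand-drawn schedules visit ready processes in fixed arrival order on each round (a common hand-simulation convention); the sketch below follows that convention rather than a strict FIFO ready queue, and the arrival/burst times are again inferred from the tables.

```python
# Round-robin sketch that cycles through processes in fixed arrival order
# each round, matching the hand-drawn schedules above; arrival/burst times
# inferred from the tables (an assumption, not from the problem statement).
processes = [("P1", 0, 8), ("P2", 2, 3), ("P3", 3, 1), ("P4", 4, 7)]

def round_robin(procs, quantum):
    """Return {name: (finish, turnaround, waiting)} under round robin."""
    order = [p[0] for p in sorted(procs, key=lambda p: p[1])]
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    time, stats = 0, {}
    while remaining:
        ran = False
        for n in order:                      # one round over ready processes
            if n not in remaining or arrival[n] > time:
                continue
            run = min(quantum, remaining[n])
            time += run
            remaining[n] -= run
            ran = True
            if remaining[n] == 0:
                del remaining[n]
                stats[n] = (time, time - arrival[n],
                            time - arrival[n] - burst[n])
        if not ran:
            time += 1                        # idle until something arrives
    return stats

print(round_robin(processes, 2))   # quantum 2: averages 10.75 and 6
print(round_robin(processes, 4))   # quantum 4: averages 10.25 and 5.5
```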

 

2f)        Multilevel Feedback Queue Scheduling with Quantum Size 2, 4, and FCFS:

 

Gantt chart:  P1 [0-2] | P2 [2-4] | P3 [4-5] | P4 [5-7] | P1 [7-11] | P2 [11-12] | P4 [12-16] | P1 [16-18] | P4 [18-19]

Process   Arrival Time   Finish Time   Turn-around Time   Waiting Time
P1             0             18              18                10   (7-2)+(16-11)
P2             2             12              10                 7   (11-4)
P3             3              5               2                 1   (4-3)
P4             4             19              15                 8   (5-4)+(12-7)+(18-16)

Average Turn-around Time:  11.25
Average Waiting Time:      6.5
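The multilevel feedback queue can be simulated the same way: two round-robin levels with quanta 2 and 4, then an FCFS level, where a process that uses up its full quantum is demoted one level.  Since all arrivals occur by t = 4, preemption of a lower level by a newly arrived process never comes into play and is not modelled in this sketch; arrival/burst times are again inferred from the tables.

```python
from collections import deque

# Multilevel feedback queue sketch: levels 0 and 1 are round robin with
# quanta 2 and 4; level 2 is FCFS (run to completion).  A process that
# exhausts its full quantum is demoted one level.  Arrival/burst times are
# inferred from the tables above; preemption by new arrivals is not
# modelled, since all arrivals here occur by t=4.
processes = [("P1", 0, 8), ("P2", 2, 3), ("P3", 3, 1), ("P4", 4, 7)]

def mlfq(procs, quanta=(2, 4)):
    """Return {name: (finish, turnaround, waiting)} under the MLFQ above."""
    procs = sorted(procs, key=lambda p: p[1])
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    queues = [deque(), deque(), deque()]
    time, i, stats = 0, 0, {}

    def admit(now):                       # new arrivals enter the top queue
        nonlocal i
        while i < len(procs) and procs[i][1] <= now:
            queues[0].append(procs[i][0])
            i += 1

    admit(time)
    while remaining:
        level = next((l for l in range(3) if queues[l]), None)
        if level is None:
            time = procs[i][1]            # idle until the next arrival
            admit(time)
            continue
        n = queues[level].popleft()
        run = remaining[n] if level == 2 else min(quanta[level], remaining[n])
        time += run
        remaining[n] -= run
        admit(time)
        if remaining[n] == 0:
            del remaining[n]
            stats[n] = (time, time - arrival[n], time - arrival[n] - burst[n])
        else:
            queues[level + 1].append(n)   # full quantum used: demote
    return stats

print(mlfq(processes))   # averages come out to 11.25 and 6.5
```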

 

3)

            The unit cost of memory increases rapidly as we demand shorter access times.  Even though there is a great need for huge amounts of permanent storage, most executing processes use only a small fraction of it at any one time, so only a small fraction of the large memory needs to be immediately available.  The same pattern appears not only in executing processes but also in file-system usage.  A hierarchy is therefore an efficient way of achieving both immediate availability and permanent availability at optimal cost.

 

4)

            User-level threads are implemented entirely in user space; the kernel has no knowledge of them.  Switching from one thread to another therefore takes place in user space, without any transition into kernel mode.  Such switches are extremely fast, so hundreds of threads can easily be supported.  However, the kernel schedules processes ignorant of the existence of threads, so if one thread blocks on a system call, the entire process blocks, even though other threads in it may be ready to run.  In addition, the kernel allocates CPU time per process regardless of how many threads each process contains, which makes CPU allocation unfair across threads.
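The blocking problem can be illustrated with a toy user-space scheduler built on Python generators (purely illustrative; `worker`, `blocking_worker`, and `run` are made-up names, not a real threading API): a switch is just a function call, but a single blocking call stalls every "thread" in the process.

```python
import time

def worker(name, steps):
    """A cooperative user-level 'thread': each yield is a user-space switch."""
    for k in range(steps):
        yield f"{name} step {k}"

def blocking_worker(name):
    """A 'thread' that makes a blocking call: while it sleeps, the whole
    process is stalled and no other 'thread' can run."""
    time.sleep(0.05)
    yield f"{name} done"

def run(threads):
    """Round-robin user-space scheduler: a switch is just next(), no kernel."""
    ready, trace = list(threads), []
    while ready:
        t = ready.pop(0)
        try:
            trace.append(next(t))
            ready.append(t)
        except StopIteration:
            pass                # this 'thread' has finished
    return trace

print(run([worker("A", 2), blocking_worker("B")]))
```

While B sleeps inside its blocking call, A cannot make progress even though it is ready to run, which is exactly the problem described above.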

            Kernel-level threads are supported by the operating system, which schedules in units of threads rather than processes.  The thread-blocking problem described above therefore does not arise, and scheduling can be made fair to the different threads.  However, since the kernel is involved in the scheduling, every thread switch requires a transition from user space to kernel space and back, which greatly limits the scalability of kernel-thread support.

 

5)

            A hybrid thread system such as the one provided by Solaris 2 avoids the thread-blocking problem without sacrificing scalability.  By having as many lightweight processes (kernel threads) as there are concurrent blocking system calls, the blocking problem is avoided; and since several user-level threads can be associated with a single lightweight process, hundreds of threads can be supported in the system efficiently.