CS604 - Operating Systems - Lecture Handout 14

Summary

  • Basic concepts
  • Scheduling criteria
  • Preemptive and non-preemptive algorithms
  • First-Come-First-Serve scheduling algorithm

Basic Concepts

The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. In a uniprocessor system, only one process may run at a time; any other processes must wait until the CPU is free and can be rescheduled.
In multiprogramming, a process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU would then sit idle; all this waiting time is wasted. Multiprogramming entails productive usage of this time. When one process has to wait, the OS takes the CPU away from that process and gives the CPU to another process. Almost all computer resources are scheduled before use.

Life of a Process

As shown in Figure 14.1, process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst, which is followed by an I/O burst, and so on. Eventually, the last CPU burst ends with a system request to terminate execution, rather than with another I/O burst.
An I/O-bound program typically has many very short CPU bursts. A CPU-bound program might have a few very long CPU bursts. This distribution can help us select an appropriate CPU-scheduling algorithm. Figure 14.2 shows the results of an empirical study of the CPU bursts of processes. The study shows that most processes have short CPU bursts of 2-3 milliseconds.

Figure 14.1: Life of a process

Figure 14.2: Histogram of CPU-burst times

CPU Scheduler

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The short-term scheduler (i.e., the CPU scheduler) selects a process to give it the CPU. It selects from among the processes in memory that are ready to execute, and invokes the dispatcher to have the CPU allocated to the selected process.
A ready queue can be implemented as a FIFO queue, a tree, or simply an unordered linked list. The records (nodes) in the ready queue are generally the process control blocks (PCBs) of processes.
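
A minimal C sketch of such a FIFO ready queue is given below; the pcb fields and function names are illustrative only, since a real PCB holds far more state (registers, memory maps, open files, and so on):

    #include <stdio.h>

    /* A greatly simplified PCB: just a PID and a link field. */
    struct pcb {
        int pid;
        struct pcb *next;
    };

    /* FIFO ready queue: enqueue at the tail, dequeue from the head. */
    struct ready_queue {
        struct pcb *head;
        struct pcb *tail;
    };

    void enqueue(struct ready_queue *q, struct pcb *p) {
        p->next = NULL;
        if (q->tail)
            q->tail->next = p;
        else
            q->head = p;
        q->tail = p;
    }

    struct pcb *dequeue(struct ready_queue *q) {
        struct pcb *p = q->head;
        if (p) {
            q->head = p->next;
            if (q->head == NULL)
                q->tail = NULL;
        }
        return p;
    }

    int main(void) {
        struct ready_queue q = { NULL, NULL };
        struct pcb a = { 1, NULL }, b = { 2, NULL };
        enqueue(&q, &a);
        enqueue(&q, &b);
        printf("next to run: pid %d\n", dequeue(&q)->pid);  /* prints pid 1 */
        return 0;
    }

In this picture, the short-term scheduler calls dequeue to pick the next process to run, and a process whose I/O completes is enqueued at the tail of the queue.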

Dispatcher

The dispatcher is a kernel module that takes control of the CPU from the current process and gives it to the process selected by the short-term scheduler. This function involves:

  • Switching the context (i.e., saving the context of the current process and restoring the context of the newly selected process)
  • Switching to user mode
  • Jumping to the proper location in the user program to restart that program

The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
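
A real dispatcher runs inside the kernel, but POSIX ucontext(3) offers a user-level analogue of its first step, the context switch: save the current execution context and restore another. The sketch below (Linux/glibc; the ucontext functions are obsolescent in recent POSIX but still widely available) only illustrates the save-and-restore idea and is not kernel code:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[16384];

    /* Stands in for the newly selected process. */
    static void task(void) {
        printf("task running\n");
        /* Returning ends this context; uc_link resumes main_ctx. */
    }

    int main(void) {
        getcontext(&task_ctx);                 /* initialize the new context */
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;          /* where to go when task ends */
        makecontext(&task_ctx, task, 0);

        printf("dispatching\n");
        /* Save the current context and restore the task's: a "dispatch". */
        swapcontext(&main_ctx, &task_ctx);
        printf("back in main\n");
        return 0;
    }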

Preemptive and Non-Preemptive Scheduling

CPU scheduling can take place under the following circumstances:

  1. When a process switches from the running state to the waiting state (for example, when it makes an I/O request)
  2. When a process switches from the running state to the ready state (for example when an interrupt occurs)
  3. When a process switches from the waiting state to the ready state (for example, completion of I/O)
  4. When a process terminates

In circumstances 1 and 4, there is no choice in terms of scheduling; a new process must be selected for execution. There is a choice in cases 2 and 3. When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is non-preemptive; otherwise it is preemptive. Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it switches to the waiting state, finishes its CPU burst, or terminates. This scheduling method does not require the special hardware (for example, a timer) needed for preemptive scheduling.
Preemptive scheduling incurs a cost. Consider the case of two processes sharing data. One may be in the midst of updating the data when it is preempted and the second process is run. The second process may then try to read the data, which are in an inconsistent state. New mechanisms are needed to coordinate access to shared data; we discuss this topic in Chapter 7 of the textbook.
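
This hazard is easy to demonstrate at user level. In the hedged C sketch below, two POSIX threads (standing in for the two processes) update a shared counter without any coordination; because the read-modify-write in counter++ can be preempted midway, updates are lost and the final value usually falls short of the expected total. Compile with -pthread:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;   /* shared data, updated with no lock */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;         /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but lost updates usually make it smaller. */
        printf("counter = %ld\n", counter);
        return 0;
    }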

Scheduling Criteria

The scheduling criteria include:

  • CPU utilization: We want to keep the CPU as busy as possible. In a real system, CPU utilization ranges from about 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
  • Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per unit time, called throughput. We want to maximize the throughput.
  • Turnaround time: The interval from the time of submission to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. We want to minimize the turnaround time.
  • Waiting time: Waiting time is the time spent waiting in the ready queue. We want to minimize the waiting time to increase CPU efficiency.
  • Response time: It is the time from the submission of a request until the first response is produced. Thus, response time is the amount of time it takes to start responding, not the time it takes to output that response. Response time should be minimized. (A small worked example of these measures follows this list.)
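
As a small worked example of these definitions, consider a hypothetical process (all values made up for illustration) that arrives at time 0, first gets the CPU at time 5, needs a 10-millisecond CPU burst, does no I/O, and completes at time 15:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical process: arrives at 0, first runs at 5,
           needs a 10 ms CPU burst, does no I/O, completes at 15. */
        int arrival = 0, first_run = 5, burst = 10, completion = 15;

        int turnaround = completion - arrival;  /* 15 ms: submission to completion */
        int waiting    = turnaround - burst;    /* 5 ms in the ready queue (no I/O) */
        int response   = first_run - arrival;   /* 5 ms until the first response */

        printf("turnaround=%d waiting=%d response=%d\n",
               turnaround, waiting, response);
        return 0;
    }

Note that waiting time equals turnaround time minus the CPU burst only because this process does no I/O; with I/O, the I/O time would also have to be subtracted.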

Scheduling Algorithms

We will now discuss some of the commonly used short-term scheduling algorithms. Some of these algorithms are well suited for batch systems and others for time-sharing systems. Here are the algorithms we will discuss:

  • First-Come-First-Served (FCFS) Scheduling
  • Shortest Job First (SJF) Scheduling
  • Shortest Remaining Time First (SRTF) Scheduling
  • Priority Scheduling
  • Round-Robin Scheduling
  • Multilevel Queues Scheduling
  • Multilevel Feedback Queues Scheduling
  • UNIX System V Scheduling

First-Come, First-Served (FCFS) Scheduling

The process that requests the CPU first (i.e., enters the ready queue first) is allocated the CPU first. The implementation of an FCFS policy is managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue, and the running process is removed from the queue. The average waiting time under the FCFS policy is often not minimal and may vary substantially if the process CPU-burst times vary greatly. FCFS is a non-preemptive scheduling algorithm.
We use the following system state to demonstrate the working of this algorithm. For simplicity, we assume that all three processes are in the ready queue at time 0 with the following CPU burst times:

Process    Burst Time (ms)
P1         24
P2         3
P3         3

Suppose that the processes arrive in the system in the order: P1, P2, P3, and are served in that order. The Gantt chart for the schedule is shown in Figure 14.3.

Figure 14.3: Gantt chart showing the execution of processes in the order P1, P2, P3

Here are the waiting times for the three processes and the average waiting time per process.

  • Waiting times: P1 = 0; P2 = 24; P3 = 27
  • Average waiting time: (0 + 24 + 27)/3 = 17 milliseconds

Suppose that processes arrive in the order: P2, P3, P1. The Gantt chart for the schedule is shown in Figure 14.4:

Figure 14.4: Gantt chart showing the execution of processes in the order P2, P3, P1

Here are the waiting times for the three processes and the average waiting time per process.

  • Waiting times: P1 = 6; P2 = 0; P3 = 3
  • Average waiting time: (6 + 0 + 3)/3 = 3 milliseconds (the sketch below reproduces both averages)
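
This computation is easy to automate. Below is a minimal C sketch (the function name fcfs_avg_wait is ours, for illustration) that reproduces both averages, assuming all processes are in the ready queue at time 0:

    #include <stdio.h>

    /* FCFS with all processes ready at time 0: each process
       waits for the bursts of every process ahead of it. */
    double fcfs_avg_wait(const int burst[], int n) {
        int time = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += time;   /* process i starts at 'time' */
            time += burst[i];
        }
        return (double)total_wait / n;
    }

    int main(void) {
        int order1[] = { 24, 3, 3 };  /* P1, P2, P3 */
        int order2[] = { 3, 3, 24 };  /* P2, P3, P1 */
        printf("P1,P2,P3: %.0f\n", fcfs_avg_wait(order1, 3));  /* 17 */
        printf("P2,P3,P1: %.0f\n", fcfs_avg_wait(order2, 3));  /* 3  */
        return 0;
    }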

When the FCFS scheduling algorithm is used, the convoy effect can occur: short processes wait behind a long process to use the CPU, and they enter the ready queue in a convoy after completing their I/O. This results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.
In the next lecture, we will discuss more scheduling algorithms.