Related Content: CS604 - VU Lectures, Handouts, PPT Slides, Assignments, Quizzes, Papers & Books of Operating Systems
To enable a process to be larger than the amount of memory allocated to it, we can use overlays. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into the space previously occupied by instructions that are no longer needed. We illustrate the concept of overlays with the example of a two-pass compiler.
The common routines, the symbol table, the overlay driver, and the Pass 1 code are loaded into main memory for program execution to start. When Pass 1 has finished its work, the Pass 2 code is loaded on top of the Pass 1 code (because that code is not needed anymore). In this way, we can execute a 200K process in 150K of memory. The diagram below shows this pictorially.
The problems with overlays are that a) you may not be able to partition all problems into overlays, and b) the programmer is responsible for writing the overlay driver.
A process needs to be in memory to be executed. A process can, however, be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. The backing store is a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images. The system maintains a ready queue of all processes whose memory images are on the backing store or in memory and are ready to run.
For example, assume a multiprogramming environment with a round-robin CPU scheduling algorithm. When a quantum expires, the memory manager will start to swap out the process that just finished and swap in another process to the memory space that has been freed. A variant of this swapping policy can be used for priority-based scheduling algorithms: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process so that it can load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This technique is called roll out, roll in.
The major part of swap time is transfer time; the total transfer time is directly
proportional to the amount of memory swapped.
Swapping is constrained by factors such as the quantum of the round-robin scheduler and pending I/O for a swapped-out process. Assume that an I/O operation was queued because the device was busy. If we were to swap out P1 and swap in process P2, the I/O operation might attempt to access memory that now belongs to P2. The solutions to this problem are never to swap out a process with pending I/O, or to execute I/O only in kernel space.
As an example, consider the cost of swapping a 1 MB process:

Process size       = 1 MB
Transfer rate      = 5 MB/sec
Swap-out time      = 1/5 sec = 200 ms
Average latency    = 8 ms
Net swap-out time  = 200 + 8 = 208 ms
Swap out + swap in = 2 × 208 = 416 ms
The main memory must accommodate both the operating system and the various user processes. Thus, memory must be allocated efficiently.
The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. The operating system may be placed in either high memory or low memory; the position of the interrupt vector usually affects this decision. Since the interrupt vector is often in low memory, programmers place the OS in low memory too.
In this technique, memory is divided into several fixed-size partitions, each of which may contain exactly one process. Thus the degree of multiprogramming is bound by the number of partitions. In this multiple-partition method, when a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
MFT with multiple queues involves load-time address binding: each partition has its own input queue, and a process is bound to a partition when it is loaded. In this technique there is a potential for wasted memory space, i.e., an empty partition with no process in the associated queue. In MFT with a single queue, by contrast, there is one input queue for all partitions; the queue is searched for a suitable process when a partition becomes empty. First-fit, best-fit, and worst-fit space allocation algorithms can be applied here. The following diagram shows MFT with a single input queue.
Multiprogramming with Fixed Tasks (MFT) with one input queue