
CS604 - Operating Systems - Lecture Handout 31


Summary

  • Overlays
  • Swapping
  • Contiguous memory allocation
  • MFT

Overlays

To enable a process to be larger than the amount of memory allocated to it, we can use overlays. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space occupied previously by instructions that are no longer needed. We illustrate the concept of overlays with the example of a two-pass compiler. Here are the various specifications:

  • 2-Pass assembler/compiler
  • Available main memory: 150k
  • Code size: 200k
    • Pass 1 ……………….. 70k
    • Pass 2 ……………….. 80k
    • Common routines …... 30k
    • Symbol table ………… 20k

Common routines, the symbol table, the overlay driver, and the Pass 1 code are loaded into main memory for program execution to start. When Pass 1 has finished its work, the Pass 2 code is loaded on top of the Pass 1 code (because that code is not needed anymore). This way, we can execute a 200K process in 150K of memory. The diagram below shows this pictorially.
The problems with overlays are that (a) not every program can be partitioned into overlays, and (b) the programmer is responsible for writing the overlay driver.

Overlays Example
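The arithmetic behind this example can be sketched in C. The 10 KB size assumed for the overlay driver below is not given in the handout; the other figures are the ones listed above.

    /* A minimal sketch of the overlay arithmetic for the two-pass compiler
       example. The 10 KB overlay driver size is an assumption. */
    #include <stdio.h>

    int main(void) {
        const int memory = 150; /* available main memory (KB)          */
        const int driver = 10;  /* overlay driver (KB) -- assumed size */
        const int common = 30;  /* common routines (KB)                */
        const int symtab = 20;  /* symbol table (KB)                   */
        const int pass1  = 70;  /* Pass 1 code (KB)                    */
        const int pass2  = 80;  /* Pass 2 code (KB)                    */

        int overlay1 = driver + common + symtab + pass1; /* resident during Pass 1 */
        int overlay2 = driver + common + symtab + pass2; /* resident during Pass 2 */

        printf("Pass 1 footprint: %d KB (%s)\n", overlay1,
               overlay1 <= memory ? "fits" : "does not fit");
        printf("Pass 2 footprint: %d KB (%s)\n", overlay2,
               overlay2 <= memory ? "fits" : "does not fit");
        printf("Total code size : %d KB\n", common + symtab + pass1 + pass2);
        return 0;
    }

Neither overlay exceeds the 150K of available memory, even though the total code size is 200K.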

Swapping

A process needs to be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. The backing store is a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images. The system maintains a ready queue of all processes whose memory images are on the backing store or in memory and are ready to run.
For example, assume a multiprogramming environment with a round-robin CPU scheduling algorithm. When a quantum expires, the memory manager will start to swap out the process that just finished and swap in another process into the memory space that has been freed. A variant of this swapping policy can be used for priority-based scheduling algorithms. If a higher-priority process arrives and wants service, the memory manager can swap out the lower-priority process so that it can load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This technique is called roll out, roll in.
The major part of swap time is transfer time; the total transfer time is directly proportional to the amount of memory swapped.
Swapping is constrained by factors such as the quantum of a round-robin scheduler and pending I/O for a swapped-out process. Assume that an I/O operation was queued because the device was busy. If we were to swap out P1 and swap in process P2, the I/O operation might attempt to access memory that now belongs to P2. The solutions to this problem are either never to swap out a process with pending I/O, or to execute I/O only into operating system (kernel) buffers.

Schematic View of Swapping
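As a rough illustration of the roll out, roll in policy and the pending-I/O constraint described above, here is a small C sketch. The process descriptor and function names are invented for illustration; real kernels are far more involved.

    #include <stdbool.h>
    #include <stdio.h>

    struct process {
        int  pid;
        int  priority;        /* larger value means higher priority          */
        bool has_pending_io;  /* device I/O queued into this process's image */
    };

    /* Decide whether 'victim' may be rolled out to make room for 'incoming'.
       A process with pending I/O into its own memory is never swapped out,
       because the device might later write into memory that by then belongs
       to another process. */
    bool can_roll_out(const struct process *victim, const struct process *incoming)
    {
        if (victim->has_pending_io)
            return false;                              /* avoid the I/O hazard    */
        return incoming->priority > victim->priority;  /* roll out lower priority */
    }

    int main(void)
    {
        struct process p1 = { 1, 2, false };  /* low-priority resident process */
        struct process p2 = { 2, 5, false };  /* higher-priority arrival       */
        printf("roll out P1 for P2? %s\n", can_roll_out(&p1, &p2) ? "yes" : "no");
        return 0;
    }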

Cost of Swapping

Process size = 1 MB
Transfer rate = 5 MB/sec
Transfer time = 1 MB / 5 MB/sec = 1/5 sec = 200 ms
Average latency = 8 ms
Net swap out time = 200 ms + 8 ms = 208 ms
Swap out + swap in = 2 × 208 ms = 416 ms
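These figures follow from a short calculation; here is a small C sketch using the numbers above:

    #include <stdio.h>

    int main(void) {
        const double size_mb    = 1.0; /* process size (MB)         */
        const double rate_mbps  = 5.0; /* transfer rate (MB/sec)    */
        const double latency_ms = 8.0; /* average disk latency (ms) */

        double transfer_ms = size_mb / rate_mbps * 1000.0; /* 200 ms                 */
        double swap_out_ms = transfer_ms + latency_ms;     /* 208 ms                 */
        double total_ms    = 2.0 * swap_out_ms;            /* out + in again: 416 ms */

        printf("transfer: %.0f ms, swap out: %.0f ms, swap out + swap in: %.0f ms\n",
               transfer_ms, swap_out_ms, total_ms);
        return 0;
    }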

Contiguous memory allocation

The main memory must accommodate both the operating system and the various user processes, so memory must be allocated efficiently.
The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. The operating system may be placed in either low memory or high memory; the position of the interrupt vector usually affects this decision. Since the interrupt vector is often in low memory, the operating system is usually placed in low memory as well.

  • It is desirable to have several user processes residing in memory at the same time. In contiguous memory allocation, each process is contained in a single contiguous section of memory. The base (relocation) register holds the smallest physical address of the process, and the limit register holds the size of its logical address space.

Contiguous Allocation
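A minimal C sketch of how the base (relocation) and limit registers would be consulted on a memory reference. The register values are illustrative only; real hardware performs this check in the MMU, not in software.

    #include <stdio.h>
    #include <stdlib.h>

    /* Translate a logical address to a physical one using base/limit registers.
       The limit register holds the size of the process's address space; the
       base (relocation) register holds its smallest physical address. */
    unsigned translate(unsigned logical, unsigned base, unsigned limit)
    {
        if (logical >= limit) {  /* out of bounds: addressing error (trap) */
            fprintf(stderr, "addressing error: %u >= limit %u\n", logical, limit);
            exit(EXIT_FAILURE);
        }
        return base + logical;   /* relocation */
    }

    int main(void) {
        unsigned base = 300040, limit = 120900; /* example register contents */
        printf("logical 100 -> physical %u\n", translate(100, base, limit));
        return 0;
    }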

Multiprogramming with Fixed Tasks (MFT)

In this technique, memory is divided into several fixed-size partitions. Each partition may contain exactly one process; thus, the degree of multiprogramming is bound by the number of partitions. In this multiple-partition method, when a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.

  • This was used by IBM for System/360 OS/MFT (multiprogramming with a fixed number of tasks).
  • Can have a single input queue instead of one for each partition.
    • So that if there are no big jobs, the big partition can be used for little jobs.
    • Can think of the input queue(s) as the ready list(s) with a scheduling policy of FCFS in each partition.
  • The partition boundaries are not movable and are set at boot time (must reboot to move a job).
    • MFT can have large internal fragmentation, i.e., wasted space inside a region
  • Each process has a single "segment" (we will discuss segments later)
    • No sharing between processes.
    • No dynamic address translation.
    • At load time, must "establish addressability".
      • Must set a base register to the location at which the process was loaded (the bottom of the partition).
      • The base register is part of the programmer visible register set.
      • This is an example of address translation during load time.
      • Also called relocation.
  • Storage keys are adequate for protection (IBM method).
  • Alternative protection method is base/limit registers.
  • An advantage of base/limit is that it is easier to move a job.
  • But MFT didn't move jobs, so this disadvantage of storage keys is moot.

Multiprogramming

MFT with multiple input queues involves load-time address binding. In this technique, there is a potential for wasted memory space, i.e., an empty partition with no process in the associated queue. In MFT with a single input queue, there is one queue shared by all partitions; the queue is searched for a process whenever a partition becomes free. First-fit, best-fit, and worst-fit space allocation algorithms can be applied here. The following diagram shows MFT with a single input queue.

Multiprogramming with Fixed Tasks

Multiprogramming with Fixed Tasks (MFT) with one input queue
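A toy C sketch of MFT with a single input queue and first-fit placement. The partition sizes and structure names are made up for illustration; unused space inside a chosen partition is the internal fragmentation mentioned above.

    #include <stdio.h>

    #define NPART 3

    /* Fixed partitions whose sizes are set at boot time (in KB). */
    struct partition {
        int size;
        int pid;  /* 0 means the partition is free */
    };

    static struct partition part[NPART] = { { 100, 0 }, { 200, 0 }, { 500, 0 } };

    /* First-fit over the fixed partitions: place the job in the first free
       partition that is large enough to hold it. */
    int load_job(int pid, int job_size_kb)
    {
        for (int i = 0; i < NPART; i++) {
            if (part[i].pid == 0 && part[i].size >= job_size_kb) {
                part[i].pid = pid;
                printf("job %d (%d KB) -> partition %d (%d KB), wasted %d KB\n",
                       pid, job_size_kb, i, part[i].size, part[i].size - job_size_kb);
                return i;
            }
        }
        return -1;  /* no free partition fits: the job waits in the input queue */
    }

    int main(void) {
        load_job(1, 90);   /* partition 0, 10 KB internal fragmentation      */
        load_job(2, 180);  /* partition 1                                    */
        load_job(3, 300);  /* skips busy/too-small partitions -> partition 2 */
        return 0;
    }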