Related Content: CS604 - VU Lectures, Handouts, PPT Slides, Assignments, Quizzes, Papers & Books of Operating Systems
As shown in Figure 1.1, the major high-level components of a computer system are:
The region of memory that a process is allowed to access is known as the process address space. To ensure correct operation of a computer system, we must ensure that a process cannot access memory outside its address space. Otherwise, a process may, accidentally or deliberately, overwrite the address space of another process or memory belonging to the operating system (e.g., the interrupt vector table).
The modularization of a system can be done in many ways. As shown in Figure 4.1, in the layered approach the OS is broken up into a number of layers or levels, each built on top of the layer below it. The bottom layer is the hardware; the highest layer (layer N) is the user interface. A typical OS layer (layer M) consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M in turn can invoke operations on lower-level layers.
We discussed in detail the UNIX/Linux directory structure in lecture 4. We will continue that discussion and learn how to browse the UNIX/Linux directory structure. In Figure 5.1, we have repeated for our reference the home directory structure for students. In the rest of this section, we discuss commands for creating directories, removing directories, and browsing the UNIX/Linux directory structure.
The processes in the system execute concurrently, and they must be created and deleted dynamically; thus, the operating system must provide mechanisms for the creation and deletion of processes.
The wait system call suspends the calling process until one of its immediate children terminates, or until a child that is being traced stops because it has hit an event of interest.
IPC provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. We discuss in this section the various message passing techniques and issues related to them.
The UNIX and Linux operating systems provide many tools for interprocess communication (IPC). The three most commonly used tools are:
Linux redirection features can be used to detach the default files from stdin, stdout, and stderr and attach other files to them for a single execution of a command. The act of detaching the default files from stdin, stdout, and stderr and attaching other files to them is known as input, output, and error redirection. In this section, we show the syntax of these redirections.
We continue to discuss the API for using FIFOs for IPC between UNIX/Linux processes.
In the last lecture, we started discussing a few UNIX/Linux process management commands. In particular, we discussed the ps and top commands. We now discuss the fg, bg, jobs, and kill commands and the <Ctrl-Z> and <Ctrl-C> key presses.
Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads.
The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. In a uniprocessor system, only one process may run at a time; any other process must wait until the CPU is free and can be rescheduled.
This algorithm associates with each process the length of that process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie. The real difficulty with the SJF algorithm is in knowing the length of the next CPU request. For long-term scheduling in a batch system, we can use as the length the process time limit that a user specifies when submitting the job.
SJF is an optimal algorithm because it decreases the wait times for short processes much more than it increases the wait times for long processes. Let's consider the example shown in Figure 16.1, in which the next CPU bursts of P1, P2, and P3 are 5, 3, and 2, respectively. The first Gantt chart shows execution of the processes according to the longest-job-first algorithm, resulting in waiting times of 0, 5, and 8 time units for P1, P2, and P3. The second Gantt chart shows execution of the processes according to the shortest-job-first algorithm, resulting in waiting times of 5, 2, and 0 for P1, P2, and P3. Note that the waiting time for P2 has decreased from 5 to 2 and that of P3 has decreased from 8 to 0. The wait time for P1 has increased from 0 to 5, which is much smaller than the combined decrease in the wait times for P2 and P3.
Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate processes with different CPU burst characteristics. If a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
Concurrent processes or threads often need access to shared data and shared resources. If there is no controlled access to shared data, it is possible to obtain an inconsistent state of that data. Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes, and hence various process synchronization methods are used. In the producer-consumer problem discussed earlier, the solution presented allows at most one item less than the buffer size to be stored; arriving at a solution in which the buffer can use its entire capacity of N items is not simple. The producer and consumer share the data structure 'buffer' and use the other variables shown below:
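In the classic formulation of this bounded-buffer solution, the shared data is a circular array with two indices; the sketch below shows why one slot always goes unused. The identifiers follow the usual textbook names, but the helper functions are illustrative additions.

```c
/* Sketch of the classic bounded-buffer shared data: a circular array
   with `in` (next free slot, advanced by the producer) and `out`
   (next full slot, advanced by the consumer). */
#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
int in = 0;    /* next free slot (producer writes here) */
int out = 0;   /* next full slot (consumer reads here) */

/* The buffer is empty when in == out. It is treated as full when the
   slot after `in` is `out`: if all N slots were used, the full and
   empty conditions would be indistinguishable, which is why this
   version stores at most N - 1 items. */
int is_empty(void) { return in == out; }
int is_full(void)  { return (in + 1) % BUFFER_SIZE == out; }
```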
We discussed two solutions for the 2-process critical section problem in lecture 19, but neither was acceptable because they did not satisfy the progress condition. Here is a solution to the critical section problem that satisfies all three requirements of a good solution.
In this section, we discuss some simple hardware (CPU) instructions that are available on many systems and can be used to provide synchronization between processes. The critical section problem could be solved simply in a uniprocessor environment if we could forbid interrupts from occurring while a shared variable is being modified. In this manner, we could be sure that the current sequence of instructions would run without preemption, so no unexpected modifications could be made to the shared variable.
In lecture 21 we started discussing the hardware solutions for the critical section problem.
We discussed two possible solutions but realized that whereas both solutions satisfied the
mutual exclusion and bounded waiting conditions, neither satisfied the progress
condition. We now describe a solution that satisfies all three requirements of a solution to
the critical section problem.
The main disadvantage of the semaphores discussed in the previous section is that they all require busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This continual looping is clearly a problem in a real multiprogramming system, where a single CPU is shared among many processes. Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock (because the process spins while waiting for the lock). Spinlocks are useful in multiprocessor systems. The advantage of a spinlock is that no context switch is required when a process must wait on a lock, and a context switch may take considerable time.
There are two kinds of semaphores:
Several possibilities that remedy the deadlock situation discussed in the last lecture are listed below. Each results in a good solution for the problem.
Let us illustrate these concepts by presenting a deadlock free solution to the dining philosophers problem. Recall that a philosopher is allowed to pick up her chopsticks only if both of them are available. To code this solution we need to distinguish among three states in which a philosopher may be. For this purpose we introduce the following data structure:
We can deal with deadlocks in a number of ways:
In addition to the request and assignment edges explained in the previous lectures, we introduce a new type of edge, called a claim edge, to resource allocation graphs. A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future.
If a system does not employ either a deadlock prevention or a deadlock avoidance algorithm then a deadlock may occur. In this environment, the system must provide:
Selection of a memory-management method for a specific system depends on many factors, especially on the hardware design of the system. Recent designs have integrated the hardware and the operating system.
To enable a process to be larger than the amount of memory allocated to it, we can use overlays. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space occupied previously by instructions that are no longer needed. We illustrate the concept of overlays with the example of a two-pass compiler. Here are the various specifications:
This is the generalization of the fixed partition scheme. It is used primarily in a batch environment. This scheme of memory management was first introduced in IBM OS/MVT (multiprogramming with a varying number of tasks). Here are the main characteristics of MVT.
The page size is defined by the CPU hardware. If the size of the logical address space is 2^m and the page size is 2^n addressing units (bytes or words), then the high-order m−n bits of a logical address designate the page number and the n low-order bits designate the offset within the page. Thus, the logical address is as follows:
Memory protection in paging is achieved by associating protection bits with each page.
Another advantage of paging is the possibility of sharing common code. Reentrant (read-only) code pages of a process address space can be shared. If the code is reentrant, it never changes during execution. Thus, two or more processes can execute the same code at the same time. Each process has its own copy of registers and data storage to hold the data for the process's execution. The data for two different processes will, of course, vary for each process. Consider the case when multiple instances of a text editor are invoked. Only one copy of the editor needs to be kept in physical memory. Each user's page table maps onto the same physical copy of the editor, but data pages are mapped onto different frames. Thus, to support 40 users, we need only one copy of the editor, which results in saving total space.
In paged segmentation, we divide every segment in a process into fixed-size pages. We need to maintain a page table per segment, and the CPU's memory management unit must support both segmentation and paging. The following snapshots illustrate these points.
We discussed logical to physical address translation in the real mode operation of the Intel 80386 processor in the last lecture. Here we discuss address translation in the protected mode.
When there is no free frame available, page replacement is required, and we must select a page to be replaced. This can be done via several replacement algorithms, and the major criterion in the selection of a particular algorithm is that we want to minimize the number of page faults. The victim page that is selected depends on the algorithm used; it might be the least recently used page or the most frequently used page, for example.
While a user process is executing, a page fault occurs. The hardware traps to the operating system, which checks its internal tables to see that this page fault is a genuine one rather than an illegal memory access. The operating system determines where the desired page resides on the disk, but then finds that there are no free frames on the free-frame list: all memory is in use.
The operating system has several options at this point. It could terminate the user process. However, demand paging is the operating system's attempt to improve the computer system's utilization and throughput. Users should not be aware that their processes are running on a paged system; paging should be logically transparent to the user. So this option is not the best choice. The operating system could instead swap out a process, but that would reduce the level of multiprogramming. So we explore page replacement. This means that if no free frame is available on a page fault, we replace a page in memory to load the desired page. The page-fault service routine is modified to include page replacement. We can free a frame by writing its contents to swap space and changing the page table to indicate that the page is no longer in memory. The modified page-fault service routine is:
A tree structure prohibits sharing of files. An acyclic graph allows directories to have shared subdirectories and files. The same file may be in two different directories.
The need to protect files is a direct result of the ability to access files. Systems that do not permit access to the files of other users do not need protection. Thus, we could provide complete protection by prohibiting access. Alternatively, we could provide free access with no protection. Both approaches are too extreme for general use. What is needed is controlled access: the file owner/creator should be able to control what can be done to a file, and by whom.
Indexed allocation supports direct access without suffering from external fragmentation because any free block on the disk may satisfy a request for more space. Depending on the disk block size and file system size, a file may need more than one index block. In this case there are two ways of organizing index blocks: