MCA2010- OPERATING SYSTEM
Q1.) Differentiate Simple Batch Operating Systems
and Time-sharing Operating systems.
Ans.) Simple Batch Operating Systems: -
In the earliest days, digital computers were usually run from a console. I/O devices consisted of card readers, tape drives and line printers. Direct user interaction with the system did not exist. Users prepared a job consisting of programs, data and control information, and submitted it to an operator, who would execute the job on the computer system. The output appeared after minutes, hours or sometimes days. The user collected the output from the operator; it also included a memory dump. The operating system was very simple, and its major task was to transfer control from one job to another. The operating system was resident in memory.
To speed up processing, jobs with similar needs were batched together and executed as a group. For example, all FORTRAN jobs were batched together for execution, all COBOL jobs were batched together for execution, and so on. Thus the batch operating system came into existence. Even though processing speed increased considerably because of batch processing, the CPU was often idle. This is because of the disparity between the operating speeds of electronic devices like the CPU and the mechanical I/O devices: the CPU operates in the microsecond/nanosecond range, whereas I/O devices work in the second/minute range. To overcome this problem, the concept of SPOOLING came into the picture. Instead of reading from a slow input device like a card reader into memory and then processing the job, the input is first read onto the disk. When the job is processed or executed, its input is read directly from the disk. Similarly, when a job produces output for printing, the output is written into a buffer on the disk and actually printed later. This form of processing is known as spooling, an acronym for Simultaneous Peripheral Operations On-Line. Spooling uses the disk as a large buffer to read as far ahead as possible on input devices and to store output until the output devices are available to accept it.
Spooling overlaps the I/O of one job with the computation of other jobs. For example, the spooler may be reading the input of one job while printing the output of another while a third job executes. Spooling increases the performance of the system by allowing both the faster CPU and the slower I/O devices to work at higher utilization.
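The disk-as-buffer idea above can be sketched in a few lines. This is a hypothetical Python sketch; the job names and the use of a deque to stand in for the disk spool are illustrative, not from the original text:

```python
from collections import deque

# The disk acts as a large buffer between slow peripherals and the fast CPU.
input_spool = deque()   # jobs read ahead from the card reader onto disk
output_spool = deque()  # output held on disk until the printer is free

# Slow input device: jobs are read onto the disk ahead of execution.
for job in ["job1", "job2", "job3"]:
    input_spool.append(job)

# CPU: reads input directly from the disk (fast) and writes output back to disk,
# instead of waiting on the card reader or the printer.
while input_spool:
    job = input_spool.popleft()
    output_spool.append(f"{job}: done")

# Slow printer: drains the output spool whenever it becomes available.
printed = [output_spool.popleft() for _ in range(len(output_spool))]
print(printed)
```

The point of the sketch is that the CPU loop never touches a slow device directly; it only moves work between the two disk-resident queues.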
Time-sharing Operating systems: -
Multi-programmed batch systems are suited to executing large jobs that need little or no user interaction. Interactive jobs, on the other hand, need on-line communication between the user and the system. Time-sharing (multitasking) is a logical extension of multi-programming; it provides interactive use of the system.
CPU scheduling and multi-programming provide each user with one time slice (slot) in a time-shared system. A program in execution is referred to as a process. A process executes for one time slice at a time, and may need more than one time slice to complete. During a time slice a process may finish execution or wait for I/O. The I/O in this case is usually interactive, like a user response from the keyboard or a display on the monitor. The CPU switches between processes at the end of a time slice. This switching is so fast that the user gets the illusion that the CPU is executing only one user’s process.
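The time-slice behaviour described above amounts to round-robin scheduling. The following Python sketch is illustrative only; the process names and burst times are made-up examples:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate the CPU switching between processes, one time slice each."""
    queue = deque(bursts.items())   # (pid, remaining CPU time needed)
    schedule = []                   # order in which processes get the CPU
    while queue:
        pid, remaining = queue.popleft()
        schedule.append(pid)        # this process runs for one time slice
        remaining -= quantum
        if remaining > 0:           # needs more slices: back of the ready queue
            queue.append((pid, remaining))
    return schedule

# P1 needs 3 slices, P2 needs 1, P3 needs 2 (quantum = 1 time unit).
print(round_robin({"P1": 3, "P2": 1, "P3": 2}, quantum=1))
```

Each process that still needs CPU time goes to the back of the ready queue, which is exactly why every user appears to make steady progress.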
Time-sharing operating systems are more complex than multi-programmed operating systems. This is because, as in multi-programming, several jobs must be kept simultaneously in memory, which requires some form of memory management and protection. Jobs may also have to be swapped between main memory and secondary memory. To achieve this, a technique called virtual memory is used.
Virtual memory allows execution of a job that may not be completely in memory. The main idea behind virtual memory is that a program can be larger than physical memory: only the portions needed at any moment must be resident while its execution takes place.
Multi-programming
and Time-sharing are the main concepts in all modern operating systems.
--------------------------------------------------------------------------------------------------------------------------
Q2.) Explain the different process states.
Ans.) A process is executed sequentially, one instruction at a
time. A program is a passive entity. Example: a file on the disk. A process on
the other hand is an active entity. In addition to program code, it includes
the values of the program counter, the contents of the CPU registers, the
global variables in the data section and the contents of the stack that is used
for subroutine calls. In reality, the CPU switches back and forth among
processes.
A process, being an active entity, changes state as execution proceeds. A process can be in any one of the following states:
· New: Process being created.
· Running: Instructions being executed.
· Waiting (Blocked): Process waiting for an event to occur.
· Ready: Process waiting for the CPU.
· Terminated: Process has finished execution.
Logically, the ‘Running’ and ‘Ready’ states are similar: in both cases the process is willing to run, only in the ‘Ready’ state there is temporarily no CPU available for it. The ‘Blocked’ state is different from the ‘Running’ and ‘Ready’ states in that the process cannot run even if the CPU is available.
The above states are somewhat arbitrary and vary between operating systems; certain operating systems delineate process states more finely. It is important to realize that only one process can be running on any processor at any instant, while many processes may be ready or waiting. A state diagram is used to represent the states and the events that trigger a change of state of a process in execution.
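One way to picture the state diagram is as a transition table. The following Python sketch is illustrative; the set of legal transitions simply follows the five states listed above:

```python
# Each state maps to the set of states it may legally move to.
TRANSITIONS = {
    "new":        {"ready"},
    "ready":      {"running"},                       # dispatched by the scheduler
    "running":    {"ready", "waiting", "terminated"},# preempted, blocks, or exits
    "waiting":    {"ready"},                         # awaited event occurs
    "terminated": set(),                             # no way out
}

def move(state, target):
    """Follow one edge of the state diagram, rejecting illegal transitions."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# A typical life of a process: created, scheduled, blocks on I/O, resumes, exits.
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)
```

Note that there is no edge from ‘waiting’ to ‘running’: a blocked process must first become ready and be scheduled again.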
--------------------------------------------------------------------------------------------------------------------------
Q3.) Define Deadlock. Explain necessary
conditions for deadlock.
Ans.) Deadlock occurs when we have a set of processes [not
necessarily all the processes in the system], each holding some resources, each
requesting some resources, and none of them is able to obtain what it needs,
i.e. to make progress. We will usually reason in terms of resources R1,
R2... Rm and processes P1, P2... Pn. A process Pi that is waiting for some
currently unavailable resource is said to be blocked.
Resources can be preemptable or non-preemptable. A resource is preemptable if it can be taken away from the process that is holding it (we can think of the original holder as waiting, frozen, until the resource is returned to it). Memory is an example of a preemptable resource. Of course, one may choose to treat intrinsically preemptable resources as if they were non-preemptable. In our discussion we consider only non-preemptable resources.
Necessary
conditions for deadlock: -
1. Mutual exclusion: At least one of the resources is
non-sharable, that is, only one process at a time can use the resource.
2. Hold and wait: A process exists that is holding on
to at least one resource and waiting for an additional resource held by another
process.
3. No preemption: Resources cannot be preempted, that
is, a resource is released only by the process that is holding it.
4. Circular wait: There exists a set P0, P1… Pn of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is waiting for a resource held by Pn, and Pn is in turn waiting for a resource held by P0.
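The circular-wait condition can be checked mechanically by looking for a cycle in a wait-for graph. A minimal Python sketch (the graph encoding is an assumption for illustration; each process is recorded as waiting on at most one other process):

```python
def has_circular_wait(waits_for):
    """Detect a cycle in a wait-for graph.

    waits_for[p] names the single process that p is blocked on;
    processes that are not blocked simply do not appear as keys."""
    for start in waits_for:
        seen = set()
        p = start
        # Follow the chain of waits until it ends or revisits a process.
        while p in waits_for and p not in seen:
            seen.add(p)
            p = waits_for[p]
        if p == start:          # the chain led back to where we began: a cycle
            return True
    return False

# P0 waits for P1, P1 for P2, P2 for P0: every process is blocked forever.
print(has_circular_wait({"P0": "P1", "P1": "P2", "P2": "P0"}))
# A plain chain with no cycle: P2 can finish, then P1, then P0.
print(has_circular_wait({"P0": "P1", "P1": "P2"}))
```

Real deadlock detectors work on a resource-allocation graph with both processes and resources as nodes; this sketch collapses that to the process-to-process view used in the circular-wait definition above.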
--------------------------------------------------------------------------------------------------------------------------
Q4.) Differentiate Sequential access and Direct
access methods.
Ans.) Sequential Access: -
In this simple access method, information in a file is accessed sequentially, one record after another. To process the ith record, all the i-1 records before it must be accessed first. Sequential access is based on the tape model, a tape being an inherently sequential-access device. Sequential access is best suited where most of the records in a file are to be processed, for example, transaction files.
Direct
Access: -
Sometimes it is not necessary to process every record in a file, or to process records in the order in which they are present. Information in a record is to be accessed only if some key value in that record is known. In all such cases, direct access is used. Direct access is based on the disk, which is a direct-access device and allows random access to any file block. Since a file is a collection of physical blocks, any block, and hence the records in that block, can be accessed directly. For example, master files. Databases are often of this type, since they allow query processing that involves immediate access to large amounts of information. All reservation systems fall into this category. Not all operating systems support direct-access files. Usually files must be defined as sequential or direct at the time of creation and are accessed accordingly later. Sequential access of a direct-access file is possible, but direct access of a sequential file is not.
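The difference between the two methods is easy to see with fixed-size records, where a direct access is just a seek to a computed byte offset. An illustrative Python sketch (the record size and contents are made up):

```python
import os
import tempfile

RECORD_SIZE = 16  # fixed-size records make byte offsets easy to compute

# Build a small file of five fixed-size records.
tmp = tempfile.NamedTemporaryFile(delete=False)
path = tmp.name
with tmp:
    for i in range(5):
        tmp.write(f"record-{i}".ljust(RECORD_SIZE).encode())

with open(path, "rb") as f:
    # Sequential access: record i is reached only after reading records 0..i-1.
    seq = [f.read(RECORD_SIZE).decode().strip() for _ in range(5)]

    # Direct access: jump straight to record 3 by computing its byte offset.
    f.seek(3 * RECORD_SIZE)
    direct = f.read(RECORD_SIZE).decode().strip()

os.remove(path)
print(seq, direct)
```

With variable-length records this offset arithmetic no longer works, which is why real direct-access files rely on fixed-size blocks or an index.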
--------------------------------------------------------------------------------------------------------------------------
Q5.) Differentiate Daisy chain bus arbitration
and Priority encoded bus arbitration.
Ans.) Daisy chain arbitration: Here, the requesting device or devices assert the bus request signal. The bus arbiter returns the bus grant signal, which passes serially through each of the devices that can access the bus. Here, the priority of a device depends solely on its position in the daisy chain. If two or more devices request the bus at the same time, the highest-priority device is granted the bus first, and then the bus grant signal is passed further down the chain. Generally a third signal (bus release) is used to indicate to the bus arbiter that a device has finished its use of the bus. Keeping bus request asserted indicates that another device still wants to use the bus.
Priority encoded arbitration: Here, each device has a request line connected to a centralized arbiter that determines which device will be granted access to the bus. The order may be fixed by the order of connection (priority encoded), or it may be determined by some algorithm preloaded into the arbiter. The following figure shows this type of system. Note that each device has a separate line to the bus arbiter.
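A priority-encoded arbiter reduces to scanning the request lines in priority order. A hypothetical Python sketch, with list indices standing in for the physical request lines:

```python
def priority_arbiter(request_lines):
    """Grant the bus to the highest-priority requesting device.

    request_lines[i] is True if device i is asserting its request line;
    lower index = higher priority (fixed by the order of connection)."""
    for device, requesting in enumerate(request_lines):
        if requesting:
            return device
    return None  # no device is requesting the bus this cycle

# Devices 1 and 2 both request the bus; device 1 is closer in priority order.
print(priority_arbiter([False, True, True]))
print(priority_arbiter([False, False, False]))
```

In hardware this scan is a combinational priority encoder rather than a loop, so the grant is produced in a single cycle regardless of how many devices request.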
--------------------------------------------------------------------------------------------------------------------------
Q6.) Explain LRU page replacement algorithm with
example.
Ans.) LRU page replacement algorithm: -
The main distinction between the FIFO and optimal algorithms is that FIFO uses the time when a page was brought into memory (it looks back), whereas the optimal algorithm uses the time when a page is to be used in the future (it looks ahead). If the recent past is used as an approximation of the near future, then we replace the page that has not been used for the longest period of time. This is the least recently used (LRU) algorithm.
Illustration:
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Memory frames: 3

Frame contents after each page fault:

Frame 1:  7  7  7  2  2  4  4  4  0  1  1  1
Frame 2:     0  0  0  0  0  0  3  3  3  0  0
Frame 3:        1  1  3  3  2  2  2  2  2  7

Number of page faults = 12.
The LRU page replacement algorithm, with 12 page faults, is better than the FIFO algorithm with 15 faults. The problem is how to implement the LRU algorithm: an ordering of the frames by time of last use is required. Two options are feasible:
1. By use of counters
2. By use of a stack
In the first
option using counters, each page table entry is associated with a variable to
store the time when the page was used. When a reference to the page is made,
the contents of the clock are copied to the variable in the page table for that
page. Every page now has the time of last reference to it.
According to the LRU page replacement algorithm, the least recently used page is the page with the smallest value in the variable associated with the clock. Overheads here include a search for the LRU page, and an update of the variable holding the clock contents on every memory reference. In the second option, a stack is used to keep track of the page numbers. A referenced page is always put on top of the stack, so the top of the stack is the most recently used page and the bottom of the stack is the LRU page. Since entries in the middle of the stack must be moved, the stack is best implemented as a doubly linked list. Each update is a bit expensive because of the number of pointers to be changed, but there is no need to search for the LRU page.
The LRU page replacement algorithm does not suffer from Belady’s anomaly. However, both of the above implementations require hardware support, since either the clock variable or the stack must be updated on every memory reference.
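The stack-based option above can be simulated in software to reproduce the fault count from the illustration. A minimal Python sketch, using a list ordered from least to most recently used:

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU using the 'stack' implementation:
    the most recently used page sits at the end of the list."""
    frames = []  # ordered least- to most-recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)      # will be re-appended as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)          # referenced page goes on top of the stack
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12, matching the illustration above
```

The `remove`/`append` pair here plays the role of the doubly-linked-list pointer updates described in the text: moving a page to the top on every reference is what keeps the bottom entry the LRU victim.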
--------------------------------------------------------------------------------------------------------------------------