Friday, June 24, 2016

SMU Assignment - (Sem-2) - MCA2050- COMPUTER ARCHITECTURE


Q1.)    Explain the concepts of concurrent and parallel execution.
Ans.)    Concurrent Execution: - In concurrent execution, the key problem is scheduling the competing processes or threads for service (execution) by a server (processor). Scheduling is done as per two rules:

·         Pre-emption rule: This rule governs whether servicing a client can be interrupted and, if so, on what occasions. The pre-emption rule may specify either time sharing or priorities for clients (processes). Time sharing restricts continuous service for each client to a specific duration. Priorities may cause the service of a low-priority client to be interrupted when a higher-priority client requests service.

·         Selection rule: It states how a competing client is selected for service. The selection rule specifies an algorithm to determine a rank from parameters such as priority & time of arrival. The ranks of all competing clients are computed, & the client with the highest rank is scheduled for service.
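
The selection rule above can be sketched in a few lines of Python. The client records and the rank formula (higher priority first, earlier arrival breaking ties) are illustrative assumptions, not part of any real scheduler:

```python
# Sketch of a selection rule: rank clients by priority, breaking
# ties by time of arrival (earlier arrival ranks higher).
# The client tuples and the rank formula are illustrative assumptions.

clients = [
    ("editor", 2, 5.0),    # (name, priority, arrival_time)
    ("backup", 1, 1.0),
    ("compiler", 2, 3.0),
]

def rank(client):
    name, priority, arrival = client
    # Higher priority wins; among equal priorities, earlier arrival wins.
    return (priority, -arrival)

chosen = max(clients, key=rank)
print(chosen[0])  # -> compiler (highest rank is scheduled for service)
```

Here "editor" and "compiler" share the highest priority, so the earlier arrival ("compiler") gets the highest rank.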

Parallel Execution: - Parallel computing is the simultaneous use of multiple computing resources to solve a computational problem. The problem is broken into parts, each part is broken down into a series of instructions, and the instructions from each part execute simultaneously on different CPUs.
On a multiprocessor computer, each process can be assigned a separate processor. If only one processor is available, the effect of parallel processing can be simulated by having the processor run each process in turn for a short time.
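
The idea of splitting one problem into parts that run at the same time can be sketched with Python's standard thread pool. The sum-of-squares task and the four-way split are arbitrary choices for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Break one problem (sum of squares of 0..999) into four parts and
# hand each part to a separate worker, the way parallel execution
# assigns parts to separate CPUs. The task itself is illustrative.

def part_sum(lo, hi):
    return sum(i * i for i in range(lo, hi))

ranges = [(0, 250), (250, 500), (500, 750), (750, 1000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(lambda r: part_sum(*r), ranges))

total = sum(partials)
print(total)  # same result as computing the whole sum serially
```

Combining the partial results gives the same answer as a serial computation; only the work is divided among workers.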
--------------------------------------------------------------------------------------------------------------------------

Q2.)    Explain any five types of addressing modes.
Ans.)    Programs are normally written in a high-level language, which enables the programmer to use constants, local and global variables, pointers and arrays. When translating a high-level language program into assembly language, the computer must be able to implement these constructs using the facilities provided in the instruction set of the computer in which the program will be run. The different ways in which the location of an operand is specified in an instruction are referred to as addressing modes.

The five types of addressing modes are: -

1.      Direct addressing mode
2.      Immediate addressing mode
3.      Register addressing mode
4.      Register Indirect addressing mode
5.      Displacement addressing mode

1.) Direct addressing mode: -

·         EA = A
·         Address field contains address of operand.
·         Effective address (EA) = address field (A) e.g. ADD A
·         Add contents of cell A to accumulator.
·         Look in memory at address A for operand.
·         Single memory reference to access data.
·         No additional calculations to work out effective address.
·         Limited address space

The operand is in a memory location; the address of this location is given explicitly in the instruction. (In some assembly languages, this mode is called Absolute.)
The instruction Move LOC, R2 uses these two modes. Processor registers are used as temporary storage locations, where the data in a register are accessed using the Register mode. The Absolute mode can represent global variables in a program. A declaration such as Integer A, B; in a high-level language program will cause the compiler to allocate a memory location to each of the variables A & B. Whenever they are referenced later in the program, the compiler can generate assembly language instructions that use the Absolute mode to access these variables. Next, let us consider the representation of constants. Address & data constants can be represented in assembly language using the immediate mode.
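
Direct addressing can be simulated with memory modelled as a Python dictionary: the address field A names the operand's location directly, so a single lookup fetches the operand. The addresses and values below are made up for the sketch:

```python
# Simulate direct addressing: the instruction carries address A,
# and the operand is fetched with a single memory reference.
# Memory contents and the address 0x10 are illustrative assumptions.

memory = {0x10: 7, 0x20: 42}
accumulator = 5

A = 0x10             # address field of "ADD A"
operand = memory[A]  # EA = A: one memory reference, no extra calculation
accumulator += operand
print(accumulator)   # -> 12
```

Note that no arithmetic is needed to form the effective address; that is what makes the mode simple but limits its address space to what the address field can hold.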

2.) Immediate Addressing Mode: -

The operand is actually present in the instruction.

Operand = A

·         Can be used to define & use constants, or set initial values.
·         Operand is part of instruction
·         Operand = address field
·         e.g. ADD 5
·         Add 5 to contents of accumulator
·         5 is operand
·         No memory reference to fetch data
·         Fast
·         Limited range

For example, the instruction Move 200 immediate, R0 places the value 200 in register R0. Clearly, the immediate mode is only used to specify the value of a source operand. Using a subscript to denote the immediate mode is not appropriate in assembly languages. A common convention is to use the sharp sign (#) in front of the value to indicate that this value is to be used as an immediate operand.

Hence, we write the instruction above in the form Move #200, R0. Constant values are used frequently in high-level language programs. For example, the statement

A = B + 6 contains the constant 6. Assuming that A & B have been declared earlier as variables & may be accessed using the Absolute mode, this statement may be compiled as follows:

Move B, R1
Add #6, R1
Move R1, A

Constants are also used in assembly language to increment a counter, test for some bit pattern, & so on.
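
The three-instruction sequence above can be traced with a toy machine state. The variable addresses, register name, and the initial value of B are the sketch's own assumptions:

```python
# Trace the compiled sequence for A = B + 6:
#   Move B, R1   -> R1 = memory[B]      (Absolute mode)
#   Add #6, R1   -> R1 = R1 + 6         (Immediate mode: no memory fetch)
#   Move R1, A   -> memory[A] = R1      (Absolute mode)
# The initial value of B (10) is an illustrative assumption.

memory = {"B": 10, "A": 0}
registers = {"R1": 0}

registers["R1"] = memory["B"]      # Move B, R1
registers["R1"] += 6               # Add #6, R1 (the constant 6 is in the instruction)
memory["A"] = registers["R1"]      # Move R1, A

print(memory["A"])  # -> 16
```

The immediate operand 6 never touches memory; it travels inside the instruction itself, which is why immediate mode is fast.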

3.) Register Addressing Mode: -

·         Operand is held in register named in address field
·         EA = R
·         Limited number of registers
·         Very small address field needed
·         Shorter instructions
·         Faster instruction fetch
·         No memory access
·         Very fast execution
·         Very limited address space
·         Multiple registers help performance
·         Requires good assembly programming or compiler writing
                                               
4.) Register Indirect Addressing Mode: -

·         EA = (R)
·         Operand is in the memory cell pointed to by the contents of register R
·         Large address space (2^n)
·         Fewer memory accesses than indirect addressing

5.) Displacement Addressing Mode: -

·         EA = A + (R)
·         Address field holds two values
·         A = base value
·         R = register that holds the displacement
·         Or vice versa
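
Register indirect and displacement addressing can be sketched together with memory as a dictionary. All register and memory contents below are invented for illustration:

```python
# EA = (R):     register indirect - the register holds the operand's address.
# EA = A + (R): displacement - base value A plus the register's contents.
# All addresses and values below are illustrative assumptions.

memory = {100: 11, 104: 22, 108: 33}
R = 100  # register used as a pointer

indirect_operand = memory[R]           # EA = (R)     -> memory[100] = 11
A = 4                                  # base/displacement value from the instruction
displacement_operand = memory[A + R]   # EA = A + (R) -> memory[104] = 22

print(indirect_operand, displacement_operand)  # -> 11 22
```

Displacement mode is what makes array indexing cheap: keeping the base in the instruction and the index in a register (or vice versa) means stepping through an array only requires updating the register.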
--------------------------------------------------------------------------------------------------------------------------

Q3.)    Describe the logical layout of RISC and CISC computers.
Ans.)    An important aspect of computer architecture is the design of the instruction set for the processor. The instruction set chosen for a particular computer determines the way that machine language programs are constructed. Early computers had small and simple instruction sets, forced mainly by the need to minimize the hardware used to implement them. As digital hardware became cheaper with the advent of integrated circuits, computer instructions tended to increase both in number and complexity. Many computers have instruction sets that include more than 100 and sometimes even more than 200 instructions. These computers also employ a variety of data types and a large number of addressing modes. The trend toward computer hardware complexity was influenced by various factors, such as upgrading existing models to provide more customer applications, adding instructions that facilitate the translation from high-level language into machine language programs, and striving to develop machines that move functions from software implementation into hardware implementation. A computer with a large number of instructions is classified as a complex instruction set computer, abbreviated CISC. In the early 1980s, a number of computer designers recommended that computers use fewer instructions with simple constructs so they can be executed much faster within the CPU without having to use memory as often. This type of computer is classified as a reduced instruction set computer or RISC. In this section we introduce the major characteristics of CISC and RISC architectures and then present the instruction set and instruction format of a RISC processor.

Major characteristics of CISC architecture are: -

1.      A large number of instructions – typically from 100 to 250 instructions
2.      Some instructions that perform a specialized task & are used infrequently
3.      A large variety of addressing modes – typically from 5 to 20 different modes
4.      Variable length instruction formats
5.      Instructions that manipulate operands in memory

Major characteristics of RISC architecture are:

1.      Relatively few instructions
2.      Relatively few addressing modes
3.      Memory access limited to load & store instructions
4.      All operations done within the registers of the CPU
5.      Fixed length, easily decoded instruction format
6.      Single cycle instruction execution
7.      Hard-wired rather than micro-programmed control

Other Characteristics attributed to RISC architecture are:

1.      The relatively large number of registers in the processor unit
2.      Use of overlapped register windows to speed up procedure call & return
3.      Efficient instruction pipeline
4.      Compiler support for efficient translation of high-level language programs into machine language programs
-------------------------------------------------------------------------------------------------------------------------

Q4.)    Explain concept of branch handling. What is delayed branching?
Ans.)    A branch is a flow-altering instruction that must be handled in a special manner in pipelined processors. If the branch is taken, control is transferred to the target instruction; if the branch is not taken, the instructions already in the pipeline are used.
When the branch is taken, every instruction in the pipeline, at its various stages, is removed, and fetching of instructions begins at the target address. Because of this, the pipeline works inefficiently for three clock cycles; this loss is called the branch penalty.

Delayed Branching:

An efficient way to reduce the branch penalty is delayed branching (delayed execution). The instruction slot immediately after a branch instruction is executed whether or not the branch is taken; this slot is called the delay slot. The compiler should therefore try to place a useful instruction in the delay slot.

A number of processors, such as MIPS and SPARC make use of delayed execution for procedure calls as well as branching.

If no useful instruction can be moved into the delay slot, a NOP (no-operation) is placed there. Some processors also provide the option to nullify the instruction in the delay slot.
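
The compiler's delay-slot decision can be sketched as: move a preceding independent instruction into the slot after the branch when one is available, otherwise insert a NOP. The tuple encoding of instructions is a made-up convention for this sketch, and a real compiler would also check data dependences before moving an instruction:

```python
# Fill the delay slot after each branch: reuse the preceding instruction
# if possible, otherwise insert a NOP. The ("op", ...) tuples are a
# made-up instruction encoding for this illustration.

def fill_delay_slots(program):
    out = []
    protected = -1  # index of the most recent delay-slot instruction
    for instr in program:
        if instr[0] == "branch":
            # Steal the previous instruction for the delay slot unless it
            # is itself a branch or already sits in a delay slot.
            if out and len(out) - 1 != protected and out[-1][0] != "branch":
                filler = out.pop()
            else:
                filler = ("nop",)
            out.append(instr)
            out.append(filler)  # executes whether or not the branch is taken
            protected = len(out) - 1
        else:
            out.append(instr)
    return out

prog = [("add", "R1"), ("branch", "L1"), ("branch", "L2")]
print(fill_delay_slots(prog))
# -> [('branch', 'L1'), ('add', 'R1'), ('branch', 'L2'), ('nop',)]
```

The first branch gets a useful instruction (the add) in its delay slot; the second has nothing safe to move, so a NOP fills its slot.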
--------------------------------------------------------------------------------------------------------------------------

Q5.)    Explain any five types of vector instructions in detail.
Ans.)    The five types of vector instructions are:

1.      Vector Reduction Instructions
2.      Vector-Scalar Instructions
3.      Vector-Vector Instructions
4.      Vector-Memory Instructions
5.      Gather & Scatter Instructions

1.) Vector Reduction Instructions: - These instructions accept one or two vectors as input & produce a scalar as output.

2.) Vector-Scalar Instructions: - This type of instruction is used to combine a scalar and a vector operand. Example: If A & B are vector registers & f is a function that operates on these operands, then a vector-scalar operation can be shown as: Ai := f(scalar, Bi)

3.) Vector-Vector Instructions: - This type of instruction is used for fetching vector operands from vector registers & producing results in another vector register. Example: If A, B, & C are vector registers & f is a function that operates on these operands, then a vector-vector operation can be shown as: Ai := f(Bi, Ci)

4.) Vector-Memory Instructions: - These instructions correspond to vector load or vector store. Example for vector load: A := f(M), where M denotes memory.
Example for vector store: M := f(A)

5.) Gather & Scatter Instructions: - Gather is an operation that fetches the non-zero elements of a sparse vector from memory. Example: A × V0 := f(M)
Scatter stores a vector into a sparse vector in memory. Example: M := f(A × V0)
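
The five instruction types above can be mimicked with plain Python lists. The "vector registers" A and B, the "memory" M, and the index list are stand-ins invented for the sketch, not real hardware names:

```python
# Mimic the five vector instruction types with lists.
# A, B act as "vector registers"; M acts as "memory".

A = [1, 2, 3, 4]
B = [10, 20, 30, 40]

s = sum(A)                              # vector reduction: vectors in, scalar out
vs = [5 * b for b in B]                 # vector-scalar:    Ai := f(scalar, Bi)
vv = [a + b for a, b in zip(A, B)]      # vector-vector:    Ai := f(Bi, Ci)
M = list(vv)                            # vector store:     M := f(A)
loaded = list(M)                        # vector load:      A := f(M)

sparse = [0, 7, 0, 9]                   # a sparse vector in memory
indices = [i for i, x in enumerate(sparse) if x != 0]
gathered = [sparse[i] for i in indices]  # gather: fetch the non-zero elements

dense = [0] * len(sparse)
for i, x in zip(indices, gathered):      # scatter: store back in sparse positions
    dense[i] = x

print(s, vs, vv, gathered, dense)
```

Gather and scatter rely on an index vector (here `indices`) that records where the non-zero elements live, which is how vector machines handle sparse data.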
--------------------------------------------------------------------------------------------------------------------------

Q6.)    Write short notes on:
a) UMA
b) NUMA

Ans.)(a) UMA: - UMA (Uniform Memory Access) is a shared memory architecture used in parallel computers. All the processors in the UMA model share the physical memory uniformly: access time to a memory location is independent of which processor makes the request or which memory chip contains the data. UMA architectures are often contrasted with Non-Uniform Memory Access (NUMA) architectures. In a UMA architecture, each processor may use a private cache, and peripherals are also shared in some fashion. The UMA model is suitable for general-purpose & time-sharing applications by multiple users. It can also be used to speed up the execution of a single large program in time-critical applications.

(b) NUMA: - NUMA (Non-Uniform Memory Access) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. A processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. NUMA architectures logically follow from scaling up symmetric multiprocessing (SMP) architectures. A NUMA system without cache coherence is more or less equivalent to a cluster.
 ------------------------------------------------------------------------------------------------------------------------ 
