Explore topic-wise MCQs in Master of Science in Computer Science (M.Sc CS).

This section includes 104 curated multiple-choice questions (MCQs) to sharpen your Master of Science in Computer Science (M.Sc CS) knowledge and support exam preparation. Choose a topic below to get started.

1.

Functional Decomposition:

A. Partitioning in which the data associated with the problem is decomposed. Each parallel task then works on a portion of the data.
B. Partitioning in which the focus is on the computation to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
C. It is the time it takes to send a minimal (0 byte) message from point A to point B.
D. None of these
Answer» B. Partitioning in which the focus is on the computation to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
2.

Here a single program is executed by all tasks simultaneously. At any moment in time, tasks can be executing the same or different instructions within the same program. These programs usually have the necessary logic programmed into them to allow different tasks to branch or conditionally execute only those parts of the program they are designed to execute.

A. Single Program Multiple Data (SPMD)
B. Multiple Program Multiple Data (MPMD)
C. Von Neumann Architecture
D. None of these
Answer» A. Single Program Multiple Data (SPMD)
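
The SPMD pattern described above can be made concrete with a short sketch. This is a minimal illustration using Python's multiprocessing module; the worker function and the rank/size naming are illustrative conventions borrowed from MPI-style programs, not part of any specific library.

```python
# SPMD sketch: every task runs the same program and branches on its rank.
from multiprocessing import Process

def worker(rank, size):
    # All tasks execute this same program; conditional logic selects
    # which part each task actually performs.
    if rank == 0:
        print(f"rank {rank}/{size}: coordinating")
    else:
        print(f"rank {rank}/{size}: computing")

if __name__ == "__main__":
    size = 4
    procs = [Process(target=worker, args=(r, size)) for r in range(size)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```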
3.

Parallel Execution

A. A sequential execution of a program, one statement at a time
B. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
C. A program or set of instructions that is executed by a processor.
D. None of these
Answer» B. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
4.

Coarse-grain Parallelism

A. In parallel computing, it is a qualitative measure of the ratio of computation to communication
B. Here relatively small amounts of computational work are done between communication events
C. Relatively large amounts of computational work are done between communication / synchronization events
D. None of these
Answer» C. Relatively large amounts of computational work are done between communication / synchronization events
5.

Serial Execution

A. A sequential execution of a program, one statement at a time
B. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
C. A program or set of instructions that is executed by a processor.
D. None of these
Answer» A. A sequential execution of a program, one statement at a time
6.

Load balancing is

A. Involves only those tasks executing a communication operation
B. It exists between program statements when the order of statement execution affects the results of the program.
C. It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
D. None of these
Answer» C. It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
7.

Uniform Memory Access (UMA) referred to

A. Here all processors have equal access and access times to memory
B. Here if one processor updates a location in shared memory, all the other processors know about the update.
C. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
D. None of these
Answer» A. Here all processors have equal access and access times to memory
8.

Point-to-point communication referred to

A. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
B. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
C. It allows tasks to transfer data independently from one another.
D. None of these
Answer» B. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
9.

Asynchronous communications

A. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
B. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
C. It allows tasks to transfer data independently from one another.
D. None of these
Answer» C. It allows tasks to transfer data independently from one another.
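
For option C above, here is a minimal sketch of asynchronous (non-blocking) transfer using Python threads and a queue as the communication channel; the buffering queue stands in for whatever transport a real system would use.

```python
# Asynchronous communication sketch: the sender deposits a message and
# continues working; it never waits for the receiver to accept it.
import queue
import threading
import time

channel = queue.Queue()

def sender():
    channel.put("payload")  # returns immediately; no handshake required
    print("sender: continued without waiting")

def receiver():
    time.sleep(0.1)         # receiver is busy, but the sender is not held up
    print("receiver got:", channel.get())

threads = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```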
10.

In shared memory

A. Multiple processors can operate independently but share the same memory resources
B. Multiple processors can operate independently but do not share the same memory resources
C. Multiple processors can operate independently but some do not share the same memory resources
D. None of these
Answer» A. Multiple processors can operate independently but share the same memory resources
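
A small sketch of the shared-memory model in option A, assuming Python's multiprocessing primitives: independent processes update one counter that lives in memory visible to all of them.

```python
# Shared-memory sketch: independent processes share one memory location.
from multiprocessing import Process, Value

def bump(counter):
    with counter.get_lock():   # serialize updates to the shared location
        counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)    # an integer placed in shared memory
    procs = [Process(target=bump, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)       # 4: every process saw the same memory
```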
11.

In designing a parallel program, one has to break the problem into discrete chunks of work that can be distributed to multiple tasks. This is known as

A. Decomposition
B. Partitioning
C. Compounding
D. Both A and B
Answer» D. Both A and B
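
As a concrete illustration of decomposition/partitioning, the sketch below splits a data set into discrete, roughly equal chunks, one per task. The partition helper is a hypothetical name written for this example.

```python
# Partitioning sketch: break the problem data into discrete chunks of work.
def partition(data, n_tasks):
    """Yield n_tasks roughly equal contiguous chunks of data."""
    size, rem = divmod(len(data), n_tasks)
    start = 0
    for i in range(n_tasks):
        end = start + size + (1 if i < rem else 0)  # spread the remainder
        yield data[start:end]
        start = end

print(list(partition(list(range(10)), 3)))  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```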
12.

Data dependence is

A. Involves only those tasks executing a communication operation
B. It exists between program statements when the order of statement execution affects the results of the program.
C. It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
D. None of these
Answer» B. It exists between program statements when the order of statement execution affects the results of the program.
13.

Domain Decomposition

A. Partitioning in which the data associated with the problem is decomposed. Each parallel task then works on a portion of the data.
B. Partitioning in which the focus is on the computation to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
C. It is the time it takes to send a minimal (0 byte) message from point A to point B.
D. None of these
Answer» A. Partitioning in which the data associated with the problem is decomposed. Each parallel task then works on a portion of the data.
14.

Synchronous communication operations referred to

A. Involves only those tasks executing a communication operation
B. It exists between program statements when the order of statement execution affects the results of the program.
C. It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered as minimization of task idle time.
D. None of these
Answer» A. Involves only those tasks executing a communication operation
15.

In shared memory:

A. Here all processors access all memory as a global address space
B. Here all processors have individual memory
C. Here some processors access all memory as a global address space and some do not
D. None of these
Answer» A. Here all processors access all memory as a global address space
16.

Massively Parallel

A. Observed speedup of a code which has been parallelized, defined as the ratio of wall-clock time of serial execution to wall-clock time of parallel execution
B. The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
C. Refers to the hardware that comprises a given parallel system - having many processors
D. None of these
Answer» C. Refers to the hardware that comprises a given parallel system - having many processors
17.

Parallel Overhead is

A. Observed speedup of a code which has been parallelized, defined as the ratio of wall-clock time of serial execution to wall-clock time of parallel execution
B. The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
C. Refers to the hardware that comprises a given parallel system - having many processors
D. None of these
Answer» B. The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
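
Observed speedup (option A above) is simply the ratio of serial to parallel wall-clock time; parallel overhead (option B) is one reason it falls short of the processor count. A worked example with illustrative, made-up timings:

```python
# Observed speedup: wall-clock serial time divided by wall-clock parallel time.
serial_time = 12.0     # seconds (illustrative measurement)
parallel_time = 3.5    # seconds on 4 processors, including parallel overhead

speedup = serial_time / parallel_time
print(f"speedup = {speedup:.2f}x")  # about 3.43x, below the ideal 4x
```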
18.

Scalability refers to a parallel system’s (hardware and/or software) ability

A. To demonstrate a proportionate increase in parallel speedup with the removal of some processors
B. To demonstrate a proportionate increase in parallel speedup with the addition of more processors
C. To demonstrate a proportionate decrease in parallel speedup with the addition of more processors
D. None of these
Answer» B. To demonstrate a proportionate increase in parallel speedup with the addition of more processors
19.

Non-Uniform Memory Access (NUMA) is

A. Here all processors have equal access and access times to memory
B. Here if one processor updates a location in shared memory, all the other processors know about the update.
C. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
D. None of these
Answer» C. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
20.

These computers use the stored-program concept. Memory is used to store both program and data instructions, and the central processing unit (CPU) gets instructions and/or data from memory. The CPU decodes the instructions and then performs them sequentially.

A. Single Program Multiple Data (SPMD)
B. Flynn’s taxonomy
C. Von Neumann Architecture
D. None of these
Answer» C. Von Neumann Architecture
21.

Collective communication

A. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
B. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
C. It allows tasks to transfer data independently from one another.
D. None of these
Answer» A. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
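
A sketch of the collective pattern in option A, assuming Python's multiprocessing.Pool: a group of tasks each computes a portion, then the partial results are combined (a reduction), so every member of the group participates in the exchange.

```python
# Collective-communication sketch: all tasks in the group contribute to
# one combined (reduced) result, unlike a point-to-point transfer.
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:
        partials = pool.map(square, range(8))  # scatter work across the group
    print(sum(partials))                       # reduce the partial results: 140
```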
22.

Parallel computing can include

A. Single computer with multiple processors
B. Arbitrary number of computers connected by a network
C. Combination of both A and B
D. None of these
Answer» C. Combination of both A and B
23.

Latency is

A. Partitioning in which the data associated with the problem is decomposed. Each parallel task then works on a portion of the data.
B. Partitioning in which the focus is on the computation to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
C. It is the time it takes to send a minimal (0 byte) message from one point to another
D. None of these
Answer» C. It is the time it takes to send a minimal (0 byte) message from one point to another
24.

Synchronous communications

A. It requires some type of “handshaking” between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level unknown to the programmer.
B. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
C. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
D. It allows tasks to transfer data independently from one another.
Answer» A. It requires some type of “handshaking” between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level unknown to the programmer.
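
The handshaking in option A can be sketched with Python threads; the explicit acknowledgement event below is an illustrative stand-in for whatever rendezvous mechanism a real system provides.

```python
# Synchronous communication sketch: the sender blocks until the receiver
# has confirmed receipt, so the two tasks rendezvous on the transfer.
import queue
import threading

channel = queue.Queue()
ack = threading.Event()

def sender():
    channel.put("payload")
    ack.wait()                      # handshake: wait for the receiver
    print("sender: handshake complete")

def receiver():
    print("receiver got:", channel.get())
    ack.set()                       # confirm receipt back to the sender

threads = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```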
25.

Distributed Memory

A. A computer architecture where all processors have direct access to common physical memory
B. It refers to network based memory access for physical memory that is not common
C. Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed.
Answer» B. It refers to network based memory access for physical memory that is not common
26.

In the threads model of parallel programming

A. A single process can have multiple, concurrent execution paths
B. A single process can have a single, concurrent execution path.
C. Multiple processes can have a single concurrent execution path.
D. None of these
Answer» A. A single process can have multiple, concurrent execution paths
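
A minimal sketch of the threads model using Python's threading module: one process, several concurrent execution paths, all sharing the same address space.

```python
# Threads-model sketch: multiple concurrent execution paths in one process.
import threading

shared = []                # shared state, visible to every thread
lock = threading.Lock()

def path(name):
    with lock:             # guard the shared list against concurrent appends
        shared.append(name)

threads = [threading.Thread(target=path, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)              # e.g. ['t0', 't1', 't2']
```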
27.

Cache Coherent UMA (CC-UMA) is

A. Here all processors have equal access and access times to memory
B. Here if one processor updates a location in shared memory, all the other processors know about the update.
C. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
D. None of these
Answer» B. Here if one processor updates a location in shared memory, all the other processors know about the update.
28.

It distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple.

A. Single Program Multiple Data (SPMD)
B. Flynn’s taxonomy
C. Von Neumann Architecture
D. None of these
Answer» B. Flynn’s taxonomy
29.

These applications typically have multiple executable object files (programs). While the application is being run in parallel, each task can be executing the same or different program as other tasks. All tasks may use different data.

A. Single Program Multiple Data (SPMD)
B. Multiple Program Multiple Data (MPMD)
C. Von Neumann Architecture
D. None of these
Answer» B. Multiple Program Multiple Data (MPMD)
30.

Granularity is

A. In parallel computing, it is a qualitative measure of the ratio of computation to communication
B. Here relatively small amounts of computational work are done between communication events
C. Relatively large amounts of computa- tional work are done between communication / synchronization events
D. None of these
Answer» A. In parallel computing, it is a qualitative measure of the ratio of computation to communication
31.

It is the simultaneous use of multiple compute resources to solve a computational problem

A. Parallel computing
B. Single processing
C. Sequential computing
D. None of these
Answer» A. Parallel computing
32.

Fine-grain Parallelism is

A. In parallel computing, it is a qualitative measure of the ratio of computation to communication
B. Here relatively small amounts of computational work are done between communication events
C. Relatively large amounts of computational work are done between communication / synchronization events
D. None of these
Answer» B. Here relatively small amounts of computational work are done between communication events
33.

Shared Memory is

A. A computer architecture where all processors have direct access to common physical memory
B. It refers to network based memory access for physical memory that is not common.
C. Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed.
Answer» A. A computer architecture where all processors have direct access to common physical memory
34.

In CISC architecture most of the complex instructions are stored in _____.

A. Register
B. Diodes
C. CMOS
D. Transistors
Answer» D. Transistors
35.

Pipe-lining is a unique feature of _______.

A. RISC
B. CISC
C. ISA
D. IANA
Answer» A. RISC
36.

Out of the following, which is not a CISC machine?

A. IBM 370/168
B. VAX 11/780
C. Intel 80486
D. Motorola A567
Answer» D. Motorola A567
37.

Both the CISC and RISC architectures have been developed to reduce the______.

A. Cost
B. Time delay
C. Semantic gap
D. All of the above
Answer» C. Semantic gap
38.

The iconic feature of the RISC machine among the following is

A. Reduced number of addressing modes
B. Increased memory size
C. Having a branch delay slot
D. All of the above
Answer» C. Having a branch delay slot
39.

Sun Microsystems processors usually follow _____ architecture.

A. CISC
B. ISA
C. UltraSPARC
D. RISC
Answer» D. RISC
40.

The computer architecture aimed at reducing the time of execution of instructions is ________.

A. CISC
B. RISC
C. ISA
D. ANNA
Answer» B. RISC
41.

As of 2000, the reference system used to find the SPEC rating is built with a _____ processor.

A. Intel Atom SPARC 300 MHz
B. UltraSPARC-III 300 MHz
C. AMD Neutrino series
D. ASUS A series 450 MHz
Answer» B. UltraSPARC-III 300 MHz
42.

CISC stands for,

A. Complete instruction sequential compilation
B. Computer integrated sequential compiler
C. Complex instruction set computer
D. Complex instruction sequential compilation
Answer» C. Complex instruction set computer
43.

If the instruction Add R1,R2,R3 is executed in a pipelined system, then the value of S is ________ (where S is a term of the basic performance equation).

A. 3
B. ~2
C. ~1
D. 6
Answer» C. ~1
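
For context, the basic performance equation referred to above is T = (N × S) / R: program execution time T in terms of instruction count N, average steps per instruction S, and clock rate R. Pipelining overlaps instructions so that a result completes nearly every cycle, making the effective S approach 1. A worked example with illustrative numbers:

```python
# Basic performance equation: T = (N * S) / R.
N = 1_000_000   # instruction count (illustrative)
S = 1           # effective steps per instruction with pipelining
R = 1.25e9      # clock rate in cycles per second

T = (N * S) / R
print(f"T = {T:.2e} s")  # 8.00e-04 s for this illustrative program
```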
44.

If a processor clock is rated as 1250 million cycles per second, then its clock period is ________ .

A. 1.9 * 10 ^ -10 sec
B. 1.6 * 10 ^ -9 sec
C. 1.25 * 10 ^ -10 sec
D. 8 * 10 ^ -10 sec
Answer» D. 8 * 10 ^ -10 sec
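
The keyed answer follows directly from taking the reciprocal of the clock rate:

```python
# Clock period is the reciprocal of the clock rate.
rate = 1250e6               # 1250 million cycles per second
period = 1 / rate
print(f"{period:.1e} s")    # 8.0e-10 sec, matching option D
```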
45.

The average number of steps taken to execute a set of instructions can be made less than one by following _______.

A. ISA
B. Pipe-lining
C. Super-scaling
D. Sequential
Answer» C. Super-scaling
46.

As of 2000, the reference system used to find the performance of a system is _____.

A. UltraSPARC 10
B. Sun SPARC
C. Sun II
D. None of these
Answer» A. UltraSPARC 10
47.

SPEC stands for,

A. Standard performance evaluation code.
B. System processing enhancing code.
C. System performance evaluation corporation.
D. Standard processing enhancement corporation.
Answer» C. System performance evaluation corporation.
48.

The ultimate goal of a compiler is to,

A. Reduce the clock cycles for a programming task.
B. Reduce the size of the object code.
C. Be versatile.
D. Be able to detect even the smallest of errors.
Answer» A. Reduce the clock cycles for a programming task.
49.

An optimizing Compiler does,

A. Better compilation of the given piece of code.
B. Takes advantage of the type of processor and reduces its process time.
C. Does better memory management.
D. Both A and C
Answer» B. Takes advantage of the type of processor and reduces its process time.
50.

The clock rate of the processor can be improved by,

A. Improving the IC technology of the logic circuits
B. Reducing the amount of processing done in one step
C. By using the overclocking method
D. All of the above
Answer» D. All of the above