Explore topic-wise MCQs in Computer Science Engineering (CSE).

This section includes 445 curated multiple-choice questions (MCQs) to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation. Choose a question below to get started.

151.

Using different links every time and forwarding in parallel again is _____

A. better for congestion
B. better for reduction
C. better for communication
D. better for algorithm
Answer» A. better for congestion
152.

Every node on the linear array has the data and broadcasts on the columns with the linear array algorithm in _____

A. parallel
B. vertical
C. horizontal
D. all
Answer» A. parallel
153.

Accumulating results and sending them with the same pattern is _____

A. broadcast
B. naive approach
C. recursive doubling
D. reduction symmetric
Answer» D. reduction symmetric
154.

The ____ do not snoop the messages going through them.

A. nodes
B. variables
C. tuple
D. list
Answer» A. nodes
155.

All processes that have the data can send it again is _____

A. recursive doubling
B. naive approach
C. reduction
D. all
Answer» A. recursive doubling
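
To make the recursive-doubling idea behind Q155 concrete, here is a minimal sketch in C with MPI point-to-point calls: in each step, every process that already holds the data forwards it, so the number of holders doubles and the broadcast finishes in about log p steps. The root rank 0 and the payload value 42 are illustrative assumptions, not part of the question bank.

    /* Minimal sketch of one-to-all broadcast by recursive doubling (root is rank 0). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, p, data = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        if (rank == 0) data = 42;            /* only the source holds the value at first */

        for (int mask = 1; mask < p; mask <<= 1) {
            if (rank < mask) {
                /* processes that already hold the data forward it */
                int partner = rank + mask;
                if (partner < p)
                    MPI_Send(&data, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
            } else if (rank < 2 * mask) {
                /* processes that receive the data in this step */
                MPI_Recv(&data, 1, MPI_INT, rank - mask, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            }
        }

        printf("rank %d has %d\n", rank, data);
        MPI_Finalize();
        return 0;
    }
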
156.

Using only connections between single pairs of nodes at a time gives _____

A. good utilization
B. poor utilization
C. massive utilization
D. medium utilization
Answer» B. poor utilization
157.

The source ____ is the bottleneck.

A. process
B. algorithm
C. list
D. tuple
Answer» A. process
158.

Reduction can be used to find the sum, product, maximum, minimum of _____ of numbers.

A. tuple
B. list
C. sets
D. all of above
Answer» C. sets
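
As a small illustration of Q158, the sketch below applies MPI's built-in all-to-one reduction to one value per process to obtain the sum and the maximum; MPI_PROD and MPI_MIN cover product and minimum in the same way. The contributed values are made up for the example.

    /* Sketch: all-to-one reduction of one value per process (sum and maximum). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, sum = 0, max = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int local = rank + 1;                    /* each process contributes one number */
        MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        MPI_Reduce(&local, &max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %d, max = %d\n", sum, max);
        MPI_Finalize();
        return 0;
    }
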
159.

The goal of a good algorithm is to implement a commonly used _____ pattern.

A. communication
B. interaction
C. parallel
D. regular
Answer» A. communication
160.

Subsets of processes participate in ______ interaction.

A. global
B. local
C. wide
D. variable
Answer» B. local
161.

All processes participate in a single ______ interaction operation.

A. global
B. local
C. wide
D. variable
Answer» A. global
162.

In collective communication operations, collective means

A. involve group of processors
B. involve group of algorithms
C. involve group of variables
D. none of these
Answer» A. involve group of processors
163.

All-to-all personalized communication can be used in ____

A. fourier transform
B. matrix transpose
C. sample sort
D. all of the above
Answer» D. all of the above
164.

The efficiency of a data-parallel algorithm depends on the

A. efficient implementation of the algorithm
B. efficient implementation of the operation
C. both
D. none
Answer» B. efficient implementation of the operation
165.

Which is also called "Total Exchange" ?

A. all-to-all broadcast
B. all-to-all personalized communication
C. all-to-one reduction
D. none
Answer» B. all-to-all personalized communication
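
For Q163 and Q165, the sketch below performs the total exchange directly with MPI_Alltoall: every process sends a distinct block to every other process, the pattern that matrix transpose, FFT, and sample sort rely on. The block contents are illustrative.

    /* Sketch: all-to-all personalized communication ("total exchange"). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, p;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        int *sendbuf = malloc(p * sizeof(int));
        int *recvbuf = malloc(p * sizeof(int));
        for (int i = 0; i < p; i++)
            sendbuf[i] = rank * 100 + i;         /* block i is destined for process i */

        /* block i of sendbuf goes to process i; block j of recvbuf came from process j */
        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        printf("rank %d: block from rank 0 is %d\n", rank, recvbuf[0]);
        free(sendbuf); free(recvbuf);
        MPI_Finalize();
        return 0;
    }
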
166.

In Scatter Operation on Hypercube, on each step, the size of the messages communicated is ____

A. tripled
B. halved
C. doubled
D. no change
Answer» B. halved
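
As a quick worked example for Q166 (illustrative numbers): scattering on a p = 8 hypercube with m words destined for each node, the source starts with 8m words, sends 4m words across the first dimension, the two holders then send 2m words each, and the four holders finally send m words each, so the message size is halved at every step and the operation finishes in log p = 3 steps.
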
167.

One-to-All Personalized Communication operation is commonly called ___

A. gather operation
B. concatenation
C. scatter operation
D. none
Answer» C. scatter operation
168.

The dual of the scatter operation is the

A. concatenation
B. gather operation
C. both
D. none
Answer» C. both
169.

The all-to-all broadcast on Hypercube needs ____ steps

A. p
B. sqrt(p) - 1
C. log p
D. none
Answer» C. log p
170.

In all-to-all broadcast on a hypercube, the size of the message to be transmitted at the next step is ____ by concatenating the received messages with the current data.

A. doubled
B. tripled
C. halved
D. no change
Answer» A. doubled
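
The doubling in Q170 and the log p step count in Q169 can both be seen in this minimal C/MPI sketch of all-to-all broadcast on a hypercube: in step i each process exchanges everything it has accumulated so far with its neighbour across dimension i and concatenates the result. A power-of-two process count and one integer per process are assumed for illustration.

    /* Sketch: all-to-all broadcast on a hypercube (assumes p is a power of two). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, p;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        int *buf = malloc(p * sizeof(int));
        buf[0] = rank;                           /* each process starts with one item */
        int have = 1;

        for (int mask = 1; mask < p; mask <<= 1) {
            int partner = rank ^ mask;           /* neighbour across this dimension */
            MPI_Sendrecv(buf, have, MPI_INT, partner, 0,
                         buf + have, have, MPI_INT, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            have *= 2;                           /* concatenation doubles the message */
        }

        printf("rank %d now holds %d items\n", rank, have);
        free(buf);
        MPI_Finalize();
        return 0;
    }
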
171.

In the second phase of the 2D mesh all-to-all broadcast, the message size is ___

A. m
B. p*sqrt(m)
C. p
D. m*sqrt(p)
Answer» D. m*sqrt(p)
172.

In the first phase of the 2D mesh all-to-all broadcast, the message size is ___

A. p
B. m*sqrt(p)
C. m
D. p*sqrt(m)
Answer» C. m
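
Taking Q171 and Q172 together, the standard 2D mesh all-to-all broadcast runs in two phases: phase 1 is a rowwise all-to-all broadcast among the sqrt(p) nodes of each row with messages of the original size m, after which every node holds the sqrt(p) consolidated messages of its row, i.e. m*sqrt(p) words; phase 2 is a columnwise all-to-all broadcast of these consolidated messages, so the message size in the second phase is m*sqrt(p).
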
173.

All-to-all broadcast algorithm for the 2D mesh is based on the

A. linear array algorithm
B. ring algorithm
C. both
D. none
Answer» B. ring algorithm
174.

The dual of all-to-all broadcast is

A. all-to-all reduction
B. all-to-one reduction
C. both
D. none
Answer» A. all-to-all reduction
175.

Which is known as Broadcast?

A. one-to-one
B. one-to-all
C. all-to-all
D. all-to-one
Answer» B. one-to-all
176.

Which is known as Reduction?

A. all-to-one
B. all-to-all
C. one-to-one
D. one-to-all
Answer» A. all-to-one
177.

All-to-one communication (reduction) is the dual of ______ broadcast.

A. all-to-all
B. one-to-all
C. one-to-one
D. all-to-one
Answer» B. one-to-all
178.

Communication between two directly linked nodes is called

A. cut-through routing
B. store-and-forward routing
C. nearest neighbour communication
D. none
Answer» C. nearest neighbour communication
179.

The cost of all-to-all broadcast on a mesh is

A. 2ts(sqrt(p) + 1) + twm(p - 1)
B. 2tw(sqrt(p) + 1) + tsm(p - 1)
C. 2tw(sqrt(p) - 1) + tsm(p - 1)
D. 2ts(sqrt(p) - 1) + twm(p - 1)
Answer» D. 2ts(sqrt(p) - 1) + twm(p - 1)
180.

The cost of all-to-all broadcast on a ring is

A. (ts + twm)(p - 1)
B. (ts - twm)(p + 1)
C. (tw + tsm)(p - 1)
D. (tw - tsm)(p + 1)
Answer» A. (ts + twm)(p - 1)
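
A short derivation for Q179 and Q180 in the same ts/tw notation used by the options: on a ring, all-to-all broadcast takes p - 1 steps and each step transfers a size-m message between neighbours, so the total cost is (ts + twm)(p - 1). On a mesh, the rowwise phase is the same exchange over the sqrt(p) nodes of a row, costing (ts + twm)(sqrt(p) - 1); the columnwise phase repeats it with consolidated messages of size m*sqrt(p), costing (ts + twm*sqrt(p))(sqrt(p) - 1). Adding the two phases gives 2ts(sqrt(p) - 1) + twm(p - 1).
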
181.

Broadcast and reduction operations on a mesh are performed

A. along the rows
B. along the columns
C. both a and b concurrently
D. none of these
Answer» C. both a and b concurrently
182.

___ can be performed in an identical fashion by inverting the process.

A. recursive doubling
B. reduction
C. broadcast
D. none of these
Answer» B. reduction
183.

Group communication operations are built using which primitives?

A. one to all
B. all to all
C. point to point
D. none of these
Answer» C. point to point
184.

A communication pattern similar to all-to-all broadcast, except in the _____

A. reverse order
B. parallel order
C. straight order
D. vertical order
Answer» A. reverse order
185.

The gather operation is exactly the inverse of the _____

A. scatter operation
B. recursion operation
C. execution
D. none
Answer» A. scatter operation
186.

In the scatter operation, a ____ node sends a message to every other node.

A. single
B. double
C. triple
D. none
Answer» A. single
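
A minimal C/MPI sketch tying Q185 and Q186 together: a single source scatters one distinct piece to every node, and the dual gather collects the locally updated pieces back at the same node. The piece values and the local update are illustrative.

    /* Sketch: scatter (one node sends a distinct piece to every node) and its dual, gather. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, p, piece;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        int *all = NULL;
        if (rank == 0) {                         /* only the single source owns all pieces */
            all = malloc(p * sizeof(int));
            for (int i = 0; i < p; i++) all[i] = i * i;
        }

        MPI_Scatter(all, 1, MPI_INT, &piece, 1, MPI_INT, 0, MPI_COMM_WORLD);
        piece += rank;                           /* some local work on the received piece */
        MPI_Gather(&piece, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("gathered %d values back at the root\n", p);
            free(all);
        }
        MPI_Finalize();
        return 0;
    }
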
187.

If we port the algorithm to a higher-dimensional network, it would cause

A. error
B. contention
C. recursion
D. none
Answer» B. contention
188.

It is not possible to port the ____ to a higher-dimensional network.

A. algorithm
B. hypercube
C. both
D. none
Answer» A. algorithm
189.

All nodes collect _____ messages corresponding to the √p nodes of their respective rows.

A. √p
B. p
C. p+1
D. p-1
Answer» A. √p
190.

The second communication phase is a columnwise ______ broadcast of the consolidated messages.

A. all-to-all
B. one-to-all
C. all-to-one
D. point-to-point
Answer» A. all-to-all
191.

Each node first sends to one of its neighbours the data it needs to _____.

A. broadcast
B. identify
C. verify
D. none
Answer» A. broadcast
192.

The algorithm terminates in _____ steps

A. p
B. p+1
C. p+2
D. p-1
Answer» D. p-1
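
For the ring behaviour behind Q191 and Q192, here is a minimal C/MPI sketch of all-to-all broadcast on a ring: each process first sends its own item to its right neighbour and then keeps forwarding what it receives, so the loop terminates after p - 1 steps. The item values are illustrative.

    /* Sketch: all-to-all broadcast on a ring, finishing in p - 1 steps. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, p;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        int left = (rank - 1 + p) % p, right = (rank + 1) % p;
        int *items = malloc(p * sizeof(int));
        items[rank] = rank * 10;                 /* the one item this process contributes */

        int outgoing = items[rank];
        for (int step = 1; step < p; step++) {   /* terminates after p - 1 steps */
            int incoming;
            MPI_Sendrecv(&outgoing, 1, MPI_INT, right, 0,
                         &incoming, 1, MPI_INT, left, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            items[(rank - step + p) % p] = incoming;  /* item originated at rank - step */
            outgoing = incoming;                 /* forward it in the next step */
        }

        printf("rank %d collected %d items\n", rank, p);
        free(items);
        MPI_Finalize();
        return 0;
    }
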
193.

A generalization of broadcast in which each processor is

A. source as well as destination
B. only source
C. only destination
D. none
Answer» A. source as well as destination
194.

The logical operators used in the algorithm are

A. xor
B. and
C. both
D. none
Answer» C. both
195.

In broadcast and reduction on a balanced binary tree, reduction is done in ______

A. recursive order
B. straight order
C. vertical order
D. parallel order
Answer» A. recursive order
196.

One-to-all broadcast uses

A. recursive doubling
B. simple algorithm
C. both
D. none
Answer» A. recursive doubling
197.

The processors compute the ______ product of the vector element and the local matrix.

A. local
B. global
C. both
D. none
Answer» A. local
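
A small sketch for Q197, assuming a rowwise 1-D partitioning with one matrix row and one vector element per process (an illustrative setup, not stated in the question): after the vector elements are gathered everywhere, each process computes the local product for its own row.

    /* Sketch: rowwise matrix-vector multiply; each process owns one row and one vector element. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, n;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &n);       /* one matrix row per process */

        double *row = malloc(n * sizeof(double));
        double *x   = malloc(n * sizeof(double));
        for (int j = 0; j < n; j++) row[j] = 1.0; /* illustrative local matrix row */
        double xlocal = rank + 1.0;               /* the locally owned vector element */

        /* gather the full vector on every process before the local products */
        MPI_Allgather(&xlocal, 1, MPI_DOUBLE, x, 1, MPI_DOUBLE, MPI_COMM_WORLD);

        double y = 0.0;                           /* local product for this row */
        for (int j = 0; j < n; j++) y += row[j] * x[j];

        printf("rank %d: y = %f\n", rank, y);
        free(row); free(x);
        MPI_Finalize();
        return 0;
    }
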
198.

The simplest way to send p-1 messages from the source to the other p-1 processors _____

A. algorithm
B. communication
C. concurrency
D. receiver
Answer» C. concurrency
199.

Data items must be combined piece-wise and the result made available at

A. target processor finally
B. target variable finally
C. target receiver finally
Answer» A. target processor finally
200.

The dual of one-to-all broadcast is

A. all-to-one reduction
B. one-to-all reduction
C. point-to-point reduction
D. none
Answer» A. all-to-one reduction