

This section presents multiple-choice questions on parallel and high-performance computing — collective communication, CUDA, parallel sorting, and graph algorithms — to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation.
1. |
Which of the following is NOT a way of mapping the input wires of the bitonic sorting network to a mesh of processes? |
A. | row major mapping |
B. | column major mapping |
C. | row major snakelike mapping |
D. | row major shuffled mapping |
Answer» B. column major mapping | |
2. |
The time taken by all-to-all broadcast on a mesh is ______. |
A. | t= (ts + twm)(p-1) |
B. | t= ts logp + twm(p-1) |
C. | t= 2ts(√p – 1) - twm(p-1) |
D. | t= 2ts(√p – 1) + twm(p-1) |
Answer» D. t= 2ts(√p – 1) + twm(p-1) | |
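For reference, the closed forms for the ring and mesh can be checked numerically. The sketch below is a minimal Python model (function names and sample parameter values are illustrative, not from the source) that sums the ring's p − 1 steps and the mesh's row and column phases under the usual ts/tw cost model:

```python
import math

def all_to_all_bcast_ring(ts, tw, m, p):
    # Ring: p - 1 steps, each transferring one message of size m.
    return (ts + tw * m) * (p - 1)

def all_to_all_bcast_mesh(ts, tw, m, p):
    # 2-D mesh (sqrt(p) x sqrt(p)): a row phase followed by a column phase.
    q = int(math.isqrt(p))
    row = (ts + tw * m) * (q - 1)        # each node collects sqrt(p) messages in its row
    col = (ts + tw * m * q) * (q - 1)    # columns then move messages of size m*sqrt(p)
    return row + col                      # = 2*ts*(sqrt(p)-1) + tw*m*(p-1)

# The two-phase sum matches the closed form in the question.
p, ts, tw, m = 16, 2.0, 1.0, 4.0
closed = 2 * ts * (math.sqrt(p) - 1) + tw * m * (p - 1)
assert abs(all_to_all_bcast_mesh(ts, tw, m, p) - closed) < 1e-9
```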
3. |
The n × n matrix is partitioned among n2 processors such that each processor owns a _____ element. |
A. | n |
B. | 2n |
C. | single |
D. | double |
Answer» C. single | |
4. |
In all-to-one reduction, data items must be combined piece-wise and the result made available at a ______ processor. |
A. | last |
B. | target |
C. | n-1 |
D. | first |
Answer» B. target | |
5. |
The All-to-All Broadcast and Reduction algorithm on a ring terminates in ______ steps. |
A. | p+1 |
B. | p-1 |
C. | p*p |
D. | p |
Answer» B. p-1 | |
6. |
The time taken by all-to-all broadcast on a ring is ______. |
A. | t= (ts + twm)(p-1) |
B. | t= ts logp + twm(p-1) |
C. | t= 2ts(√p – 1) - twm(p-1) |
D. | t= 2ts(√p – 1) + twm(p-1) |
Answer» A. t= (ts + twm)(p-1) | |
7. |
The complexity of bubble sort is Θ(n²). |
A. | true |
B. | false |
Answer» A. true | |
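A short Python sketch (illustrative, not from the source) makes the Θ(n²) bound concrete by counting comparisons, which always total n(n−1)/2 regardless of the input order:

```python
def bubble_sort(a):
    """Plain bubble sort; returns the sorted list and the comparison count."""
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# The inner loop runs n*(n-1)/2 times in every case, hence Theta(n^2).
sorted_a, c = bubble_sort([5, 1, 4, 2, 8])
assert sorted_a == [1, 2, 4, 5, 8]
assert c == 5 * 4 // 2  # 10 comparisons for n = 5
```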
8. |
In All-to-All Personalized Communication on a ring, the size of the message reduces by ______ at each step. |
A. | p |
B. | m-1 |
C. | p-1 |
D. | m |
Answer» D. m | |
9. |
The n × n matrix is partitioned among n processors, with each processor storing complete ___ of the matrix. |
A. | row |
B. | column |
C. | both |
D. | depend on processor |
Answer» A. row | |
10. |
Systems that do not have parallel processing capabilities are |
A. | sisd |
B. | simd |
C. | mimd |
D. | all of the above |
Answer» A. sisd | |
11. |
CUDA supports ____________ in which code in a single thread is executed by all other threads. |
A. | thread division |
B. | thread termination |
C. | thread abstraction |
D. | none of above |
Answer» C. thread abstraction | |
12. |
Which sorting algorithm do the following steps describe? |
1. procedure X_SORT(n)
2. begin
3.   for i := n - 1 downto 1 do
4.     for j := 1 to i do
5.       compare-exchange(aj, aj+1);
6. end X_SORT
A. | selection sort |
B. | bubble sort |
C. | parallel selection sort |
D. | parallel bubble sort |
Answer» B. bubble sort | |
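The pseudocode above is sequential bubble sort built from compare-exchange steps. A minimal Python transliteration (0-based indices; the helper name is illustrative):

```python
def compare_exchange(a, j):
    # Compare-exchange primitive: put the adjacent pair (a[j], a[j+1]) in order.
    if a[j] > a[j + 1]:
        a[j], a[j + 1] = a[j + 1], a[j]

def x_sort(a):
    """Direct transliteration of the X_SORT pseudocode."""
    n = len(a)
    for i in range(n - 1, 0, -1):   # for i := n-1 downto 1
        for j in range(i):          # for j := 1 to i (shifted to 0-based)
            compare_exchange(a, j)
    return a

assert x_sort([3, 7, 4, 8, 6, 2, 1, 5]) == [1, 2, 3, 4, 5, 6, 7, 8]
```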
13. |
Each warp of a GPU receives a single instruction and “broadcasts” it to all of its threads. It is a ______ operation. |
A. | simd (single instruction multiple data) |
B. | simt (single instruction multiple thread) |
C. | sisd (single instruction single data) |
D. | sist (single instruction single thread) |
Answer» B. simt (single instruction multiple thread) | |
14. |
In an eight-node ring (nodes labeled 0 through 7), node ______ is the source of the broadcast. |
A. | 1 |
B. | 2 |
C. | 8 |
D. | 0 |
Answer» D. 0 | |
15. |
If "X" is the message to broadcast, it initially resides at the source node ______. |
A. | 1 |
B. | 2 |
C. | 8 |
D. | 0 |
Answer» D. 0 | |
16. |
A CUDA program comprises two primary components: a host and a _____. |
A. | gpu kernel |
B. | cpu kernel |
C. | os |
D. | none of above |
Answer» A. gpu kernel | |
17. |
Messages get smaller in ______ and stay constant in ______. |
A. | gather, broadcast |
B. | scatter , broadcast |
C. | scatter, gather |
D. | broadcast, gather |
Answer» B. scatter , broadcast | |
18. |
The BlockPerGrid and ThreadPerBlock parameters are related to the ________ model supported by CUDA. |
A. | host |
B. | kernel |
C. | thread abstraction |
D. | none of above |
Answer» C. thread abstraction | |
19. |
All-to-all personalized communication is also known as ______. |
A. | total exchange |
B. | both of the above |
C. | none of the above |
D. | partial exchange |
Answer» A. total exchange | |
20. |
Speedup tends to saturate and efficiency _____ as a consequence of Amdahl’s law. |
A. | increase |
B. | constant |
C. | decreases |
D. | none |
Answer» C. decreases | |
21. |
Scaled speedup is the speedup obtained when the problem size is ______ linearly with the number of processing elements. |
A. | increase |
B. | constant |
C. | decreases |
D. | depend on problem size |
Answer» A. increase | |
22. |
C(W) ______ Θ(W) for optimality (necessary condition). |
A. | > |
B. | < |
C. | <= |
D. | equals |
Answer» D. equals | |
23. |
How does the number of transistors per chip increase according to Moore's law? |
A. | quadratically |
B. | linearly |
C. | cubicly |
D. | exponentially |
Answer» D. exponentially | |
24. |
All nodes collect ______ messages corresponding to the √p nodes of their respective rows. |
A. | √p |
B. | p |
C. | p+1 |
D. | p-1 |
Answer» A. √p | |
25. |
The time that elapses from the moment the first processor starts to the moment the last processor finishes execution is called ______. |
A. | parallel runtime |
B. | overhead runtime |
C. | excess runtime |
D. | serial runtime |
Answer» A. parallel runtime | |
26. |
The kernel code is identified by the ________ qualifier with a void return type. |
A. | __host__ |
B. | __global__ |
C. | __device__ |
D. | void |
Answer» B. __global__ | |
27. |
CUDA offers the Chevron Syntax to configure and execute a kernel. |
A. | true |
B. | false |
Answer» A. true | |
28. |
DRAM access times have only improved at the rate of roughly ______ % per year over this interval. |
A. | 20 |
B. | 40 |
C. | 50 |
D. | 10 |
Answer» D. 10 | |
29. |
The CUDA hardware programming model supports: a) fully general data-parallel architecture; b) general thread launch; c) global load-store; d) parallel data cache; e) scalar architecture; f) integers, bit operations |
A. | a,c,d,f |
B. | b,c,d,e |
C. | a,d,e,f |
D. | a,b,c,d,e,f |
Answer» D. a,b,c,d,e,f | |
30. |
An important component of best-first search (BFS) algorithms is the |
A. | open list |
B. | closed list |
C. | node list |
D. | mode list |
Answer» A. open list | |
31. |
Identify the load-balancing scheme(s): |
A. | asynchronous round robin |
B. | global round robin |
C. | random polling |
D. | all above methods |
Answer» D. all above methods | |
32. |
A* algorithm is a |
A. | bfs algorithm |
B. | dfs algorithm |
C. | prim's algorithm |
D. | kruskal's algorithm |
Answer» A. bfs algorithm | |
33. |
Best-first search (BFS) algorithms can search both graphs and trees. |
A. | true |
B. | false |
Answer» A. true | |
34. |
Simple backtracking is a depth-first search method that terminates upon finding the first solution. |
A. | true |
B. | false |
Answer» A. true | |
35. |
To solve the all-pairs shortest paths problem, which algorithm(s) is/are used? a) Floyd's algorithm b) Dijkstra's single-source shortest paths c) Prim's algorithm d) Kruskal's algorithm |
A. | a) and c) |
B. | a) and b) |
C. | b) and c) |
D. | c) and d) |
Answer» B. a) and b) | |
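Floyd's algorithm for all-pairs shortest paths can be sketched in a few lines of Python (illustrative; INF marks a missing edge, and the sample matrix is made up for the demo):

```python
INF = float("inf")

def floyd_warshall(w):
    """All-pairs shortest paths on an adjacency matrix w (INF = no edge)."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):            # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

w = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
d = floyd_warshall(w)
assert d[0][2] == 5   # 0 -> 1 -> 2 costs 3 + 2
assert d[0][3] == 6   # 0 -> 1 -> 2 -> 3 beats the direct edge of 7
```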
36. |
A graph can be represented by |
A. | identity matrix |
B. | adjacency matrix |
C. | sparse list |
D. | sparse matrix |
Answer» B. adjacency matrix | |
37. |
The space required to store the adjacency matrix of a graph with n vertices is |
A. | in order of n |
B. | in order of n log n |
C. | in order of n squared |
D. | in order of n/2 |
Answer» C. in order of n squared | |
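A small Python sketch (illustrative) shows why the storage is Θ(n²): the matrix always holds n × n entries no matter how few edges the graph has:

```python
def adjacency_matrix(n, edges):
    """Build an n x n adjacency matrix for an undirected graph."""
    m = [[0] * n for _ in range(n)]
    for u, v in edges:
        m[u][v] = m[v][u] = 1
    return m

m = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3)])
# Storage is n * n entries regardless of the edge count.
assert sum(len(row) for row in m) == 4 * 4
assert m[0][1] == m[1][0] == 1
```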
38. |
A complete graph is a graph in which each pair of vertices is adjacent |
A. | true |
B. | false |
Answer» A. true | |
39. |
In Dijkstra's all-pairs shortest paths, each process computes the single-source shortest paths for all vertices assigned to it in the source-partitioned formulation. |
A. | true |
B. | false |
Answer» A. true | |
40. |
Which formulation of Dijkstra's algorithm exploits more parallelism? |
A. | source-partitioned formulation |
B. | source-parallel formulation |
C. | partitioned-parallel formulation |
D. | all of above |
Answer» B. source-parallel formulation | |
41. |
Which parallel formulation of quicksort is possible? |
A. | shared-address-space parallel formulation |
B. | message passing formulation |
C. | hypercube formulation |
D. | all of the above |
Answer» D. all of the above | |
42. |
In the execution of the hypercube formulation of quicksort for d = 3, the split along the ______ dimension partitions the sequence into two big blocks, one greater than the pivot and the other smaller than the pivot. |
A. | first |
B. | second |
C. | third |
D. | none of above |
Answer» C. third | |
43. |
In parallel quicksort, the pivot selection strategy is crucial for |
A. | maintaining load balance |
B. | maintaining uniform distribution of elements in process groups |
C. | effective pivot selection in next level |
D. | all of the above |
Answer» D. all of the above | |
44. |
Given an array of n elements and p processes, in the message-passing version of parallel quicksort, each process stores ______ elements of the array. |
A. | n*p |
B. | n-p |
C. | p/n |
D. | n/p |
Answer» D. n/p | |
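The n/p block distribution can be sketched in Python (an illustrative helper, not the textbook's message-passing code; when p does not divide n, the remainder is spread over the first ranks):

```python
def block_distribution(n, p):
    """Split indices 0..n-1 into p nearly equal contiguous blocks."""
    base, extra = divmod(n, p)
    blocks, start = [], 0
    for rank in range(p):
        size = base + (1 if rank < extra else 0)
        blocks.append(list(range(start, start + size)))
        start += size
    return blocks

blocks = block_distribution(8, 4)
assert [len(b) for b in blocks] == [2, 2, 2, 2]   # n/p elements per process
assert blocks[0] == [0, 1] and blocks[3] == [6, 7]
```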
45. |
A person wants to visit some places. He starts from a vertex, explores as far as possible along each branch, backtracks, and then explores another branch from the same vertex. Which algorithm should he use? |
A. | bfs |
B. | dfs |
C. | prim's |
D. | kruskal's |
Answer» B. dfs | |
46. |
Time Complexity of DFS is? (V – number of vertices, E – number of edges) |
A. | o(v + e) |
B. | o(v) |
C. | o(e) |
D. | o(v*e) |
Answer» A. o(v + e) | |
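An iterative DFS in Python (illustrative) visits each vertex once and scans each adjacency-list entry once, which is where the O(V + E) bound comes from:

```python
def dfs(adj, start):
    """Iterative depth-first search; returns vertices in visit order."""
    visited, stack, order = set(), [start], []
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        # Push neighbors in reverse so smaller labels are explored first.
        for v in reversed(adj[u]):
            if v not in visited:
                stack.append(v)
    return order

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
assert dfs(adj, 0) == [0, 1, 3, 2]
```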
47. |
In parallel Quick Sort each process divides the unsorted list into |
A. | n lists |
B. | 2 lists |
C. | 4 lists |
D. | n-1 lists |
Answer» B. 2 lists | |
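The two-list split around the pivot can be sketched in Python (an illustrative helper, not the parallel message-passing code itself):

```python
def partition(a, pivot):
    """Split a list into the two sublists used by parallel quicksort:
    elements no greater than the pivot, and elements greater than it."""
    smaller = [x for x in a if x <= pivot]
    larger = [x for x in a if x > pivot]
    return smaller, larger

lo, hi = partition([9, 1, 8, 2, 7, 3], 5)
assert lo == [1, 2, 3] and hi == [9, 8, 7]
```

In the parallel formulation, each process performs this split on its local block, then processes exchange sublists so that one half of the processes holds the "smaller" block and the other half the "larger" block.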
48. |
In parallel Quick Sort Pivot is sent to processes by |
A. | broadcast |
B. | multicast |
C. | selective multicast |
D. | unicast |
Answer» A. broadcast | |
49. |
Shell sort is an improvement on |
A. | quick sort |
B. | bubble sort |
C. | insertion sort |
D. | selection sort |
Answer» C. insertion sort | |
50. |
Odd-even transposition sort is a variation of |
A. | quick sort |
B. | shell sort |
C. | bubble sort |
D. | selection sort |
Answer» C. bubble sort | |
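Odd-even transposition sort in Python (illustrative): each phase performs the independent compare-exchanges of one bubble-sort-like round, so the whole array is sorted after n phases, and on n processes each phase can run in parallel:

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort: n phases of alternating pair exchanges.
    All compare-exchanges within a phase are independent, which is what
    makes this bubble sort variant suitable for parallel execution."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2   # even phase: pairs (0,1),(2,3)...; odd phase: (1,2),(3,4)...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

assert odd_even_transposition_sort([3, 2, 3, 8, 5, 6, 4, 1]) == [1, 2, 3, 3, 4, 5, 6, 8]
```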