This section includes 445 curated multiple-choice questions to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation. Choose a topic below to get started.
| 201. |
The operation in which one processor has a piece of data and needs to send it to everyone is |
| A. | one-to-all |
| B. | all-to-one |
| C. | point-to-point |
| D. | all of the above |
| Answer» A. one-to-all | |
| 202. |
Group communication operations are built using _____ messaging primitives. |
| A. | point-to-point |
| B. | one-to-all |
| C. | all-to-one |
| D. | none |
| Answer» A. point-to-point | |
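For question 202, a minimal sketch of how a one-to-all broadcast (a group operation) can be assembled from nothing but point-to-point sends and receives. MPI is used purely as an illustrative vehicle here; the question names no library, and the naive linear send loop is only one possible scheme (tree-based recursive doubling is the usual optimization).

```c
/* Sketch: a one-to-all broadcast built from point-to-point primitives.
   MPI is an assumed vehicle for illustration; rank 0 owns the data and
   sends it to every other rank with plain MPI_Send/MPI_Recv. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        value = 42;                               /* data owned by the root */
        for (int dest = 1; dest < size; dest++)
            MPI_Send(&value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    printf("rank %d has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}
```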
| 203. |
Efficient use of basic communication operations can reduce |
| A. | development effort |
| B. | software quality |
| C. | both |
| D. | none |
| Answer» A. development effort | |
| 204. |
Efficient implementation of basic communication operations can improve |
| A. | performance |
| B. | communication |
| C. | algorithm |
| D. | all |
| Answer» A. performance | |
| 205. |
Many interactions in practical parallel programs occur in a _____ pattern. |
| A. | well defined |
| B. | zig-zag |
| C. | reverse |
| D. | straight |
| Answer» A. well defined | |
| 206. |
C(W) ___ Θ(W) for optimality (necessary condition). |
| A. | > |
| B. | < |
| C. | <= |
| D. | equals |
| Answer» D. equals | |
| 207. |
For a problem consisting of W units of work, p ___ W processors can be used optimally. |
| A. | <= |
| B. | >= |
| C. | < |
| D. | > |
| Answer» A. <= | |
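Questions 206 and 207 are two halves of the same cost-optimality condition. In the usual textbook notation, where W is the total work and the cost C(W) is the processor-time product p · T_P, the condition reads:

```latex
C(W) \;=\; p \, T_P \;=\; \Theta(W)
\qquad\text{(necessary for cost-optimality)},
\qquad p \;\le\; W .
```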
| 208. |
A parallel algorithm is evaluated by its runtime as a function of |
| A. | the input size, |
| B. | the number of processors, |
| C. | the communication parameters. |
| D. | all |
| Answer» D. all | |
| 209. |
The load imbalance problem in parallel Gaussian elimination can be alleviated by using a ____ mapping. |
| A. | acyclic |
| B. | cyclic |
| C. | both |
| D. | none |
| Answer» B. cyclic | |
| 210. |
The cost of the parallel algorithm is higher than the sequential run time by a factor of ___. |
| A. | 3/2 |
| B. | 2/3 |
| C. | 3*2 |
| D. | 2/3 + 3/2 |
| Answer» A. 3/2 | |
| 211. |
In pipelined execution, the steps contain |
| A. | normalization |
| B. | communication |
| C. | elimination |
| D. | all |
| Answer» D. all | |
| 212. |
The DNS algorithm for matrix multiplication uses |
| A. | 1d partition |
| B. | 2d partition |
| C. | 3d partition |
| D. | both a,b |
| Answer» C. 3d partition | |
| 213. |
How many basic communication operations are used in matrix-vector multiplication? |
| A. | 1 |
| B. | 2 |
| C. | 3 |
| D. | 4 |
| Answer» C. 3 | |
| 214. |
The n × n matrix is partitioned among n² processors such that each processor owns a _____ element. |
| A. | n |
| B. | 2n |
| C. | single |
| D. | double |
| Answer» C. single | |
| 215. |
Cost-optimal parallel systems have an efficiency of ___. |
| A. | 1 |
| B. | n |
| C. | logn |
| D. | complex |
| Answer» A. 1 | |
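For question 215, the standard one-line justification, using the same notation as the other scalability questions (T_S = serial run time, T_P = parallel run time, p = number of processing elements):

```latex
S = \frac{T_S}{T_P}, \qquad
E = \frac{S}{p} = \frac{T_S}{p\,T_P}, \qquad
\text{cost-optimal} \;\Longleftrightarrow\; p\,T_P = \Theta(T_S)
\;\Longrightarrow\; E = \Theta(1).
```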
| 216. |
The n × n matrix is partitioned among n processors, with each processor storing complete ___ of the matrix. |
| A. | row |
| B. | column |
| C. | both |
| D. | depend on processor |
| Answer» A. row | |
| 217. |
Speedup obtained when the problem size is _______ linearly with the number of processing elements. |
| A. | increase |
| B. | constant |
| C. | decreases |
| D. | depend on problem size |
| Answer» A. increase | |
| 218. |
Speedup tends to saturate and efficiency _____ as a consequence of Amdahl’s law. |
| A. | increase |
| B. | constant |
| C. | decreases |
| D. | none |
| Answer» C. decreases | |
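Question 218 is Amdahl's law in one line. With f the inherently serial fraction of the work, the speedup on p processing elements is bounded, so it saturates while efficiency falls:

```latex
S(p) \;=\; \frac{1}{f + \dfrac{1-f}{p}} \;\xrightarrow{\;p \to \infty\;} \frac{1}{f},
\qquad
E(p) \;=\; \frac{S(p)}{p} \;\xrightarrow{\;p \to \infty\;} 0 .
```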
| 219. |
In the scaling characteristics of parallel programs, the serial component Ts is |
| A. | increase |
| B. | constant |
| C. | decreases |
| D. | none |
| Answer» B. constant | |
| 220. |
The cost of a parallel system is sometimes referred to as the ____ product. |
| A. | work |
| B. | processor time |
| C. | both |
| D. | none |
| Answer» C. both | |
| 221. |
Mathematically, efficiency is |
| A. | e=s/p |
| B. | e=p/s |
| C. | e*s=p/2 |
| D. | e=p+e/e |
| Answer» A. e=s/p | |
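A tiny host-side helper tying questions 215–221 together: speedup is S = Ts/Tp and efficiency is E = S/p. The function names and sample timings are made up for illustration.

```c
/* Illustrative only: compute speedup S = Ts/Tp and efficiency E = S/p.
   The timings below are hypothetical. */
#include <stdio.h>

static double speedup(double ts, double tp)            { return ts / tp; }
static double efficiency(double ts, double tp, int p)  { return speedup(ts, tp) / p; }

int main(void) {
    double ts = 64.0, tp = 10.0;   /* hypothetical serial and parallel run times */
    int p = 8;
    printf("S = %.2f, E = %.2f\n", speedup(ts, tp), efficiency(ts, tp, p));
    return 0;                      /* prints: S = 6.40, E = 0.80 */
}
```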
| 222. |
Which of the following is not true about comparison based sorting algorithms? |
| A. | the minimum possible time complexity of a comparison based sorting algorithm is o(nlogn) for a random input array |
| B. | any comparison based sorting algorithm can be made stable by using position as a criteria when two elements are compared |
| C. | counting sort is not a comparison based sorting algorithm |
| D. | heap sort is not a comparison based sorting algorithm. |
| Answer» D. heap sort is not a comparison based sorting algorithm. | |
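Option (b) of question 222 is worth seeing in code: any comparison sort can be made stable by breaking ties on the element's original position. A minimal sketch with the C library's qsort (which by itself gives no stability guarantee); the struct and data are illustrative.

```c
/* Making an unstable comparison sort behave stably (question 222, option b):
   when two keys are equal, compare the elements' original positions. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { int key; int pos; } item;        /* pos = index before sorting */

static int cmp(const void *a, const void *b) {
    const item *x = a, *y = b;
    if (x->key != y->key)
        return (x->key > y->key) - (x->key < y->key);
    return (x->pos > y->pos) - (x->pos < y->pos); /* tie-break keeps original order */
}

int main(void) {
    item v[] = { {3, 0}, {1, 1}, {3, 2}, {2, 3} };
    qsort(v, 4, sizeof v[0], cmp);
    for (int i = 0; i < 4; i++)
        printf("(key=%d, pos=%d) ", v[i].key, v[i].pos);
    printf("\n");   /* the equal keys 3 keep pos 0 before pos 2 */
    return 0;
}
```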
| 223. |
Which of the following is not a stable sorting algorithm in its typical implementation. |
| A. | insertion sort |
| B. | merge sort |
| C. | quick sort |
| D. | bubble sort |
| Answer» C. quick sort | |
| 224. |
Is Best First Search a searching algorithm used in graphs? |
| A. | true |
| B. | false |
| Answer» A. true | |
| 225. |
In BFS, how many times a node is visited? |
| A. | once |
| B. | twice |
| C. | equivalent to number of indegree of the node |
| D. | thrice |
| Answer» C. equivalent to number of indegree of the node | |
| 226. |
Which of the following is not an application of Breadth First Search? |
| A. | when the graph is a binary tree |
| B. | when the graph is a linked list |
| C. | when the graph is a n-ary tree |
| D. | when the graph is a ternary tree |
| Answer» B. when the graph is a linked list | |
| 227. |
Time Complexity of Breadth First Search is? (V – number of vertices, E – number of edges) |
| A. | o(v + e) |
| B. | o(v) |
| C. | o(e) |
| D. | o(v*e) |
| Answer» A. o(v + e) | |
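For question 227, a compact adjacency-list BFS: every vertex enters the queue at most once and every edge is scanned at most once, which is where the O(V + E) bound comes from. The small graph is made up for illustration.

```c
/* BFS over adjacency lists: each vertex is enqueued once, each edge scanned
   once, so the running time is O(V + E) (question 227). */
#include <stdio.h>

#define V 5
/* hypothetical graph; each row is a neighbour list terminated by -1 */
static const int adj[V][V] = { {1, 2, -1}, {0, 3, -1}, {0, 3, -1},
                               {1, 2, 4, -1}, {3, -1} };

static void bfs(int src) {
    int visited[V] = {0}, queue[V], head = 0, tail = 0;
    visited[src] = 1;
    queue[tail++] = src;
    while (head < tail) {
        int u = queue[head++];           /* dequeue: happens once per vertex */
        printf("%d ", u);
        for (int i = 0; adj[u][i] != -1; i++) {
            int w = adj[u][i];           /* scanned once per edge endpoint */
            if (!visited[w]) { visited[w] = 1; queue[tail++] = w; }
        }
    }
    printf("\n");                        /* prints: 0 1 2 3 4 */
}

int main(void) { bfs(0); return 0; }
```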
| 228. |
Breadth First Search is equivalent to which traversal of a binary tree? |
| A. | pre-order traversal |
| B. | post-order traversal |
| C. | level-order traversal |
| D. | in-order traversal |
| Answer» C. level-order traversal | |
| 229. |
Graph search involves a closed list, where the major operation is a _______ |
| A. | sorting |
| B. | searching |
| C. | lookup |
| D. | none of above |
| Answer» C. lookup | |
| 230. |
The critical issue in parallel depth-first search algorithms is the distribution of the search space among the processors. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 231. |
The search overhead factor of the parallel system is defined as the ratio of the work done by the parallel formulation to that done by the sequential formulation |
| A. | true |
| B. | false |
| Answer» A. true | |
| 232. |
If the heuristic is admissible, the BFS finds the optimal solution. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 233. |
_____ algorithms use a heuristic to guide search. |
| A. | bfs |
| B. | dfs |
| C. | a and b |
| D. | none of above |
| Answer» A. bfs | |
| 234. |
The main advantage of ______ is that its storage requirement is linear in the depth of the state space being searched. |
| A. | bfs |
| B. | dfs |
| C. | a and b |
| D. | none of above |
| Answer» B. dfs | |
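For question 234, a recursive DFS sketch: at any moment the only state held is one stack frame per level of the current path, so the storage requirement grows linearly with the depth of the state space being searched. The graph is illustrative.

```c
/* DFS storage is linear in the search depth (question 234): the recursion
   stack holds exactly one frame per level of the current path. */
#include <stdio.h>

#define V 5
static const int adj[V][V] = { {1, 2, -1}, {3, -1}, {3, -1}, {4, -1}, {-1} };
static int visited[V];

static void dfs(int u, int depth) {
    visited[u] = 1;
    printf("visit %d at depth %d\n", u, depth);
    for (int i = 0; adj[u][i] != -1; i++)
        if (!visited[adj[u][i]])
            dfs(adj[u][i], depth + 1);   /* one extra stack frame per level */
}

int main(void) { dfs(0, 0); return 0; }
```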
| 235. |
The complexity of quicksort is O(n log n). |
| A. | true |
| B. | false |
| Answer» A. true | |
| 236. |
The performance of quicksort depends critically on the quality of the ______. |
| A. | non-pivot |
| B. | pivot |
| C. | center element |
| D. | len of array |
| Answer» B. pivot | |
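For question 236, a standard Lomuto-partition quicksort. The recursion splits around wherever the pivot lands, so a consistently poor pivot (one that always ends up near an end of the range) pushes the average O(n log n) behaviour toward O(n²), which is why pivot quality is critical. The sample array is illustrative.

```c
/* Quicksort with Lomuto partitioning; the pivot's final position decides how
   balanced the two recursive calls are (question 236). */
#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

static int partition(int a[], int lo, int hi) {
    int pivot = a[hi], i = lo;           /* last element chosen as pivot */
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) swap(&a[i++], &a[j]);
    swap(&a[i], &a[hi]);
    return i;                            /* pivot's final position */
}

static void quicksort(int a[], int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quicksort(a, lo, p - 1);
    quicksort(a, p + 1, hi);
}

int main(void) {
    int a[] = { 5, 2, 9, 1, 7 };
    quicksort(a, 0, 4);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);
    printf("\n");                        /* prints: 1 2 5 7 9 */
    return 0;
}
```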
| 237. |
Quicksort is one of the most common sorting algorithms for sequential computers because of its simplicity, low overhead, and optimal average complexity. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 238. |
Bubble sort is difficult to parallelize since the algorithm has no concurrency. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 239. |
The complexity of bubble sort is Θ(n²). |
| A. | true |
| B. | false |
| Answer» A. true | |
| 240. |
The fundamental operation of comparison-based sorting is ________. |
| A. | compare-exchange |
| B. | searching |
| C. | sorting |
| D. | swapping |
| Answer» A. compare-exchange | |
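Questions 238–240 meet in one small sketch: compare-exchange is the fundamental primitive, and applying it to independent pairs gives odd-even transposition sort, the usual parallel reformulation of bubble sort. The array is illustrative.

```c
/* compare_exchange (question 240) driving odd-even transposition sort; the
   pairs inside each phase are independent and could run in parallel. */
#include <stdio.h>

static void compare_exchange(int *a, int *b) {
    if (*a > *b) { int t = *a; *a = *b; *b = t; }    /* smaller value goes first */
}

static void odd_even_sort(int a[], int n) {
    for (int phase = 0; phase < n; phase++)          /* n phases suffice */
        for (int i = phase % 2; i + 1 < n; i += 2)
            compare_exchange(&a[i], &a[i + 1]);      /* disjoint pairs */
}

int main(void) {
    int a[] = { 4, 3, 9, 2, 1, 6 };
    odd_even_sort(a, 6);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);
    printf("\n");                                    /* prints: 1 2 3 4 6 9 */
    return 0;
}
```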
| 241. |
______ can be comparison-based or noncomparison-based. |
| A. | searching |
| B. | sorting |
| C. | both a and b |
| D. | none of above |
| Answer» B. sorting | |
| 242. |
______________ algorithms use auxiliary storage (such as tapes and hard disks) for sorting because the number of elements to be sorted is too large to fit into memory. |
| A. | internal sorting |
| B. | internal searching |
| C. | external sorting |
| D. | external searching |
| Answer» C. external sorting | |
| 243. |
In ___________, the number of elements to be sorted is small enough to fit into the process's main memory. |
| A. | internal sorting |
| B. | internal searching |
| C. | external sorting |
| D. | external searching |
| Answer» A. internal sorting | |
| 244. |
What makes CUDA code run in parallel? |
| A. | __global__ indicates parallel execution of code |
| B. | main() function indicates parallel execution of code |
| C. | kernel name outside triple angle bracket indicates execution of kernel n times in parallel |
| D. | first parameter value inside triple angle bracket (n) indicates execution of kernel n times in parallel |
| Answer» D. first parameter value inside triple angle bracket (n) indicates execution of kernel n times in parallel | |
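For question 244, a minimal sketch of the launch configuration doing the work: the first value inside the triple angle brackets is the number of blocks, so <<<N,1>>> runs N copies of the kernel in parallel, each identified by blockIdx.x. The kernel and variable names are illustrative.

```cuda
/* <<<N,1>>> launches N parallel blocks; blockIdx.x tells each copy which
   element it owns (question 244). */
#include <stdio.h>

#define N 8

__global__ void scale(int *data) {
    data[blockIdx.x] *= 2;               /* one block handles one element */
}

int main(void) {
    int h[N], *d;
    for (int i = 0; i < N; i++) h[i] = i;

    cudaMalloc((void **)&d, N * sizeof(int));
    cudaMemcpy(d, h, N * sizeof(int), cudaMemcpyHostToDevice);

    scale<<<N, 1>>>(d);                   /* N kernel instances in parallel */

    cudaMemcpy(h, d, N * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);

    for (int i = 0; i < N; i++) printf("%d ", h[i]);
    printf("\n");                         /* prints: 0 2 4 6 8 10 12 14 */
    return 0;
}
```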
| 245. |
What does a triple angle bracket mark in a statement inside the main function indicate? |
| A. | a call from host code to device code |
| B. | a call from device code to host code |
| C. | less than comparison |
| D. | greater than comparison |
| Answer» A. a call from host code to device code | |
| 246. |
If variable a is host variable and dev_a is a device (GPU) variable, to copy input from variable a to variable dev_a select correct statement: |
| A. | memcpy( dev_a, &a, size); |
| B. | cudaMemcpy( dev_a, &a, size, cudaMemcpyHostToDevice ); |
| C. | memcpy( (void*) dev_a, &a, size); |
| D. | cudaMemcpy( (void*) &dev_a, &a, size, cudaMemcpyDeviceToHost ); |
| Answer» B. cudaMemcpy( dev_a, &a, size, cudaMemcpyHostToDevice ); | |
| 247. |
If variable a is host variable and dev_a is a device (GPU) variable, to allocate memory to dev_a select correct statement: |
| A. | cudaMalloc( &dev_a, sizeof( int ) ) |
| B. | malloc( &dev_a, sizeof( int ) ) |
| C. | cudaMalloc( (void**) &dev_a, sizeof( int ) ) |
| D. | malloc( (void**) &dev_a, sizeof( int ) ) |
| Answer» C. cudaMalloc( (void**) &dev_a, sizeof( int ) ) | |
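Questions 246 and 247 are the two halves of the standard host-side pattern: allocate device memory with cudaMalloc (note the void** cast of the pointer's address), copy with cudaMemcpy and an explicit direction flag, then free. A minimal round-trip sketch:

```cuda
/* Host-side pattern behind questions 246-247: cudaMalloc + cudaMemcpy with a
   direction flag + cudaFree. */
#include <stdio.h>

int main(void) {
    int a = 7, result = 0;
    int *dev_a = NULL;

    cudaMalloc((void **)&dev_a, sizeof(int));                     /* question 247 */
    cudaMemcpy(dev_a, &a, sizeof(int), cudaMemcpyHostToDevice);   /* question 246 */

    /* ... a kernel operating on dev_a would normally be launched here ... */

    cudaMemcpy(&result, dev_a, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev_a);

    printf("round trip: %d\n", result);   /* prints: round trip: 7 */
    return 0;
}
```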
| 248. |
A simple kernel for adding two integers: __global__ void add( int *a, int *b, int *c ) { *c = *a + *b; } where __global__ is a CUDA C keyword which indicates that: |
| A. | add() will execute on device, add() will be called from host |
| B. | add() will execute on host, add() will be called from device |
| C. | add() will be called and executed on host |
| D. | add() will be called and executed on device |
| Answer» A. add() will execute on device, add() will be called from host | |
| 249. |
Which function runs on Device (i.e. GPU): a) __global__ void kernel (void ) { } b) int main ( void ) { ... return 0; } |
| A. | a |
| B. | b |
| C. | both a,b |
| D. | --- |
| Answer» A. a | |
| 250. |
What is the CUDA C equivalent of the general C program: int main(void) { printf("Hello, World!\n"); return 0; } |
| A. | int main ( void ) { kernel <<<1,1>>>(); printf("hello, world!\n"); return 0; } |
| B. | __global__ void kernel( void ) { } int main ( void ) { kernel <<<1,1>>>(); printf("hello, world!\n"); return 0; } |
| C. | __global__ void kernel( void ) { kernel <<<1,1>>>(); printf("hello, world!\n"); return 0; } |
| D. | __global__ int main ( void ) { kernel <<<1,1>>>(); printf("hello, world!\n"); return 0; } |
| Answer» B. __global__ void kernel( void ) { } int main ( void ) { kernel <<<1,1>>>(); printf("hello, world!\n"); return 0; } | |
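Option (b) of question 250 keeps the printf on the host. A slightly extended sketch (an assumption beyond the question) also prints from the device, which is why a cudaDeviceSynchronize() is added before main returns so the device-side output is flushed:

```cuda
/* Variation on question 250: the empty kernel of option (b), plus a device-side
   printf to show where a synchronization point becomes necessary. */
#include <stdio.h>

__global__ void kernel(void) {
    printf("hello from the device\n");    /* device-side printf */
}

int main(void) {
    kernel<<<1, 1>>>();                    /* asynchronous kernel launch */
    cudaDeviceSynchronize();               /* wait so device output is flushed */
    printf("Hello, World!\n");             /* host-side printf */
    return 0;
}
```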