Explore topic-wise MCQs in parallel and high-performance computing.

This section presents curated multiple-choice questions to sharpen your knowledge of parallel computing and support exam preparation.

1.

When one processor has a piece of data and needs to send it to all other processors, the operation is called

A. one-to-all
B. all-to-one
C. point-to-point
D. all of the above
Answer» A. one-to-all
2.

Group communication operations are built using _____ messaging primitives.

A. point-to-point
B. one-to-all
C. all-to-one
D. none
Answer» A. point-to-point
3.

Efficient use of basic communication operations can reduce

A. development effort
B. software quality
C. both
D. none
Answer» A. development effort
4.

Efficient implementation of basic communication operations can improve

A. performance
B. communication
C. algorithm
D. all
Answer» A. performance
5.

Many interactions in practical parallel programs occur in a _____ pattern.

A. well-defined
B. zig-zag
C. reverse
D. straight
Answer» A. well-defined
6.

C(W) __ Θ(W) for cost-optimality (necessary condition).

A. >
B. <
C. <=
D. equals
Answer» D. equals
7.

For a problem consisting of W units of work, p__W processors can be used optimally.

A. <=
B. >=
C. <
D. >
Answer» A. <=
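The cost-optimality conditions C(W) = Θ(W) and p ≤ W can be illustrated with a small sketch. This is a hypothetical model of adding n numbers on p processors, assuming one time unit per addition and per communication step:

```python
import math

def parallel_cost(n, p):
    """Cost (processor-time product) of adding n numbers with p processors."""
    # Each processor sums n/p numbers locally, then the p partial sums are
    # combined in a log2(p)-step reduction (one unit per step, illustrative).
    t_parallel = n / p + math.log2(p)
    return p * t_parallel  # cost C(W) = p * Tp

# The serial work is W = n, so the system stays cost-optimal
# (C(W) = Theta(W)) only while p * log2(p) remains small relative to n.
```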
8.

A parallel algorithm is evaluated by its runtime as a function of

A. the input size,
B. the number of processors,
C. the communication parameters.
D. all
Answer» D. all
9.

The load imbalance problem in parallel Gaussian elimination can be alleviated by using a ____ mapping.

A. acyclic
B. cyclic
C. both
D. none
Answer» B. cyclic
10.

The cost of the parallel algorithm is higher than the sequential run time by a factor of __

A. 3/2
B. 2/3
C. 3*2
D. 2/3+3/2
Answer» A. 3/2
11.

In the Pipelined Execution, steps contain

A. normalization
B. communication
C. elimination
D. all
Answer» D. all
12.

The DNS algorithm of matrix multiplication uses

A. 1d partition
B. 2d partition
C. 3d partition
D. both a,b
Answer» C. 3d partition
13.

How many basic communication operations are used in matrix-vector multiplication?

A. 1
B. 2
C. 3
D. 4
Answer» D. 4
14.

The n × n matrix is partitioned among n² processors such that each processor owns a _____ element.

A. n
B. 2n
C. single
D. double
Answer» C. single
15.

Cost-optimal parallel systems have an efficiency of ___

A. 1
B. n
C. logn
D. complex
Answer» A. 1
16.

The n × n matrix is partitioned among n processors, with each processor storing complete ___ of the matrix.

A. row
B. column
C. both
D. depend on processor
Answer» A. row
17.

Speedup obtained when the problem size is _______ linearly with the number of processing elements.

A. increased
B. constant
C. decreased
D. dependent on problem size
Answer» A. increased
18.

Speedup tends to saturate and efficiency _____ as a consequence of Amdahl’s law.

A. increase
B. constant
C. decreases
D. none
Answer» C. decreases
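The saturation of speedup under Amdahl's law can be checked numerically. The sketch below assumes a fixed serial fraction f and an otherwise perfectly parallelizable workload:

```python
def amdahl_speedup(f, p):
    """Speedup with serial fraction f on p processing elements (Amdahl's law)."""
    return 1.0 / (f + (1.0 - f) / p)

# Speedup is bounded above by 1/f, so it saturates as p grows,
# and efficiency (speedup / p) consequently decreases.
```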
19.

In the scaling characteristics of parallel programs, the serial runtime Ts is

A. increase
B. constant
C. decreases
D. none
Answer» B. constant
20.

The cost of a parallel system is sometimes referred to as ____.

A. work
B. processor-time product
C. both
D. none
Answer» C. both
21.

Mathematically, efficiency is

A. e=s/p
B. e=p/s
C. e*s=p/2
D. e=p+e/e
Answer» A. e=s/p
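Efficiency is E = S/p, the speedup divided by the number of processing elements; a quick check with illustrative numbers:

```python
def efficiency(speedup, p):
    # E = S / p; a cost-optimal system keeps this at Theta(1).
    return speedup / p

# e.g. a speedup of 8 on 16 processing elements gives efficiency 0.5
```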
22.

Which of the following is not true about comparison based sorting algorithms?

A. the minimum possible time complexity of a comparison based sorting algorithm is o(nlogn) for a random input array
B. any comparison based sorting algorithm can be made stable by using position as a criteria when two elements are compared
C. counting sort is not a comparison based sorting algorithm
D. heap sort is not a comparison based sorting algorithm.
Answer» D. heap sort is not a comparison based sorting algorithm.
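Option C above notes that counting sort is not comparison based. A minimal sketch shows why: it never compares two elements against each other, it only tallies key values (assuming small non-negative integer keys):

```python
def counting_sort(a, max_val):
    counts = [0] * (max_val + 1)
    for x in a:                     # tally each key; no element-to-element comparison
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count) # emit each key as many times as it occurred
    return out
```

Because no comparisons occur, the O(n log n) lower bound for comparison-based sorting does not apply to it.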
23.

Which of the following is not a stable sorting algorithm in its typical implementation.

A. insertion sort
B. merge sort
C. quick sort
D. bubble sort
Answer» C. quick sort
24.

Is Best First Search a searching algorithm used in graphs?

A. true
B. false
Answer» A. true
25.

In BFS, how many times a node is visited?

A. once
B. twice
C. equivalent to number of indegree of the node
D. thrice
Answer» A. once
26.

Which of the following is not an application of Breadth First Search?

A. when the graph is a binary tree
B. when the graph is a linked list
C. when the graph is an n-ary tree
D. when the graph is a ternary tree
Answer» C. when the graph is an n-ary tree
27.

Time Complexity of Breadth First Search is? (V – number of vertices, E – number of edges)

A. o(v + e)
B. o(v)
C. o(e)
D. o(v*e)
Answer» A. o(v + e)
28.

Breadth First Search is equivalent to which of the traversal in the Binary Trees?

A. pre-order traversal
B. post-order traversal
C. level-order traversal
D. in-order traversal
Answer» C. level-order traversal
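BFS behaviour can be illustrated with a minimal sketch over an adjacency-list graph: each node is visited exactly once, a binary tree comes out in level order, and every vertex and edge is touched a constant number of times, giving O(V + E):

```python
from collections import deque

def bfs(adj, start):
    """Return nodes in BFS (level) order; each node is enqueued exactly once."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in adj.get(node, []):
            if neighbour not in visited:   # the visited set guarantees one visit
                visited.add(neighbour)
                queue.append(neighbour)
    return order

# On a binary tree, this yields exactly the level-order traversal.
```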
29.

Graph search involves a closed list, where the major operation is a _______

A. sorting
B. searching
C. lookup
D. none of above
Answer» C. lookup
30.

The critical issue in parallel depth-first search algorithms is the distribution of the search space among the processors.

A. true
B. false
Answer» A. true
31.

The search overhead factor of the parallel system is defined as the ratio of the work done by the parallel formulation to that done by the sequential formulation

A. true
B. false
Answer» A. true
32.

If the heuristic is admissible, the BFS finds the optimal solution.

A. true
B. false
Answer» A. true
33.

_____ algorithms use a heuristic to guide search.

A. bfs
B. dfs
C. a and b
D. none of above
Answer» A. bfs
34.

The main advantage of ______ is that its storage requirement is linear in the depth of the state space being searched.

A. bfs
B. dfs
C. a and b
D. none of above
Answer» B. dfs
35.

The average-case complexity of quicksort is O(n log n).

A. true
B. false
Answer» A. true
36.

The performance of quicksort depends critically on the quality of the ______.

A. non-pivot
B. pivot
C. center element
D. length of array
Answer» B. pivot
37.

Quicksort is one of the most common sorting algorithms for sequential computers because of its simplicity, low overhead, and optimal average complexity.

A. true
B. false
Answer» A. true
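Quicksort's simplicity and its dependence on pivot quality can both be seen in a short sketch (a non-in-place variant chosen for clarity, with the middle element as pivot):

```python
def quicksort(a):
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]   # the pivot choice determines how balanced the
                             # split is, and hence the actual running time
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

# Balanced splits give the optimal O(n log n) average case;
# consistently bad pivots degrade this to O(n^2).
```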
38.

Bubble sort is difficult to parallelize since the algorithm has no concurrency.

A. true
B. false
Answer» A. true
39.

The complexity of bubble sort is Θ(n²).

A. true
B. false
Answer» A. true
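The Θ(n²) behaviour of bubble sort follows from its fixed comparison count; the sketch below counts the comparisons explicitly:

```python
def bubble_sort(a):
    a = list(a)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):      # each pass is one element shorter
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# The comparison count is always n*(n-1)/2, i.e. Theta(n^2),
# regardless of the input order.
```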
40.

The fundamental operation of comparison-based sorting is ________.

A. compare-exchange
B. searching
C. sorting
D. swapping
Answer» A. compare-exchange
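A minimal sketch of the compare-exchange primitive, the building block of comparison-based (and many parallel) sorting algorithms:

```python
def compare_exchange(a, i, j):
    """Leave the smaller of a[i], a[j] at index i and the larger at index j."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

# Sorting networks such as odd-even transposition sort are built
# entirely from repeated applications of this single operation.
```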
41.

______ can be comparison-based or noncomparison-based.

A. searching
B. sorting
C. both a and b
D. none of above
Answer» B. sorting
42.

______________ algorithms use auxiliary storage (such as tapes and hard disks) for sorting because the number of elements to be sorted is too large to fit into memory.

A. internal sorting
B. internal searching
C. external sorting
D. external searching
Answer» C. external sorting
43.

In ___________, the number of elements to be sorted is small enough to fit into the process's main memory.

A. internal sorting
B. internal searching
C. external sorting
D. external searching
Answer» A. internal sorting
44.

What makes CUDA code run in parallel?

A. __global__ indicates parallel execution of code
B. main() function indicates parallel execution of code
C. kernel name outside triple angle brackets indicates execution of the kernel n times in parallel
D. first parameter value inside triple angle brackets (n) indicates execution of the kernel n times in parallel
Answer» D. first parameter value inside triple angle brackets (n) indicates execution of the kernel n times in parallel
45.

What do the triple angle brackets in a statement inside the main function indicate?

A. a call from host code to device code
B. a call from device code to host code
C. less than comparison
D. greater than comparison
Answer» A. a call from host code to device code
46.

If variable a is a host variable and dev_a is a device (GPU) variable, select the correct statement to copy input from variable a to variable dev_a:

A. memcpy( dev_a, &a, size );
B. cudaMemcpy( dev_a, &a, size, cudaMemcpyHostToDevice );
C. memcpy( (void*) dev_a, &a, size );
D. cudaMemcpy( (void*) &dev_a, &a, size, cudaMemcpyDeviceToHost );
Answer» B. cudaMemcpy( dev_a, &a, size, cudaMemcpyHostToDevice );
47.

If variable a is a host variable and dev_a is a device (GPU) variable, select the correct statement to allocate memory to dev_a:

A. cudaMalloc( &dev_a, sizeof( int ) )
B. malloc( &dev_a, sizeof( int ) )
C. cudaMalloc( (void**) &dev_a, sizeof( int ) )
D. malloc( (void**) &dev_a, sizeof( int ) )
Answer» C. cudaMalloc( (void**) &dev_a, sizeof( int ) )
48.

A simple kernel for adding two integers: __global__ void add( int *a, int *b, int *c ) { *c = *a + *b; } where __global__ is a CUDA C keyword which indicates that:

A. add() will execute on device, add() will be called from host
B. add() will execute on host, add() will be called from device
C. add() will be called and executed on host
D. add() will be called and executed on device
Answer» A. add() will execute on device, add() will be called from host
49.

Which function runs on Device (i.e. GPU): a) __global__ void kernel (void ) { } b) int main ( void ) { ... return 0; }

A. a
B. b
C. both a,b
D. ---
Answer» A. a
50.

What is the equivalent of general C program with CUDA C: int main(void) { printf("Hello, World!\n"); return 0; }

A. int main ( void ) { kernel <<<1,1>>>(); printf("hello, world!\\n"); return 0; }
B. __global__ void kernel( void ) { } int main ( void ) { kernel <<<1,1>>>(); printf("hello, world!\\n"); return 0; }
C. __global__ void kernel( void ) { kernel <<<1,1>>>(); printf("hello, world!\\n"); return 0; }
D. __global__ int main ( void ) { kernel <<<1,1>>>(); printf("hello, world!\\n"); return 0; }
Answer» B. __global__ void kernel( void ) { } int main ( void ) { kernel <<<1,1>>>(); printf("hello, world!\\n"); return 0; }