

This section includes 445 MCQs, each offering a curated multiple-choice question to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation. Choose a topic below to get started.
401. |
Which of the following method is used to avoid Interaction Overheads? |
A. | maximizing data locality |
B. | minimizing data locality |
C. | increase memory size |
D. | none of the above. |
Answer» A. maximizing data locality | |
402. |
Which task decomposition technique is suitable for the 15-puzzle problem? |
A. | data decomposition |
B. | exploratory decomposition |
C. | speculative decomposition |
D. | recursive decomposition |
Answer» B. exploratory decomposition | |
403. |
The number and size of tasks into which a problem is decomposed determines the __ |
A. | granularity |
B. | task |
C. | dependency graph |
D. | decomposition |
Answer» A. granularity | |
404. |
Average Degree of Concurrency is... |
A. | the average number of tasks that can run concurrently over the entire duration of execution of the process. |
B. | the average time that can run concurrently over the entire duration of execution of the process. |
C. | the average in degree of task dependency graph. |
D. | the average out degree of task dependency graph. |
Answer» A. the average number of tasks that can run concurrently over the entire duration of execution of the process. | |
405. |
The principal parameters that determine the communication latency are as follows: |
A. | startup time (ts) per-hop time (th) per-word transfer time (tw) |
B. | startup time (ts) per-word transfer time (tw) |
C. | startup time (ts) per-hop time (th) |
D. | startup time (ts) message-packet-size(w) |
Answer» A. startup time (ts) per-hop time (th) per-word transfer time (tw) | |
406. |
Which of the following is an alternative option for latency hiding? |
A. | increase cpu frequency |
B. | multithreading |
C. | increase bandwidth |
D. | increase memory |
Answer» B. multithreading | |
407. |
What is the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements? |
A. | overall time |
B. | speedup |
C. | scaleup |
D. | efficiency |
Answer» B. speedup | |
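As a quick numerical check on this definition, here is a minimal Python sketch; the timings and processor count are purely illustrative, not from the question:

```python
# Speedup S = Ts / Tp; efficiency E = S / p.
def speedup(t_serial, t_parallel):
    """Ratio of serial runtime to parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Fraction of ideal speedup achieved on p processors."""
    return speedup(t_serial, t_parallel) / p

# A program taking 100 s serially and 20 s on 8 processors:
print(speedup(100.0, 20.0))        # 5.0
print(efficiency(100.0, 20.0, 8))  # 0.625
```

Note that efficiency is speedup normalized by the processor count, which is why the two terms are easy to confuse in questions like this one.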
408. |
______ Communication model is generally seen in tightly coupled system. |
A. | message passing |
B. | shared-address space |
C. | client-server |
D. | distributed network |
Answer» B. shared-address space | |
409. |
Select how the overhead function (To) is calculated. |
A. | to = p*n tp - ts |
B. | to = p tp - ts |
C. | to = tp - pts |
D. | to = tp - ts |
Answer» B. to = p tp - ts | |
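The overhead function To = p·Tp − Ts can be evaluated directly; a small Python sketch with illustrative timings (the same hypothetical 8-processor example as elsewhere in this section):

```python
# Total overhead: the aggregate processor time spent beyond the serial work.
def total_overhead(p, t_parallel, t_serial):
    """To = p * Tp - Ts."""
    return p * t_parallel - t_serial

# 8 processors, 20 s parallel runtime, 100 s serial runtime:
print(total_overhead(8, 20.0, 100.0))  # 60.0
```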
410. |
The time that elapses from the moment the first processor starts to the moment the last processor finishes execution is called the ________. |
A. | parallel runtime |
B. | overhead runtime |
C. | excess runtime |
D. | serial runtime |
Answer» A. parallel runtime | |
411. |
Select the parameters on which the parallel runtime of a program depends. |
A. | number of processors |
B. | communication parameters of the machine |
C. | all of the above |
D. | input size |
Answer» C. all of the above | |
412. |
The prefix-sum operation can be implemented using the ________ kernel |
A. | all-to-all broadcast |
B. | one-to-all broadcast |
C. | all-to-one broadcast |
D. | all-to-all reduction |
Answer» A. all-to-all broadcast | |
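The prefix-sum (scan) operation itself is easy to state serially; the sketch below is a plain serial Python version showing the result that the parallel, all-to-all-broadcast-style algorithm must produce:

```python
def prefix_sums(values):
    """Inclusive prefix sums: out[k] = values[0] + ... + values[k]."""
    out, running = [], 0
    for v in values:
        running += v
        out.append(running)
    return out

print(prefix_sums([3, 1, 4, 0, 2]))  # [3, 4, 8, 8, 10]
```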
413. |
The time taken by all-to-all broadcast on a hypercube is ________. |
A. | t= (ts + twm)(p-1) |
B. | t= ts logp + twm(p-1) |
C. | t= 2ts(√p – 1) - twm(p-1) |
D. | t= 2ts(√p – 1) + twm(p-1) |
Answer» B. t= ts logp + twm(p-1) | |
414. |
The time taken by all-to-all broadcast on a mesh is ________. |
A. | t= (ts + twm)(p-1) |
B. | t= ts logp + twm(p-1) |
C. | t= 2ts(√p – 1) - twm(p-1) |
D. | t= 2ts(√p – 1) + twm(p-1) |
Answer» D. t= 2ts(√p – 1) + twm(p-1) | |
415. |
The time taken by all-to-all broadcast on a ring is ________. |
A. | t= (ts + twm)(p-1) |
B. | t= ts logp + twm(p-1) |
C. | t= 2ts(√p – 1) - twm(p-1) |
D. | t= 2ts(√p – 1) + twm(p-1) |
Answer» A. t= (ts + twm)(p-1) | |
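The three all-to-all broadcast formulas above (hypercube, mesh, ring) can be compared numerically; a Python sketch with illustrative values of ts, tw, m, and p:

```python
import math

def ring_time(ts, tw, m, p):
    """Ring: T = (ts + tw*m)(p - 1)."""
    return (ts + tw * m) * (p - 1)

def mesh_time(ts, tw, m, p):
    """Mesh: T = 2*ts*(sqrt(p) - 1) + tw*m*(p - 1)."""
    return 2 * ts * (math.sqrt(p) - 1) + tw * m * (p - 1)

def hypercube_time(ts, tw, m, p):
    """Hypercube: T = ts*log2(p) + tw*m*(p - 1)."""
    return ts * math.log2(p) + tw * m * (p - 1)

# Illustrative parameters: startup 10, per-word 1, message size 4, 16 nodes.
ts, tw, m, p = 10.0, 1.0, 4.0, 16
print(ring_time(ts, tw, m, p))       # 210.0
print(mesh_time(ts, tw, m, p))       # 120.0
print(hypercube_time(ts, tw, m, p))  # 100.0
```

The tw·m·(p−1) term is identical in all three formulas: every node must receive (p−1) messages of m words, so only the startup-dominated term differs with the topology.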
416. |
Messages get smaller in ________ and stay constant in ________. |
A. | gather, broadcast |
B. | scatter , broadcast |
C. | scatter, gather |
D. | broadcast, gather |
Answer» B. scatter, broadcast | |
417. |
In All-to-all Broadcast on a Mesh, operation performs in which sequence? |
A. | rowwise, columnwise |
B. | columnwise, rowwise |
C. | columnwise, columnwise |
D. | rowwise, rowwise |
Answer» A. rowwise, columnwise | |
418. |
All-to-All Broadcast and Reduction algorithm on a Ring terminates in ________ steps. |
A. | p+1 |
B. | p-1 |
C. | p*p |
D. | p |
Answer» B. p-1 | |
419. |
All-to-all personalized communication is performed independently in each row with clustered messages of size ________ on a mesh. |
A. | p |
B. | m√p |
C. | p√m |
D. | m |
Answer» B. m√p | |
420. |
In All-to-All Personalized Communication on a Ring, the size of the message reduces by ________ at each step. |
A. | p |
B. | m-1 |
C. | p-1 |
D. | m |
Answer» D. m | |
421. |
Analyze the Cost of Scatter and Gather . |
A. | t=ts log p + tw m (p-1) |
B. | t=ts log p - tw m (p-1) |
C. | t=tw log p - ts m (p-1) |
D. | t=tw log p + ts m (p-1) |
Answer» A. t=ts log p + tw m (p-1) | |
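The scatter/gather cost formula can likewise be evaluated; a minimal Python sketch with the same illustrative parameters used for the broadcast formulas:

```python
import math

def scatter_gather_time(ts, tw, m, p):
    """Scatter or gather cost: T = ts*log2(p) + tw*m*(p - 1)."""
    return ts * math.log2(p) + tw * m * (p - 1)

# Illustrative: startup 10, per-word 1, message size 4, 16 nodes.
print(scatter_gather_time(10.0, 1.0, 4.0, 16))  # 100.0
```

The startup term scales with log p because messages travel down a binomial tree, while the tw·m·(p−1) term reflects the total data that must leave (or reach) the root.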
422. |
In all-to-one reduction, data items must be combined piece-wise and the result made available at a ________ processor. |
A. | last |
B. | target |
C. | n-1 |
D. | first |
Answer» B. target | |
423. |
All-to-all personalized communication is also known as ________. |
A. | total exchange |
B. | both of the above |
C. | none of the above |
D. | partial exchange |
Answer» A. total exchange | |
424. |
Select the appropriate stage of GPU Pipeline which receives commands from CPU and also pulls geometry information from system memory. |
A. | vertex processing |
B. | memory interface |
C. | host interface |
D. | pixel processing |
Answer» C. host interface | |
425. |
Which model is equally suitable to shared-address-space or message-passing paradigms, since the interaction is naturally two-way? |
A. | master slave model |
B. | data parallel model |
C. | producer consumer or pipeline model |
D. | work pool model |
Answer» C. producer consumer or pipeline model | |
426. |
In which type of the model, tasks are dynamically assigned to the processes for balancing the load? |
A. | master slave model |
B. | data parallel model |
C. | producer consumer or pipeline model |
D. | work pool model |
Answer» D. work pool model | |
427. |
A classic example is game playing: each 15-puzzle board configuration is an example of |
A. | dynamic task generation |
B. | none of the above |
C. | all of the above |
D. | static task generation |
Answer» A. dynamic task generation | |
428. |
In which case, the owner computes rule implies that the output is computed by the process to which the output data is assigned? |
A. | output data decomposition |
B. | both of the above |
C. | none of the above |
D. | input data decomposition |
Answer» A. output data decomposition | |
429. |
A decomposition can be illustrated in the form of a directed graph with nodes corresponding to tasks and edges indicating that the result of one task is required for processing the next. Such a graph is called a |
A. | task dependency graph |
B. | task interaction graph |
C. | process interaction graph |
D. | process dependency graph |
Answer» A. task dependency graph | |
430. |
Select relevant task characteristics from the options given below: |
A. | task sizes |
B. | size of data associated with tasks |
C. | all of the above |
D. | task generation |
Answer» C. all of the above | |
431. |
Which of the following projects of Blue Gene is not in development? |
A. | blue gene / m |
B. | blue gene / p |
C. | blue gene / q |
D. | blue gene / l |
Answer» A. blue gene / m | |
432. |
Select which clause in OpenMP is similar to private, except that the values of the variables are initialized to the corresponding values before the parallel directive is encountered. |
A. | firstprivate |
B. | shared |
C. | all of the above |
D. | private |
Answer» A. firstprivate | |
433. |
Select alternate approaches for Hiding Memory Latency |
A. | multithreading |
B. | spatial locality |
C. | all of the above |
D. | prefetching |
Answer» C. all of the above | |
434. |
Consider the example of a fire hose. The water comes out of the hose five seconds after the hydrant is turned on. Once the water starts flowing, the hydrant delivers water at the rate of 15 gallons/second. Analyze the bandwidth and latency. |
A. | bandwidth: 5*15 gallons/second and latency: 15 seconds |
B. | bandwidth: 15 gallons/second and latency: 5 seconds |
C. | bandwidth: 3 gallons/second and latency: 5 seconds |
D. | bandwidth: 5 gallons/second and latency: 15 seconds |
Answer» B. bandwidth: 15 gallons/second and latency: 5 seconds | |
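The fire-hose analogy can be sketched in a few lines of Python; the 5-second latency and 15 gallons/second bandwidth come from the question itself, and the helper function name is made up for illustration:

```python
# Latency: the 5 s delay before any water arrives.
# Bandwidth: the steady 15 gallons/second rate once it is flowing.
def gallons_delivered(t_seconds, latency=5.0, bandwidth=15.0):
    """Water delivered after t seconds, accounting for the startup latency."""
    return max(0.0, t_seconds - latency) * bandwidth

print(gallons_delivered(3))   # 0.0  (still within the latency window)
print(gallons_delivered(10))  # 75.0 (5 s of flow at 15 gallons/s)
```

The same distinction carries over to memory systems: latency is the delay before the first word arrives; bandwidth is the sustained rate after that.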
435. |
Select the parameters which capture memory system performance |
A. | bandwidth |
B. | both of the above |
C. | none of the above |
D. | latency |
Answer» B. both of the above | |
436. |
Analyze: if the second instruction has data dependencies with the first, but the third instruction does not, the first and the third instructions can be scheduled for execution together. This is known as ________ instruction issue. |
A. | out-of-order |
B. | both of the above |
C. | none of the above |
D. | in-order |
Answer» A. out-of-order | |
437. |
Select the correct answer: DRAM access times have only improved at the rate of roughly ________% per year over this interval. |
A. | 20 |
B. | 40 |
C. | 50 |
D. | 10 |
Answer» D. 10 | |
438. |
Select different aspects of parallelism |
A. | server applications utilize high aggregate network bandwidth |
B. | scientific applications typically utilize high processing and memory system performance |
C. | all of the above |
D. | data intensive applications utilize high aggregate throughput |
Answer» C. all of the above | |
439. |
SIMD represents an organization that ______________. |
A. | refers to a computer system capable of processing several programs at the same time. |
B. | represents organization of a single computer containing a control unit, processor unit and a memory unit. |
C. | includes many processing units under the supervision of a common control unit |
D. | none of the above. |
Answer» C. includes many processing units under the supervision of a common control unit | |
440. |
Cache memory works on the principle of |
A. | locality of data |
B. | locality of memory |
C. | locality of reference |
D. | locality of reference & memory |
Answer» C. locality of reference | |
441. |
_________ is callable from the device only |
A. | __host__ |
B. | __global__ |
C. | __device__ |
D. | none of above |
Answer» C. __device__ | |
442. |
______ is callable from the host |
A. | __host__ |
B. | __global__ |
C. | __device__ |
D. | none of above |
Answer» B. __global__ | |
443. |
The kernel code is identified by the ________ qualifier with void return type |
A. | __host__ |
B. | __global__ |
C. | __device__ |
D. | void |
Answer» B. __global__ | |
444. |
A CUDA program is comprised of two primary components: a host and a _____. |
A. | gpu kernel |
B. | cpu kernel |
C. | os |
D. | none of above |
Answer» A. gpu kernel | |
445. |
The BlockPerGrid and ThreadPerBlock parameters are related to the ________ model supported by CUDA. |
A. | host |
B. | kernel |
C. | thread abstraction |
D. | none of above |
Answer» C. thread abstraction | |
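The grid/block thread abstraction can be illustrated with a plain-Python simulation; the index formula mirrors CUDA's blockIdx.x * blockDim.x + threadIdx.x, and the function name here is made up for illustration:

```python
# Simulate the flat global thread IDs produced by a 1-D CUDA launch
# with the given BlockPerGrid and ThreadPerBlock parameters.
def global_thread_ids(blocks_per_grid, threads_per_block):
    """Global ID = blockIdx * blockDim + threadIdx, for every thread."""
    return [b * threads_per_block + t
            for b in range(blocks_per_grid)
            for t in range(threads_per_block)]

# A launch of 2 blocks of 4 threads covers 8 consecutive indices:
print(global_thread_ids(2, 4))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

This is why kernels typically compute one global index per thread and use it to pick the array element that thread is responsible for.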