This section includes 131 curated multiple-choice questions to sharpen your Cloud Computing knowledge and support exam preparation. Choose a question below to get started.
| 1. |
Parallel programs: Which speedup could be achieved according to Amdahl's law for an infinite number of processors if 5% of a program is sequential and the remaining part is ideally parallel? |
| A. | 10 |
| B. | 20 |
| C. | 30 |
| D. | 40 |
| Answer» B. 20 | |
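For question 1, Amdahl's law with sequential fraction f = 0.05 and processor count p → ∞ gives the bound directly:

```latex
% Amdahl's law: speedup with sequential fraction f on p processors
% With f = 0.05, the parallel term vanishes as p grows without bound.
S(p) = \frac{1}{f + \frac{1-f}{p}}, \qquad
\lim_{p \to \infty} S(p) = \frac{1}{f} = \frac{1}{0.05} = 20
```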
| 2. |
All deadlocks involve conflicting needs for __________ |
| A. | Resources |
| B. | Users |
| C. | Computers |
| D. | Programs |
| Answer» A. Resources | |
| 3. |
______________ sent to false and continue in the loop. |
| A. | work_request |
| B. | My_avail_tour_count |
| C. | Fulfill_request |
| D. | Split_stack packs |
| Answer» B. My_avail_tour_count | |
| 4. |
A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which access takes place is called: |
| A. | data consistency |
| B. | race condition |
| C. | aging |
| D. | starvation |
| Answer» B. race condition | |
| 5. |
Each node of the tree has an _________________, that is, the cost of the partial tour. |
| A. | Euler‘s method |
| B. | associated cost |
| C. | three-dimensional problems |
| D. | fast function |
| Answer» B. associated cost | |
| 6. |
During the execution of the instructions, a copy of the instructions is placed in the ______. |
| A. | Register |
| B. | RAM |
| C. | System heap |
| D. | Cache |
| Answer» D. Cache | |
| 7. |
The ____________ directive ensures that a specific memory location is updated atomically, rather than exposing it to the possibility of multiple, simultaneous writing threads. |
| A. | Parallel |
| B. | For |
| C. | atomic |
| D. | Sections |
| Answer» C. atomic | |
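For question 7, a minimal OpenMP sketch (illustrative variable names) showing `atomic` protecting a single shared update against simultaneous writers:

```c
#include <stdio.h>

int main(void) {
    long counter = 0;               /* shared variable (illustrative) */

    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        #pragma omp atomic          /* the update below is performed atomically */
        counter += 1;
    }

    printf("counter = %ld\n", counter);  /* prints 100000 when built with -fopenmp */
    return 0;
}
```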
| 8. |
What are the scoping clauses in OpenMP _________ |
| A. | Shared Variables & Private Variables |
| B. | Shared Variables |
| C. | Private Variables |
| D. | None of the above |
| Answer» A. Shared Variables & Private Variables | |
| 9. |
The ____________ is implemented more efficiently than a general parallel region containing possibly several loops. |
| A. | Sections |
| B. | Parallel Do/For |
| C. | Parallel sections |
| D. | Critical |
| Answer» B. Parallel Do/For | |
| 10. |
The processes exchange partial results instead of using one-way communications. Such a communication pattern is sometimes called a ___________. |
| A. | butterfly |
| B. | broadcast |
| C. | Data Movement |
| D. | Synchronization |
| Answer» A. butterfly | |
| 11. |
___________________ initializes each private copy with the corresponding value from the master thread. |
| A. | Firstprivate |
| B. | lastprivate |
| C. | nowait |
| D. | Private (OpenMP) and reduction. |
| Answer» A. Firstprivate | |
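For question 11, a small sketch (illustrative values) of how `firstprivate` seeds each thread's private copy from the master thread's value:

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    int x = 42;  /* value set by the master thread */

    /* firstprivate: each thread gets a private copy of x,
       initialized with the master thread's value (42). */
    #pragma omp parallel firstprivate(x)
    {
        x += omp_get_thread_num();   /* modifies only this thread's private copy */
        printf("thread %d sees x = %d\n", omp_get_thread_num(), x);
    }

    printf("after the region, x is still %d\n", x);  /* prints 42 */
    return 0;
}
```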
| 12. |
Which of the following is not valid with reference to Message Passing Interface (MPI)? |
| A. | MPI can run on any hardware platform |
| B. | The programming model is a distributed memory model |
| C. | All parallelism is implicit |
| D. | MPI_Comm_size returns the total number of MPI processes in the specified communicator |
| Answer» C. All parallelism is implicit | |
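For question 12, a minimal MPI sketch showing that parallelism is explicit (the programmer initializes MPI and queries ranks) and what `MPI_Comm_size` reports:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int comm_sz, my_rank;

    MPI_Init(&argc, &argv);                       /* parallelism is explicit */
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);      /* total number of MPI processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);      /* this process's rank */

    printf("process %d of %d\n", my_rank, comm_sz);

    MPI_Finalize();
    return 0;
}
```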
| 13. |
The ______________ specifies that the iterations of the for loop should be executed in parallel by multiple threads. |
| A. | Sections construct |
| B. | for pragma |
| C. | Single construct |
| D. | Parallel for construct |
| Answer» D. Parallel for construct | |
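For question 13, a short illustrative sketch of the parallel for construct dividing loop iterations among the threads of a team:

```c
#include <stdio.h>

#define N 8

int main(void) {
    int a[N];

    /* parallel for: the loop iterations are divided among
       the team's threads and executed in parallel. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = i * i;
    }

    for (int i = 0; i < N; i++)
        printf("a[%d] = %d\n", i, a[i]);
    return 0;
}
```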
| 14. |
The expression 'delayed load' is used in context of |
| A. | processor-printer communication |
| B. | memory-monitor communication |
| C. | pipelining |
| D. | none of the above |
| Answer» C. pipelining | |
| 15. |
If a process is executing in its critical section, then no other processes can be executing in their critical section. This condition is called ___________. |
| A. | Out-of-order execution |
| B. | Hardware prefetching |
| C. | Software prefetching |
| D. | mutual exclusion |
| Answer» D. mutual exclusion | |
| 16. |
A semaphore is a shared integer variable ____________. |
| A. | lightweight process |
| B. | that cannot drop below zero |
| C. | program counter |
| D. | stack space |
| Answer» B. that cannot drop below zero | |
| 17. |
Use the _________ library function to determine if nested parallel regions are enabled. |
| A. | Omp_target() |
| B. | Omp_declare target() |
| C. | Omp_target data() |
| D. | omp_get_nested() |
| Answer» D. omp_get_nested() | |
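For question 17, a minimal sketch of querying nested parallelism; note that `omp_get_nested()` is deprecated since OpenMP 5.0 but still available in common implementations:

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    /* omp_get_nested() returns nonzero if nested parallel regions
       are enabled (deprecated in OpenMP 5.0 in favor of
       omp_get_max_active_levels()). */
    if (omp_get_nested())
        printf("nested parallelism is enabled\n");
    else
        printf("nested parallelism is disabled\n");
    return 0;
}
```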
| 18. |
A ______________ construct by itself creates a “single program multiple data” program, i.e., each thread executes the same code. |
| A. | Parallel |
| B. | Section |
| C. | Single |
| D. | Master |
| Answer» A. Parallel | |
| 19. |
The signal operation of the semaphore basically works on the basic _______ system call. |
| A. | continue() |
| B. | wakeup() |
| C. | getup() |
| D. | start() |
| Answer» B. wakeup() | |
| 20. |
Bus switches are present in ____________ |
| A. | bus window technique |
| B. | crossbar switching |
| C. | linked input/output |
| D. | shared bus |
| Answer» B. crossbar switching | |
| 21. |
All nodes in each dimension form a linear array, in the __________. |
| A. | Star topology |
| B. | Ring topology |
| C. | Connect topology |
| D. | Mesh topology |
| Answer» D. Mesh topology | |
| 22. |
In MPI, a ______________ can be used to represent any collection of data items in memory by storing both the types of the items and their relative locations in memory. |
| A. | Allgather |
| B. | derived datatype |
| C. | displacement |
| D. | beginning |
| Answer» B. derived datatype | |
| 23. |
MPI provides a function ________, for packing data into a buffer of contiguous memory. |
| A. | MPI_Pack |
| B. | MPI_UnPack |
| C. | MPI_Pack Count |
| D. | MPI_Packed |
| Answer» A. MPI_Pack | |
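For question 23, a minimal sketch (illustrative buffer size) of packing mixed data into a buffer of contiguous memory with `MPI_Pack`:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int    a = 7;
    double b = 3.14;
    char   buf[100];          /* buffer of contiguous memory */
    int    position = 0;

    MPI_Init(&argc, &argv);

    /* Pack an int and a double into buf, back to back. */
    MPI_Pack(&a, 1, MPI_INT,    buf, 100, &position, MPI_COMM_WORLD);
    MPI_Pack(&b, 1, MPI_DOUBLE, buf, 100, &position, MPI_COMM_WORLD);

    /* position now holds the number of bytes packed; buf could be sent
       with MPI_Send(buf, position, MPI_PACKED, ...). */
    printf("packed %d bytes\n", position);

    MPI_Finalize();
    return 0;
}
```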
| 24. |
The ____________ is the distributed-memory version of the OpenMP busy-wait loop. |
| A. | For loop |
| B. | while(1) loop |
| C. | Do while loop |
| D. | Empty |
| Answer» C. Do while loop | |
| 25. |
_________________ generate log files of MPI calls. |
| A. | mpicxx |
| B. | mpilog |
| C. | mpitrace |
| D. | mpianim |
| Answer» B. mpilog | |
| 26. |
A pipeline is like _______________ |
| A. | an automobile assembly line |
| B. | house pipeline |
| C. | both a and b |
| D. | a gas line |
| Answer» A. an automobile assembly line | |
| 27. |
A remote node is the node which has a copy of a ______________ |
| A. | Home block |
| B. | Guest block |
| C. | Remote block |
| D. | Cache block |
| Answer» D. Cache block | |
| 28. |
How many assembly instructions does the following C statement take? global_count += 5; |
| A. | 4 instructions |
| B. | 3 instructions |
| C. | 5 instructions |
| D. | 2 instructions |
| Answer» B. 3 instructions | |
| 29. |
A counting semaphore was initialized to 10. Then 6 P (wait) operations and 4 V (signal) operations were completed on this semaphore. The resulting value of the semaphore is ___________ |
| A. | 4 |
| B. | 6 |
| C. | 9 |
| D. | 8 |
| Answer» D. 8 | |
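The arithmetic for question 29, with each P (wait) decrementing and each V (signal) incrementing the count:

```latex
% initial value 10, six P (wait) operations, four V (signal) operations
10 - 6 + 4 = 8
```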
| 30. |
When compiling an OpenMP program with gcc, what flag must be included? |
| A. | -fopenmp |
| B. | #pragma omp parallel |
| C. | –o hello |
| D. | ./openmp |
| Answer» A. -fopenmp | |
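For question 30, a minimal sketch assuming a hypothetical file `hello.c`; the pragma is honored only when gcc is given `-fopenmp`:

```c
/* hello.c -- compile with:  gcc -fopenmp hello.c -o hello
   Without -fopenmp, gcc ignores the pragma and the program runs serially. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel
    printf("Hello from thread %d\n", omp_get_thread_num());
    return 0;
}
```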
| 31. |
A ____________ in OpenMP is just some text that modifies a directive. |
| A. | data environment |
| B. | clause |
| C. | task |
| D. | Master thread |
| Answer» B. clause | |
| 32. |
__________________ is the principal alternative to shared memory parallel programming. |
| A. | Multiple passing |
| B. | Message passing |
| C. | Message programming |
| D. | None of the above |
| Answer» B. Message passing | |
| 33. |
A collection of lines that connects several devices is called ______________ |
| A. | bus |
| B. | peripheral connection wires |
| C. | Both a and b |
| D. | internal wires |
| Answer» A. bus | |
| 34. |
If the semaphore value is negative ____________. |
| A. | its magnitude is the number of processes waiting on that semaphore |
| B. | it is invalid |
| C. | no operation can be further performed on it until the signal operation is performed on it |
| D. | none of the mentioned |
| Answer» A. its magnitude is the number of processes waiting on that semaphore | |
| 35. |
To ensure difficulties do not arise in the readers–writers problem, _______ are given exclusive access to the shared object. |
| A. | readers |
| B. | writers |
| C. | readers and writers |
| D. | none of the above |
| Answer» B. writers | |
| 36. |
Systems that do not have parallel processing capabilities are ______________ |
| A. | SISD |
| B. | MIMD |
| C. | SIMD |
| D. | MISD |
| Answer» A. SISD | |
| 37. |
__________________ is a nonnegative integer that the destination can use to selectively screen messages. |
| A. | Dest |
| B. | Type |
| C. | Address |
| D. | length |
| Answer» B. Type | |
| 38. |
A _____________ function is called by Fulfill_request. |
| A. | descendants |
| B. | Splitstack |
| C. | dynamic mapping scheme |
| D. | ancestors |
| Answer» B. Splitstack | |
| 39. |
What are Spinlocks? |
| A. | CPU cycles wasting locks over critical sections of programs |
| B. | Locks that avoid time wastage in context switches |
| C. | Locks that work better on multiprocessor systems |
| D. | All of the mentioned |
| Answer» D. All of the mentioned | |
| 40. |
MPI specifies the functionality of _________________ communication routines. |
| A. | High-level |
| B. | Low-level |
| C. | Intermediate-level |
| D. | Expert-level |
| Answer» A. High-level | |
| 41. |
Synchronization is one of the common issues in parallel programming. The issues related to synchronization include the following, EXCEPT: |
| A. | Deadlock |
| B. | Livelock |
| C. | Fairness |
| D. | Correctness |
| Answer» D. Correctness | |
| 42. |
A ___________ construct must be enclosed within a parallel region in order for the directive to execute in parallel. |
| A. | Parallel sections |
| B. | Critical |
| C. | Single |
| D. | work-sharing |
| Answer» D. work-sharing | |
| 43. |
What is the main disadvantage of spinlocks? |
| A. | they are not sufficient for many process |
| B. | they require busy waiting |
| C. | they are unreliable sometimes |
| D. | they are too complex for programmers |
| Answer» B. they require busy waiting | |
| 44. |
A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a _________. |
| A. | broadcast |
| B. | reductions |
| C. | Scatter |
| D. | Gather |
| Answer» A. broadcast | |
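For question 44, a minimal broadcast sketch (illustrative value) in which the root's data reaches every process in the communicator:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int my_rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == 0)
        value = 99;   /* data belonging to a single process (the root) */

    /* Broadcast: the root's value is sent to every process in the communicator. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("process %d now has value %d\n", my_rank, value);

    MPI_Finalize();
    return 0;
}
```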
| 45. |
_______________ specifies that the iterations of the loop must be executed as they would be in a serial program. |
| A. | Nowait |
| B. | Ordered |
| C. | Collapse |
| D. | for loops |
| Answer» B. Ordered | |
| 46. |
________________ takes the data in data to be packed and packs it into contig_buf. |
| A. | MPI Unpack |
| B. | MPI_Pack |
| C. | MPI_Datatype |
| D. | MPI_Comm |
| Answer» B. MPI_Pack | |
| 47. |
Producer consumer problem can be solved using _____________ |
| A. | semaphores |
| B. | event counters |
| C. | monitors |
| D. | All of the above |
| Answer» D. All of the above | |
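For question 47, one of the classic solutions sketched with POSIX counting semaphores and a single-slot buffer (names are illustrative; compile with -pthread):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

/* Single-slot producer/consumer using two counting semaphores. */
static int   buffer;
static sem_t empty;   /* counts free slots   (starts at 1) */
static sem_t full;    /* counts filled slots (starts at 0) */

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= 5; i++) {
        sem_wait(&empty);        /* wait for a free slot */
        buffer = i;              /* produce an item */
        sem_post(&full);         /* signal that an item is available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 1; i <= 5; i++) {
        sem_wait(&full);         /* wait for an item */
        printf("consumed %d\n", buffer);
        sem_post(&empty);        /* signal that the slot is free again */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, 1);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&empty);
    sem_destroy(&full);
    return 0;
}
```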
| 48. |
_____________ begins by checking on the number of tours that the process has in its stack. |
| A. | Terminated |
| B. | Send rejects |
| C. | Receive rejects |
| D. | Empty |
| Answer» A. Terminated | |
| 49. |
For the reduced n-body solver, a ________________ will best distribute the workload in the computation of the forces. |
| A. | cyclic distribution |
| B. | velocity of each particle |
| C. | universal gravitation |
| D. | gravitational constant |
| Answer» A. cyclic distribution | |
| 50. |
What are the algorithms for identifying which subtrees we assign to theprocesses or threads __________ |
| A. | breadth-first search |
| B. | depth-first search |
| C. | depth-first search breadth-first search |
| D. | None of the above |
| Answer» A. breadth-first search | |