This section presents multiple-choice questions on multi-core architectures and parallel programming to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation.
| 1. |
______ is the ability of multiple processes to coordinate their activities by the exchange of information. |
| A. | deadlock |
| B. | synchronization |
| C. | mutual exclusion |
| D. | cache |
| Answer» B. synchronization | |
| 2. |
The ______ specifies that the iterations of the for loop should be executed in parallel by multiple threads. |
| A. | sections construct |
| B. | for pragma |
| C. | single construct |
| D. | parallel for construct |
| Answer» D. parallel for construct | |
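For reference, a minimal sketch of the parallel for construct; the array name, its size, and the loop body are illustrative only:

```c
#include <stdio.h>
#include <omp.h>

#define N 1000   /* illustrative array size */

int main(void) {
    double a[N];

    /* The parallel for construct forks a team of threads and divides
       the loop iterations among them. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = 2.0 * i;
    }

    printf("a[0] = %.1f, a[%d] = %.1f\n", a[0], N - 1, a[N - 1]);
    return 0;
}
```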
| 3. |
All nodes in each dimension form a linear array in the ______. |
| A. | star topology |
| B. | ring topology |
| C. | connect topology |
| D. | mesh topology |
| Answer» D. mesh topology | |
| 4. |
The signal operation of the semaphore basically works on the basic ______ system call. |
| A. | continue() |
| B. | wakeup() |
| C. | getup() |
| D. | start() |
| Answer» B. wakeup() | |
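For context, a hedged sketch of a semaphore whose signal operation wakes up a blocked process. It is built on POSIX threads, with pthread_cond_signal standing in for the textbook wakeup() primitive; the type and function names are made up for illustration:

```c
#include <pthread.h>

/* Illustrative semaphore: a value plus a mutex/condition-variable pair.
   pthread_cond_signal plays the role of the textbook wakeup() call. */
typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} semaphore_t;

void sem_init_op(semaphore_t *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void sem_wait_op(semaphore_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);   /* block the caller */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void sem_signal_op(semaphore_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);               /* wake one waiter */
    pthread_mutex_unlock(&s->lock);
}
```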
| 5. |
Each node of the tree has an ______, that is, the cost of the partial tour. |
| A. | euler's method |
| B. | associated cost |
| C. | three-dimensional problems |
| D. | fast function |
| Answer» B. associated cost | |
| 6. |
During the execution of the instructions, a copy of the instructions is placed in the ______. |
| A. | register |
| B. | ram |
| C. | system heap |
| D. | cache |
| Answer» D. cache | |
| 7. |
Paths that have an unbounded number of allowed nonminimal hops from packet sources give rise to a situation referred to as ______. |
| A. | livelock |
| B. | deadlock |
| C. | synchronization |
| D. | mutual exclusion |
| Answer» A. livelock | |
| 8. |
Bounded waiting implies that there exists a bound on the number of times a process is allowed to enter its critical section ______. |
| A. | after a process has made a request to enter its critical section and before the request is granted |
| B. | when another process is in its critical section |
| C. | before a process has made a request to enter its critical section |
| D. | none of the mentioned |
| Answer» A. after a process has made a request to enter its critical section and before the request is granted | |
| 9. |
Spinlocks are intended to provide ______ only. |
| A. | mutual exclusion |
| B. | bounded waiting |
| C. | aging |
| D. | progress |
| Answer» A. mutual exclusion | |
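A minimal sketch of a test-and-set spinlock using C11 atomics; the busy-wait loop is what provides mutual exclusion (and is also the spinlock's main drawback, as question 50 notes). The type and function names are illustrative:

```c
#include <stdatomic.h>

typedef struct { atomic_flag flag; } spinlock_t;

void spin_init(spinlock_t *l) { atomic_flag_clear(&l->flag); }

void spin_lock(spinlock_t *l) {
    /* Only one thread at a time can observe the flag as clear,
       so at most one thread is ever inside the critical section. */
    while (atomic_flag_test_and_set(&l->flag))
        ;   /* busy-wait (spin) until the lock is released */
}

void spin_unlock(spinlock_t *l) { atomic_flag_clear(&l->flag); }
```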
| 10. |
______ generates log files of MPI calls. |
| A. | mpicxx |
| B. | mpilog |
| C. | mpitrace |
| D. | mpianim |
| Answer» B. mpilog | |
| 11. |
The ______ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes. |
| A. | reduce-scatter |
| B. | reduce (to-one) |
| C. | allreduce |
| D. | none of the above |
| Answer» A. reduce-scatter | |
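A hedged sketch of the reduce-scatter operation using MPI_Reduce_scatter; the per-process block size and the vector contents are illustrative:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 2   /* illustrative: each process receives BLOCK result elements */

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int     n          = BLOCK * size;                    /* full vector length */
    double *sendbuf    = malloc(n * sizeof *sendbuf);
    int    *recvcounts = malloc(size * sizeof *recvcounts);
    double  recvbuf[BLOCK];

    for (int i = 0; i < n; i++)    sendbuf[i]    = rank + i;  /* this process's vector */
    for (int p = 0; p < size; p++) recvcounts[p] = BLOCK;

    /* Element-wise sum of all the vectors; block p of the summed
       vector is scattered to process p. */
    MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts,
                       MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d holds a result block starting with %.1f\n", rank, recvbuf[0]);

    free(sendbuf);
    free(recvcounts);
    MPI_Finalize();
    return 0;
}
```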
| 12. |
A semaphore is a shared integer variable ______. |
| A. | lightweight process |
| B. | that cannot drop below zero |
| C. | program counter |
| D. | stack space |
| Answer» B. that cannot drop below zero | |
| 13. |
______ is an object that holds information about the received message, including, for example, its actual count. |
| A. | buff |
| B. | count |
| C. | tag |
| D. | status |
| Answer» D. status | |
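A short sketch showing how the status object reports what was actually received; MPI_Get_count extracts the element count from it (the tag value and buffer sizes are illustrative, and the program assumes at least two processes):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        double msg[3] = {1.0, 2.0, 3.0};
        MPI_Send(msg, 3, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double buf[10];           /* receive buffer larger than the message */
        MPI_Status status;
        MPI_Recv(buf, 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);

        int count;
        MPI_Get_count(&status, MPI_DOUBLE, &count);   /* actual element count */
        printf("received %d doubles from rank %d with tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}
```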
| 14. |
A critical section is a program segment ______. |
| A. | where shared resources are accessed |
| B. | single thread of execution |
| C. | improves concurrency in multi-core system |
| D. | lower resource consumption |
| Answer» A. where shared resources are accessed | |
| 15. |
The size of the initial chunksize is ______. |
| A. | total_no_of_iterations / max_threads |
| B. | total_no_of_remaining_iterations / max_threads |
| C. | total_no_of_iterations / no_threads |
| D. | total_no_of_remaining_iterations / no_threads |
| Answer» B. total_no_of_remaining_iterations / max_threads | |
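For context, with OpenMP's guided schedule each chunk handed to a thread is roughly the number of remaining iterations divided by the number of threads in the team, so chunks shrink as the loop progresses. A minimal sketch (iteration count and loop body are illustrative):

```c
#include <stdio.h>
#include <omp.h>

#define N 10000   /* illustrative iteration count */

int main(void) {
    double sum = 0.0;

    /* guided: early chunks are large (about remaining_iterations / threads)
       and later chunks become progressively smaller. */
    #pragma omp parallel for schedule(guided) reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += 0.5 * i;
    }

    printf("sum = %.1f\n", sum);
    return 0;
}
```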
| 16. |
In addition to the cost of the communication, the packing and unpacking is very ______. |
| A. | global least cost |
| B. | time-consuming |
| C. | expensive tours |
| D. | shared stack |
| Answer» B. time-consuming | |
| 17. |
The easiest way to create communicators with new groups is with ______. |
| A. | mpi_comm_rank |
| B. | mpi_comm_create |
| C. | mpi_comm_split |
| D. | mpi_comm_group |
| Answer» C. mpi_comm_split | |
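A sketch of MPI_Comm_split, which builds new communicators by grouping the processes that pass the same color; splitting into even and odd world ranks is just an illustration:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Processes passing the same color end up in the same new communicator;
       the key (world_rank here) orders the ranks inside it. */
    int color = world_rank % 2;
    MPI_Comm sub_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

    int sub_rank;
    MPI_Comm_rank(sub_comm, &sub_rank);
    printf("world rank %d -> color %d, sub rank %d\n", world_rank, color, sub_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}
```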
| 18. |
Let S and Q be two semaphores initialized to 1, and let processes P0 and P1 execute the statements wait(S); wait(Q); ---; signal(S); signal(Q); and wait(Q); wait(S); ---; signal(Q); signal(S); respectively. The above situation depicts a ______. |
| A. | livelock |
| B. | critical section |
| C. | deadlock |
| D. | mutual exclusion |
| Answer» C. deadlock | |
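A hedged sketch of the interleaving behind the answer, written with POSIX threads and semaphores: if P0 acquires S and P1 acquires Q before either reaches its second wait, each blocks on the semaphore the other holds (a circular wait):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S, Q;   /* both initialized to 1, as in the question */

void *p0(void *arg) {
    (void)arg;
    sem_wait(&S);     /* P0 holds S ...          */
    sem_wait(&Q);     /* ... and now waits for Q */
    /* critical work */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

void *p1(void *arg) {
    (void)arg;
    sem_wait(&Q);     /* P1 holds Q ...          */
    sem_wait(&S);     /* ... and now waits for S */
    /* critical work */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);

    /* If p0 takes S and p1 takes Q before either takes its second
       semaphore, both block forever: a deadlock. */
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    puts("finished without deadlocking on this run");
    return 0;
}
```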
| 19. |
has in its stack. |
| A. | terminated |
| B. | send rejects |
| C. | receive rejects |
| D. | empty |
| Answer» E. | |
| 20. |
______ takes the data in data_to_be_packed and packs it into ______. |
| A. | mpi_unpack |
| B. | mpi_pack |
| C. | mpi_datatype |
| D. | mpi_comm |
| Answer» B. mpi_pack | |
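A hedged sketch of MPI_Pack and MPI_Unpack; the pack-buffer size and the packed values are illustrative, and the program assumes at least two processes:

```c
#include <mpi.h>
#include <stdio.h>

#define BUF_SIZE 100   /* illustrative size of the pack buffer, in bytes */

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buffer[BUF_SIZE];
    int  position = 0;

    if (rank == 0) {
        int    n = 42;
        double x = 3.14;
        /* MPI_Pack copies the data to be packed into buffer, advancing position. */
        MPI_Pack(&n, 1, MPI_INT,    buffer, BUF_SIZE, &position, MPI_COMM_WORLD);
        MPI_Pack(&x, 1, MPI_DOUBLE, buffer, BUF_SIZE, &position, MPI_COMM_WORLD);
        MPI_Send(buffer, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int    n;
        double x;
        MPI_Recv(buffer, BUF_SIZE, MPI_PACKED, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        /* MPI_Unpack reverses the operation on the receiving side. */
        MPI_Unpack(buffer, BUF_SIZE, &position, &n, 1, MPI_INT,    MPI_COMM_WORLD);
        MPI_Unpack(buffer, BUF_SIZE, &position, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD);
        printf("unpacked n = %d, x = %.2f\n", n, x);
    }

    MPI_Finalize();
    return 0;
}
```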
| 21. |
In addition to the cost of the communication, the packing and unpacking is very ______. |
| A. | global least cost |
| B. | time-consuming |
| C. | expensive tours |
| D. | shared stack |
| Answer» B. time-consuming | |
| 22. |
The ______ is a block of memory allocated by the user program, and buffersize is its size in bytes. |
| A. | tour data |
| B. | node tasks |
| C. | actual computation |
| D. | buffer argument |
| Answer» D. buffer argument | |
| 23. |
Using ______, we can systematically visit each node of the tree that could possibly lead to a least-cost solution. |
| A. | depth-first search |
| B. | foster's methodology |
| C. | reduced algorithm |
| D. | breadth first search |
| Answer» A. depth-first search | |
| 24. |
Each node of the tree has an ______, that is, the cost of the partial tour. |
| A. | euler's method |
| B. | associated cost |
| C. | three-dimensional problems |
| D. | fast function |
| Answer» B. associated cost | |
| 25. |
parallelizing them using OpenMP. |
| A. | thread's rank |
| B. | function loopschedule |
| C. | pthreads |
| D. | loop variable |
| Answer» E. | |
| 26. |
A ______ of the particles gives a roughly balanced workload in the computation of the forces. |
| A. | cyclic distribution |
| B. | velocity of each particle |
| C. | universal gravitation |
| D. | gravitational constant |
| Answer» A. cyclic distribution | |
| 27. |
Interface (MPI)? |
| A. | a specification of a shared memory library |
| B. | mpi uses objects called communicators and groups to define which collection of processes may communicate with each other |
| C. | only communicators and not groups are accessible to the programmer only by a "handle" |
| D. | a communicator is an ordered set of processes |
| Answer» D. . a communicator is an ordered set of processes | |
| 28. |
The issues related to synchronization in parallel programming include the following, EXCEPT: |
| A. | deadlock |
| B. | livelock |
| C. | fairness |
| D. | correctness |
| Answer» D. correctness | |
| 29. |
______ is applicable in ___________________. |
| A. | amdahl's law |
| B. | gustafson-barsis's law |
| C. | newton's law |
| D. | pascal's law |
| Answer» B. gustafson-barsis's law | |
| 30. |
______ is an object that holds information about the received message, including, for example, its actual count. |
| A. | buff |
| B. | count |
| C. | tag |
| D. | status |
| Answer» D. status | |
| 31. |
The ______ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes. |
| A. | reduce-scatter |
| B. | reduce (to-one) |
| C. | allreduce |
| D. | none of the above |
| Answer» A. reduce-scatter | |
| 32. |
The easiest way to create communicators with new groups is with ______. |
| A. | mpi_comm_rank |
| B. | mpi_comm_create |
| C. | mpi_comm_split |
| D. | mpi_comm_group |
| Answer» C. mpi_comm_split | |
| 33. |
______ combines the values contributed by each process, summing them in this case, and returning the result to a single process. |
| A. | mpi_reduce |
| B. | mpi_bcast |
| C. | mpi_finalize |
| D. | mpi_comm_size |
| Answer» A. mpi_reduce | |
| 34. |
The message ______ field allows the receiving process to selectively screen messages. |
| A. | dest |
| B. | type |
| C. | address |
| D. | length |
| Answer» B. type | |
| 35. |
| A. | scatter |
| B. | gather |
| C. | broadcast |
| D. | allgather |
| Answer» D. allgather | |
| 36. |
______ generates log files of MPI calls. |
| A. | mpicxx |
| B. | mpilog |
| C. | mpitrace |
| D. | mpianim |
| Answer» B. mpilog | |
| 37. |
parallel architectures affect parallelization? |
| A. | performance |
| B. | latency |
| C. | bandwidth |
| D. | accuracy |
| Answer» D. . accuracy | |
| 38. |
The statement global_count += 5; is typically translated into ______. |
| A. | 4 instructions |
| B. | 3 instructions |
| C. | 5 instructions |
| D. | 2 instructions |
| Answer» B. 3 instructions | |
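The reasoning behind the count: on a typical load/store machine the single statement decomposes into a load, an add, and a store, which is also why the update is not atomic when several threads execute it. A sketch (the helper function and local variable are purely illustrative):

```c
/* What a compiler typically emits for global_count += 5; on a
   load/store machine, written out as separate steps. */
int global_count = 0;

void add_five(void) {
    int reg;
    reg = global_count;    /* 1. load global_count into a register */
    reg = reg + 5;         /* 2. add 5 to the register             */
    global_count = reg;    /* 3. store the register back to memory */
}
```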
| 39. |
The size of the initial chunksize is ______. |
| A. | total_no_of_iterations / max_threads |
| B. | total_no_of_remaining_iterations / max_threads |
| C. | total_no_of_iterations / no_threads |
| D. | total_no_of_remaining_iterations / no_threads |
| Answer» B. total_no_of_remaining_iterations / max_threads | |
| 40. |
______ returns the number of threads that are active in the parallel region. |
| A. | omp_get_num_procs() |
| B. | omp_get_num_threads() |
| C. | omp_get_thread_num() |
| D. | omp_set_num_threads() |
| Answer» B. omp_get_num_threads() | |
| 41. |
The ______ specifies that the iterations of the for loop should be executed in parallel by multiple threads. |
| A. | sections construct |
| B. | for pragma |
| C. | single construct |
| D. | parallel for construct |
| Answer» D. parallel for construct | |
| 42. |
The ______ of a parallel region includes the code of functions that are called (directly or indirectly) from within the parallel region. |
| A. | lexical extent |
| B. | static extent |
| C. | dynamic extent |
| D. | none of the above |
| Answer» C. dynamic extent | |
| 43. |
The ______ clause specifies that the iterations of the loop must be executed in the order in which they would be in a serial program. |
| A. | nowait |
| B. | ordered |
| C. | collapse |
| D. | for loops |
| Answer» B. ordered | |
| 44. |
Here, w1 and w2 are shared variables, which are initialized to false. Which one of the following statements is TRUE about the above construct? |
| A. | it does not ensure mutual exclusion |
| B. | it does not ensure bounded waiting |
| C. | it requires that processes enter the critical section in strict alternation |
| D. | it does not prevent deadlocks but ensures mutual exclusion |
| Answer» D. it does not prevent deadlocks but ensures mutual exclusion | |
| 45. |
Within a ______ region, all threads execute the same program, i.e., each thread executes the same code. |
| A. | parallel |
| B. | section |
| C. | single |
| D. | master |
| Answer» A. parallel | |
| 46. |
(not necessarily immediately)? |
| A. | #pragma omp section |
| B. | #pragma omp parallel |
| C. | none |
| D. | #pragma omp master |
| Answer» B. . #pragma omp parallel | |
| 47. |
When compiling an OpenMP program with gcc, what flag must be included? |
| A. | -fopenmp |
| B. | #pragma omp parallel |
| C. | -o hello |
| D. | ./openmp |
| Answer» A. -fopenmp | |
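A minimal OpenMP hello-world together with the gcc invocation it implies; the file and output names are illustrative:

```c
/* hello_omp.c
   compile with:  gcc -fopenmp hello_omp.c -o hello
   run with:      ./hello                            */
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel
    {
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```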
| 48. |
The signal operation of the semaphore basically works on the basic ______ system call. |
| A. | continue() |
| B. | wakeup() |
| C. | getup() |
| D. | start() |
| Answer» B. wakeup() | |
| 49. |
at any moment (the mutex being initialized to 1)? |
| A. | 1 |
| B. | 2 |
| C. | 3 |
| D. | none of the mentioned |
| Answer» D. none of the mentioned | |
| 50. |
What is the main disadvantage of spinlocks? |
| A. | they are not sufficient for many process |
| B. | they require busy waiting |
| C. | they are unreliable sometimes |
| D. | they are too complex for programmers |
| Answer» B. they require busy waiting | |