This section includes 50 multiple-choice questions on parallel and distributed computing (MPI, OpenMP, and synchronization), each curated to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation. Work through the questions below to get started.
| 1. |
When the number of switch ports is equal to or larger than the number of devices, this simple network is referred to as a ______________ |
| A. | Crossbar |
| B. | Crossbar switch |
| C. | Switching |
| D. | Both a and b |
| Answer» D. Both a and b | |
| 2. |
Which MIMD systems are best scalable with respect to the number of processors? |
| A. | Distributed memory computers |
| B. | ccNUMA systems |
| C. | nccNUMA systems |
| D. | Symmetric multiprocessors |
| Answer» A. Distributed memory computers | |
| 3. |
The alternative to a snooping-based coherence protocol is called a ____________ |
| A. | Write invalidate protocol |
| B. | Snooping protocol |
| C. | Directory protocol |
| D. | Write update protocol |
| Answer» C. Directory protocol | |
| 4. |
The requesting node is sent the requested data from memory, and the requestor is made the only sharing node. This is known as a ________. |
| A. | Read miss |
| B. | Write miss |
| C. | Invalidate |
| D. | Fetch |
| Answer» A. Read miss | |
| 5. |
A processor performing the fetch or decode of one instruction during the execution of another instruction is called ______. |
| A. | Direct interconnects |
| B. | Indirect interconnects |
| C. | Pipe-lining |
| D. | Uniform Memory Access |
| Answer» C. Pipe-lining | |
| 6. |
MPI provides a function ________, for packing data into a buffer of contiguous memory. |
| A. | MPI_Pack |
| B. | MPI_Unpack |
| C. | MPI_Pack Count |
| D. | MPI_Packed |
| Answer» A. MPI_Pack | |
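For reference, a minimal sketch of MPI_Pack in C. The buffer size, destination rank, tag, and the helper name pack_example are arbitrary choices for illustration, not part of any fixed API beyond MPI_Pack and MPI_Send themselves.

```c
#include <mpi.h>

/* Pack an int and a double into one contiguous buffer so they can
   travel in a single message of type MPI_PACKED. */
void pack_example(int n, double x, MPI_Comm comm) {
    char buf[100];      /* 100 bytes is an arbitrary size */
    int position = 0;   /* advanced by each MPI_Pack call */

    MPI_Pack(&n, 1, MPI_INT, buf, 100, &position, comm);
    MPI_Pack(&x, 1, MPI_DOUBLE, buf, 100, &position, comm);

    /* position now holds the number of bytes actually packed */
    MPI_Send(buf, position, MPI_PACKED, 0, 0, comm);
}
```

The receiver would mirror this with an MPI_Recv of MPI_PACKED data followed by matching MPI_Unpack calls.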
| 7. |
Which of the following is not valid with reference to Message Passing Interface (MPI)? |
| A. | MPI can run on any hardware platform |
| B. | The programming model is a distributed memory model |
| C. | All parallelism is implicit |
| D. | MPI_Comm_size returns the total number of MPI processes in the specified communicator |
| Answer» C. All parallelism is implicit | |
| 8. |
Each node of the tree has an _________________, that is, the cost of the partial tour. |
| A. | Euler's method |
| B. | associated cost |
| C. | three-dimensional problems |
| D. | fast function |
| Answer» B. associated cost | |
| 9. |
An _____________ is a program that finds the solution to an n-body problem by simulating the behavior of the particles. |
| A. | Two N-Body Solvers |
| B. | n-body solver |
| C. | n-body problem |
| D. | Newton's second law |
| Answer» B. n-body solver | |
| 10. |
Parallelizing the two n-body solvers using _______________ is very similar to parallelizing them using OpenMP. |
| A. | thread's rank |
| B. | function Loopschedule |
| C. | Pthreads |
| D. | loop variable |
| Answer» C. Pthreads | |
| 11. |
The run-times of the serial solvers differed from the single-process MPI solvers by ______________. |
| A. | More than 1% |
| B. | less than 1% |
| C. | Equal to 1% |
| D. | Greater than 1% |
| Answer» B. less than 1% | |
| 12. |
The ____________________ is a pointer to a block of memory allocated by the user program and buffersize is its size in bytes. |
| A. | tour data |
| B. | node tasks |
| C. | actual computation |
| D. | buffer argument |
| Answer» D. buffer argument | |
| 13. |
_____________ begins by checking on the number of tours that the process has in its stack. |
| A. | Terminated |
| B. | Send rejects |
| C. | Receive rejects |
| D. | Empty |
| Answer» A. Terminated | |
| 14. |
For the reduced n-body solver, a ________________ will best distribute the workload in the computation of the forces. |
| A. | cyclic distribution |
| B. | velocity of each particle |
| C. | universal gravitation |
| D. | gravitational constant |
| Answer» A. cyclic distribution | |
| 15. |
________________ takes the data in data_to_be_packed and packs it into contig_buf. |
| A. | MPI_Unpack |
| B. | MPI_Pack |
| C. | MPI_Datatype |
| D. | MPI_Comm |
| Answer» B. MPI_Pack | |
| 16. |
The _______________ function, when executed by a process other than 0, sends its energy to process 0. |
| A. | Out_of_work |
| B. | No_work_left |
| C. | zero-length message |
| D. | request for work |
| Answer» A. Out_of_work | |
| 17. |
The routine ________________ combines data from all processes (by adding them, in this case) and returns the result to a single process. |
| A. | MPI_Reduce |
| B. | MPI_Bcast |
| C. | MPI_Finalize |
| D. | MPI_Comm_size |
| Answer» A. MPI_Reduce | |
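For reference, a minimal sketch of MPI_Reduce in C: each process contributes a local value, the values are combined with MPI_SUM, and the result is returned only to the root (process 0).

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    double local_sum = 1.0, global_sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Combine local_sum from all processes with MPI_SUM;
       the result arrives only at the root (process 0). */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```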
| 18. |
The easiest way to create communicators with new groups is with _____________. |
| A. | MPI_Comm_rank |
| B. | MPI_Comm_create |
| C. | MPI_Comm_split |
| D. | MPI_Comm_group |
| Answer» C. MPI_Comm_split | |
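For reference, a minimal sketch of MPI_Comm_split in C; splitting by rank parity is an arbitrary example of the color argument, and the helper name split_example is illustrative.

```c
#include <mpi.h>

void split_example(void) {
    int world_rank;
    MPI_Comm new_comm;

    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Processes passing the same color end up in the same new
       communicator; key orders the ranks within it. */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &new_comm);

    /* ... collectives on new_comm now involve only the processes
       that passed the same color ... */

    MPI_Comm_free(&new_comm);
}
```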
| 19. |
_______________ is an object that holds information about the received message, including, for example, its actual count. |
| A. | buff |
| B. | count |
| C. | tag |
| D. | status |
| Answer» D. status | |
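For reference, a minimal sketch in C of inspecting the status object after a receive; it also illustrates question 24's point that the count passed to MPI_Recv is only an upper bound, so MPI_Get_count is needed to learn how many elements actually arrived.

```c
#include <mpi.h>

void recv_example(void) {
    double buf[100];
    int actual_count;
    MPI_Status status;

    /* count = 100 is only an upper bound on the message size. */
    MPI_Recv(buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);

    /* How many elements were actually received? */
    MPI_Get_count(&status, MPI_DOUBLE, &actual_count);

    /* status.MPI_SOURCE and status.MPI_TAG identify the message. */
}
```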
| 20. |
__________________ is the principal alternative to shared memory parallel programming. |
| A. | Multiple passing |
| B. | Message passing |
| C. | Message programming |
| D. | None of the above |
| Answer» B. Message passing | |
| 21. |
________________ returns in its second argument the number of processes in the communicator. |
| A. | MPI_Init |
| B. | MPI_Comm_size |
| C. | MPI_Finalize |
| D. | MPI_Comm_rank |
| Answer» B. MPI_Comm_size | |
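For reference, a minimal self-contained C program showing MPI_Comm_size (which returns the number of processes in its second argument) alongside MPI_Comm_rank:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int comm_sz, my_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);  /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);  /* this process's rank */

    printf("process %d of %d\n", my_rank, comm_sz);

    MPI_Finalize();
    return 0;
}
```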
| 22. |
The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes. |
| A. | Reduce-scatter |
| B. | Reduce (to-one) |
| C. | Allreduce |
| D. | None of the above |
| Answer» A. Reduce-scatter | |
| 23. |
Communication functions that involve all the processes in a communicator are called ___________ |
| A. | MPI_Get_count |
| B. | collective communications |
| C. | buffer the message |
| D. | nonovertaking |
| Answer» B. collective communications | |
| 24. |
________________ may complete even if fewer than count elements have been received. |
| A. | MPI_Recv |
| B. | MPI_Send |
| C. | MPI_Get_count |
| D. | MPI_Any_Source |
| Answer» A. MPI_Recv | |
| 25. |
A ___________ is a script whose main purpose is to run some program. In this case, the program is the C compiler. |
| A. | wrapper script |
| B. | communication functions |
| C. | wrapper simplifies |
| D. | type definitions |
| Answer» A. wrapper script | |
| 26. |
Programs that can maintain a constant efficiency without increasing the problem size are sometimes said to be _______________. |
| A. | weakly scalable |
| B. | strongly scalable |
| C. | send_buf |
| D. | recv_buf |
| Answer» B. strongly scalable | |
| 27. |
The processes exchange partial results instead of using one-way communications. Such a communication pattern is sometimes called a ___________. |
| A. | butterfly |
| B. | broadcast |
| C. | Data Movement |
| D. | Synchronization |
| Answer» A. butterfly | |
| 28. |
MPI provides a function, ____________, that returns the number of seconds that have elapsed since some time in the past. |
| A. | MPI_Wtime |
| B. | MPI_Barrier |
| C. | MPI_Scatter |
| D. | MPI_Comm |
| Answer» A. MPI_Wtime | |
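For reference, a minimal timing sketch in C; the Work() placeholder and the helper name time_work are hypothetical and stand for whatever code is being measured.

```c
#include <mpi.h>

double time_work(MPI_Comm comm) {
    double start, finish;

    MPI_Barrier(comm);       /* synchronize so all processes start together */
    start = MPI_Wtime();
    /* Work(); */            /* the code being measured */
    finish = MPI_Wtime();

    return finish - start;   /* elapsed wall-clock seconds on this process */
}
```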
| 29. |
The idea that parallelism can be used to increase the (scaled) size of the problem applies in ___________________. |
| A. | Amdahl's Law |
| B. | Gustafson-Barsis's Law |
| C. | Newton's Law |
| D. | Pascal's Law |
| Answer» B. Gustafson-Barsis's Law | |
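For reference, one common statement of Gustafson-Barsis's law: with s the serial fraction of the scaled workload and N the number of processors, the scaled speedup is

```latex
S(N) = s + N(1 - s) = N - s(N - 1)
```

For example, with s = 0.1 and N = 100 the scaled speedup is about 90.1, which is why growing the problem with the machine keeps the processors productively busy.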
| 30. |
Synchronization is one of the common issues in parallel programming. The issues related to synchronization include the following, EXCEPT: |
| A. | Deadlock |
| B. | Livelock |
| C. | Fairness |
| D. | Correctness |
| Answer» D. Correctness | |
| 31. |
Considering whether to use weak or strong scaling is part of ______________ in addressing the challenges of distributed memory programming. |
| A. | Splitting the problem |
| B. | Speeding up computations |
| C. | Speeding up communication |
| D. | Speeding up hardware |
| Answer» B. Speeding up computations | |
| 32. |
Which of the following is the BEST description of Message Passing Interface (MPI)? |
| A. | A specification of a shared memory library |
| B. | MPI uses objects called communicators and groups to define which collection of processes may communicate with each other |
| C. | Only communicators and not groups are accessible to the programmer only by a "handle" |
| D. | A communicator is an ordered set of processes |
| Answer» B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other | |
| 33. |
A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a ________________. |
| A. | Scatter |
| B. | Gather |
| C. | Broadcast |
| D. | Allgather |
| Answer» C. Broadcast | |
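For reference, a minimal sketch of a broadcast in C; the value 1024 and the root rank 0 are arbitrary, and bcast_example is an illustrative helper name.

```c
#include <mpi.h>

void bcast_example(void) {
    int n = 0;
    int my_rank;

    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    if (my_rank == 0)
        n = 1024;  /* only the root holds the value initially */

    /* After the call, every process's n holds the root's value. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
}
```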
| 34. |
_______________ specifies that the iterations of the loop must be executed as they would be in a serial program. |
| A. | Nowait |
| B. | Ordered |
| C. | Collapse |
| D. | for loops |
| Answer» B. Ordered | |
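For reference, a minimal sketch in C of the ordered clause; it also shows the parallel for construct asked about in question 39. Iterations may execute concurrently, but the block inside the ordered construct runs in the original serial iteration order.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel for ordered
    for (int i = 0; i < 8; i++) {
        /* ... work here may run out of order across threads ... */
        #pragma omp ordered
        printf("iteration %d\n", i);  /* printed as 0, 1, 2, ... */
    }
    return 0;
}
```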
| 35. |
___________________ initializes each private copy with the corresponding value from the master thread. |
| A. | Firstprivate |
| B. | lastprivate |
| C. | nowait |
| D. | Private (OpenMP) and reduction. |
| Answer» A. Firstprivate | |
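For reference, a minimal sketch of firstprivate in C: each thread's private copy of x starts with the master thread's value (42 is an arbitrary example).

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    int x = 42;

    #pragma omp parallel firstprivate(x)
    {
        /* every thread starts with its own private x == 42 */
        x += omp_get_thread_num();
        printf("thread %d: x = %d\n", omp_get_thread_num(), x);
    }

    /* the master copy of x is unchanged here (still 42) */
    return 0;
}
```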
| 36. |
A ______________ construct by itself creates a single program, multiple data (SPMD) program, i.e., each thread executes the same code. |
| A. | Parallel |
| B. | Section |
| C. | Single |
| D. | Master |
| Answer» A. Parallel | |
| 37. |
The __________________ of a parallel region extends the lexical extent by the code of functions that are called (directly or indirectly) from within the parallel region. |
| A. | Lexical extent |
| B. | Static extent |
| C. | Dynamic extent |
| D. | None of the above |
| Answer» C. Dynamic extent | |
| 38. |
The _______________ function returns the number of threads that are currently active in the parallel region. |
| A. | omp_get_num_procs() |
| B. | omp_get_num_threads() |
| C. | omp_get_thread_num() |
| D. | omp_set_num_threads() |
| Answer» B. omp_get_num_threads() | |
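For reference, a minimal sketch in C contrasting omp_get_num_threads() (the size of the current team) with omp_get_thread_num() (the caller's rank within that team):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel
    {
        /* each thread reports its rank and the team size */
        printf("thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```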
| 39. |
The ______________ specifies that the iterations of the for loop should be executed in parallel by multiple threads. |
| A. | Sections construct |
| B. | for pragma |
| C. | Single construct |
| D. | Parallel for construct |
| Answer» B. for pragma | |
| 40. |
In OpenMP, the collection of threads executing the parallel block (the original thread and the new threads) is called a ____________ |
| A. | team |
| B. | executable code |
| C. | implicit task |
| D. | parallel constructs |
| Answer» A. team | |
| 41. |
When a thread reaches a _____________ directive, it creates a team of threads and becomes the master of the team. |
| A. | Synchronization |
| B. | Parallel |
| C. | Critical |
| D. | Single |
| Answer» B. Parallel | |
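For reference, a minimal sketch in C of the parallel directive: the encountering thread creates a team (itself plus the new threads; the request for 4 is arbitrary) and becomes the team's master, thread 0.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(4)
    {
        if (omp_get_thread_num() == 0)
            printf("master of a team of %d\n", omp_get_num_threads());
        else
            printf("worker %d\n", omp_get_thread_num());
    }   /* implicit barrier; the team disbands here */
    return 0;
}
```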
| 42. |
The signal operation of a semaphore is built on the basic _______ system call. |
| A. | continue() |
| B. | wakeup() |
| C. | getup() |
| D. | start() |
| Answer» B. wakeup() | |
| 43. |
Use the _________ library function to determine if nested parallel regions are enabled. |
| A. | omp_target() |
| B. | omp_declare_target() |
| C. | omp_target_data() |
| D. | omp_get_nested() |
| Answer» D. omp_get_nested() | |
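For reference, a minimal sketch in C; note that omp_get_nested() and omp_set_nested() are deprecated in recent OpenMP versions in favor of the max-active-levels interface, but they remain the classic answer here.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    omp_set_nested(1);  /* request nested parallelism */

    if (omp_get_nested())
        printf("nested parallel regions are enabled\n");
    else
        printf("nested parallel regions are disabled\n");

    return 0;
}
```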
| 44. |
Which of the following conditions must be satisfied to solve the critical section problem? |
| A. | Mutual Exclusion |
| B. | Progress |
| C. | Bounded Waiting |
| D. | All of the mentioned |
| Answer» D. All of the mentioned | |
| 45. |
____________ is a form of parallelization across multiple processors in parallel computing environments. |
| A. | Work-Sharing Constructs |
| B. | Data parallelism |
| C. | Functional Parallelism |
| D. | Handling loops |
| Answer» B. Data parallelism | |
| 46. |
A ___________ construct must be enclosed within a parallel region in order for the directive to execute in parallel. |
| A. | Parallel sections |
| B. | Critical |
| C. | Single |
| D. | work-sharing |
| Answer» D. work-sharing | |
| 47. |
A minimum of _____ variable(s) is/are required to be shared between processes to solve the critical section problem. |
| A. | one |
| B. | two |
| C. | three |
| D. | four |
| Answer» B. two | |
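The classic illustration is Peterson's algorithm, which solves the two-process critical-section problem with exactly two shared variables, a flag array and a turn variable. A sketch in C follows; plain C gives no memory-ordering guarantees on modern hardware, so treat this as an illustration of the idea, not production code.

```c
#include <stdbool.h>

bool flag[2] = {false, false};  /* flag[i]: thread i wants to enter */
int turn = 0;                   /* whose turn it is to wait */

void enter_critical_section(int i) {
    int other = 1 - i;
    flag[i] = true;
    turn = other;               /* politely yield the turn */
    while (flag[other] && turn == other)
        ;                       /* busy-wait until it is safe to enter */
}

void exit_critical_section(int i) {
    flag[i] = false;            /* no longer interested */
}
```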
| 48. |
The ____________ is implemented more efficiently than a general parallel region containing possibly several loops. |
| A. | Sections |
| B. | Parallel Do/For |
| C. | Parallel sections |
| D. | Critical |
| Answer» B. Parallel Do/For | |
| 49. |
___________ are used for signaling among processes and can be readily used to enforce a mutual exclusion discipline. |
| A. | Semaphores |
| B. | Messages |
| C. | Monitors |
| D. | Addressing |
| Answer» A. Semaphores | |
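For reference, a minimal sketch of semaphore-based mutual exclusion using POSIX semaphores; sem_wait is the classic wait (P) operation and sem_post the signal (V) operation, whose underlying wakeup behavior question 42 refers to. The helper name critical_work is illustrative.

```c
#include <semaphore.h>

sem_t mutex;  /* assume sem_init(&mutex, 0, 1) was called at startup */

void critical_work(void) {
    sem_wait(&mutex);   /* P: blocks if another process holds the lock */
    /* ... critical section: at most one process executes here ... */
    sem_post(&mutex);   /* V: releases the lock, waking a waiter if any */
}
```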
| 50. |
To ensure difficulties do not arise in the readers-writers problem, _______ are given exclusive access to the shared object. |
| A. | readers |
| B. | writers |
| C. | readers and writers |
| D. | none of the above |
| Answer» B. writers | |