Explore topic-wise MCQs in Cloud Computing.

This section includes 131 curated multiple-choice questions (MCQs) to sharpen your Cloud Computing knowledge and support exam preparation. Choose a topic below to get started.

51.

The _______________ function returns the number of threads that are currently active in the parallel region.

A. omp_get_num_procs ( )
B. omp_get_num_threads ( )
C. omp_get_thread_num ( )
D. omp_set_num_threads ( )
Answer» B. omp_get_num_threads ( )
52.

When a thread reaches a _____________ directive, it creates a team of threads and becomes the master of the team.

A. Synchronization
B. Parallel
C. Critical
D. Single
Answer» B. Parallel
53.

Bounded waiting implies that there exists a bound on the number of times a process is allowed to enter its critical section ____________.

A. after a process has made a request to enter its critical section and before the request is granted
B. when another process is in its critical section
C. before a process has made a request to enter its critical section
D. none of the mentioned
Answer» A. after a process has made a request to enter its critical section and before the request is granted
54.

Push the newly created stack onto our private stack and set the new_stack variable to _____________.

A. Infinite
Answer» D.
55.

_______________ is an object that holds information about the received message, including, for example, its actual count.

A. buff
B. count
C. tag
D. status
Answer» D. status
56.

Pthreads has a nonblocking version of pthread_mutex_lock called __________.

A. pthread_mutex_lock
B. pthread_mutex_trylock
C. pthread_mutex_acquirelock
D. pthread_mutex_releaselock
Answer» B. pthread_mutex_trylock
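Note: pthread_mutex_trylock is a C API, but the nonblocking-acquire idea can be sketched in Python, where `Lock.acquire(blocking=False)` plays an analogous role: it returns immediately with a success flag instead of waiting for the mutex.

```python
import threading

def try_enter(lock) -> bool:
    """Nonblocking acquire, analogous to pthread_mutex_trylock:
    returns immediately with True (locked) or False (already held)."""
    return lock.acquire(blocking=False)

lock = threading.Lock()
first = try_enter(lock)    # lock was free, so this succeeds
second = try_enter(lock)   # lock already held, so this fails at once
lock.release()             # release the hold taken by the first call
```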
57.

The easiest way to create communicators with new groups is with _____________.

A. MPI_Comm_rank
B. MPI_Comm_create
C. MPI_Comm_Split
D. MPI_Comm_group
Answer» C. MPI_Comm_Split
58.

_____________ refers to the ability of multiple processes (or threads) to share code, resources, or data in such a way that only one process has access to the shared object at a time.

A. Readers_writer locks
B. Barriers
C. Semaphores
D. Mutual Exclusion
Answer» D. Mutual Exclusion
59.

Which of the following conditions must be satisfied to solve the critical section problem?

A. Mutual Exclusion
B. Progress
C. Bounded Waiting
D. All of the mentioned
Answer» D. All of the mentioned
60.

Let S and Q be two semaphores initialized to 1, and let processes P0 and P1 execute wait(S); wait(Q); ...; signal(S); signal(Q); and wait(Q); wait(S); ...; signal(Q); signal(S); respectively. The above situation depicts a _________.

A. Livelock
B. Critical Section
C. Deadlock
D. Mutual Exclusion
Answer» C. Deadlock
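Note: the classic remedy for this circular wait is to impose a global lock-acquisition order. A minimal Python sketch (an analogy, not POSIX semaphores): when every thread takes S before Q, the cycle "P0 holds S and wants Q while P1 holds Q and wants S" cannot form.

```python
import threading

S, Q = threading.Lock(), threading.Lock()
shared = 0

def worker():
    global shared
    # Every thread acquires S first, then Q, so the circular wait
    # in the question can never arise.
    with S:
        with Q:
            shared += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four workers complete because no cycle in the lock-wait graph is possible.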
61.

Two MPI_Irecv calls are made specifying different buffers and tags, but the same sender and request location. How can one determine that the buffer specified in the first call has valid data?

A. Call MPI_Probe
B. Call MPI_Testany with the same request listed twice
C. Call MPI_Wait twice with the same request
D. Look at the data in the buffer and try to determine whether it is valid
Answer» D. Look at the data in the buffer and try to determine whether it is valid
62.

In OpenMP, the collection of threads executing the parallel block (the original thread and the new threads) is called a ____________.

A. team
B. executable code
C. implicit task
D. parallel constructs
Answer» A. team
63.

The __________________ of a parallel region extends the lexical extent by the code of functions that are called (directly or indirectly) from within the parallel region.

A. Lexical extent
B. Static extent
C. Dynamic extent
D. None of the above
Answer» C. Dynamic extent
64.

The set of NP-complete problems is often denoted by ____________

A. NP-C
B. NP-C or NPC
C. NPC
D. None of the above
Answer» B. NP-C or NPC
65.

In a shared-bus architecture, the number of processors that can perform a bus cycle (for fetching data or instructions) at a time is ________________.

A. One Processor
B. Two Processor
C. Multi-Processor
D. None of the above
Answer» A. One Processor
66.

In OpenMP, assigning iterations to threads is called ________________

A. scheduling
B. Static
C. Dynamic
D. Guided
Answer» A. scheduling
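Note: static, dynamic, and guided are the individual scheduling policies. A minimal sketch (Python, not OpenMP) of how a static schedule hands out loop iterations, with `static_schedule` as a hypothetical helper modeled on `schedule(static, chunk)`:

```python
def static_schedule(n_iters: int, n_threads: int, chunk: int):
    """Deal out fixed-size chunks of iterations round-robin to threads,
    the way OpenMP's schedule(static, chunk) does."""
    assignment = {t: [] for t in range(n_threads)}
    for block, start in enumerate(range(0, n_iters, chunk)):
        tid = block % n_threads
        assignment[tid].extend(range(start, min(start + chunk, n_iters)))
    return assignment

sched = static_schedule(8, 2, 2)
# thread 0 gets iterations from blocks 0 and 2; thread 1 from blocks 1 and 3
```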
67.

The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.

A. Reduce-scatter
B. Reduce (to-one)
C. Allreduce
D. None of the above
Answer» A. Reduce-scatter
68.

Which cache miss does not occur in case of a fully associative cache?

A. Conflict miss
B. Capacity miss
C. Compulsory miss
D. Cold start miss
Answer» A. Conflict miss
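Note: in a fully associative cache a block may occupy any line, so misses caused purely by index collisions (conflict misses) disappear; only capacity and compulsory (cold-start) misses remain. A small illustrative simulation (hypothetical helper functions, not a real cache model):

```python
from collections import OrderedDict

def misses_fully_assoc(refs, n_lines):
    """LRU fully associative cache: any block can go in any line."""
    cache, misses = OrderedDict(), 0
    for block in refs:
        if block in cache:
            cache.move_to_end(block)      # refresh LRU order on a hit
        else:
            misses += 1
            if len(cache) == n_lines:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = True
    return misses

def misses_direct_mapped(refs, n_lines):
    """Direct-mapped cache: block may only live in line (block % n_lines)."""
    lines, misses = {}, 0
    for block in refs:
        idx = block % n_lines
        if lines.get(idx) != block:
            misses += 1
            lines[idx] = block
    return misses

# Blocks 0 and 4 collide in a 4-line direct-mapped cache (same index),
# but coexist in a fully associative cache of the same size.
refs = [0, 4, 0, 4, 0, 4]
```

With these references the direct-mapped cache misses on every access (conflict misses), while the fully associative cache misses only twice (compulsory misses).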
69.

The idea that parallelism can be used to increase the (parallel) size of the problem applies in ___________________.

A. Amdahl's Law
B. Gustafson-Barsis's Law
C. Newton's Law
D. Pascal's Law
Answer» B. Gustafson-Barsis's Law
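Note: Gustafson-Barsis's law gives the scaled speedup S(p) = p - s(p - 1), where s is the serial fraction of the parallel execution; because the parallel part of the problem grows with p, speedup stays nearly linear. A one-function sketch:

```python
def gustafson_speedup(p: int, serial_fraction: float) -> float:
    """Scaled speedup under Gustafson-Barsis's law: S(p) = p - s*(p - 1)."""
    return p - serial_fraction * (p - 1)

# With a 10% serial fraction, 64 processes still yield a scaled
# speedup of 64 - 0.1*63 = 57.7.
s64 = gustafson_speedup(64, 0.10)
```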
70.

When the number of switch ports is equal to or larger than the number of devices, this simple network is referred to as ______________.

A. Crossbar
B. Crossbar switch
C. Switching
D. Both a and b
Answer» D. Both a and b
71.

Using _____________ we can systematically visit each node of the tree that could possibly lead to a least-cost solution.

A. depth-first search
B. Foster‘s methodology
C. reduced algorithm
D. breadth first search
Answer» A. depth-first search
72.

A critical section is a program segment ______________.

A. where shared resources are accessed
B. single thread of execution
C. improves concurrency in multi-core system
D. Lower resource consumption
Answer» A. where shared resources are accessed
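Note: a quick Python analogy of a critical section (the quiz's context is C/pthreads, but the idea is the same): the increment of a shared counter is the segment that accesses a shared resource, so only one thread may execute it at a time.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n: int) -> None:
    global counter
    for _ in range(n):
        # Critical section: the shared counter is read, modified, and
        # written back while holding the lock, so updates never interleave.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, interleaved read-modify-write sequences could lose updates.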
73.

MPI provides a function, ____________, that returns the number of seconds that have elapsed since some time in the past.

A. MPI_Wtime
B. MPI_Barrier
C. MPI_Scatter
D. MPI_Comm
Answer» A. MPI_Wtime
74.

If no node has a copy of a cache block, the block's state is known as ______.

A. Cached
B. Un-cached
C. Shared data
D. Valid data
Answer» B. Un-cached
75.

____________ is the ability of multiple processes to co-ordinate their activities by exchange of information.

A. Deadlock
B. Synchronization
C. Mutual Exclusion
D. Cache
Answer» B. Synchronization
76.

The routine ________________ combines data from all processes (by adding them, in this case) and returns the result to a single process.

A. MPI_Reduce
B. MPI_Bcast
C. MPI_Finalize
D. MPI_Comm_size
Answer» A. MPI_Reduce
77.

_____________ always blocks until a matching message has been received.

A. MPI_TAG
B. MPI_ SOURCE
C. MPI_Recv
D. MPI_ERROR
Answer» C. MPI_Recv
78.

The PC (Program Counter) is also called ____________.

A. instruction pointer
B. memory pointer
C. data counter
D. file pointer
Answer» A. instruction pointer
79.

_______________ scheduling causes no synchronization overhead and can maintain data locality when data fits in cache.

A. Guided
B. Auto
C. Runtime
D. Static
Answer» D. Static
80.

Considering whether to use weak or strong scaling is part of ______________ in addressing the challenges of distributed-memory programming.

A. Splitting the problem
B. Speeding up computations
C. Speeding up communication
D. Speeding up hardware
Answer» A. Splitting the problem
81.

A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a ________________.

A. Scatter
B. Gather
C. Broadcast
D. Allgather
Answer» C. Broadcast
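Note: a broadcast is often implemented as a tree: in each round, every process that already has the data forwards it to one that does not, so the informed set doubles and p processes need about log2(p) rounds. A counting sketch (hypothetical helper, not MPI itself):

```python
def broadcast_rounds(p: int) -> int:
    """Rounds of a tree-structured broadcast where the set of
    informed processes doubles each round (ceil(log2 p))."""
    informed, rounds = 1, 0
    while informed < p:
        informed *= 2
        rounds += 1
    return rounds

# 8 processes: informed grows 1 -> 2 -> 4 -> 8, i.e. 3 rounds.
```

This is why a tree broadcast beats the root sending p - 1 separate messages.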
82.

An _____________ is a program that finds the solution to an n-body problem by simulating the behavior of the particles.

A. Two N-Body Solvers
B. n-body solver
C. n-body problem
D. Newton‘s second law
Answer» B. n-body solver
83.

An n-body solver is a ___________ that finds the solution to an n-body problem by simulating the behaviour of the particles.

A. Program
B. Particle
C. Programmer
D. All of the above
Answer» A. Program
84.

Cache memory works on the principle of ____________

A. communication links
B. Locality of reference
C. Bisection bandwidth
D. average access time
Answer» B. Locality of reference
85.

Cache coherence: For which shared (virtual) memory systems is the snooping protocol suited?

A. Crossbar connected systems
B. Systems with hypercube network
C. Systems with butterfly network
D. Bus based systems
Answer» D. Bus based systems
86.

Mutual exclusion implies that ____________.

A. if a process is executing in its critical section, then no other process must be executing in their critical sections
B. if a process is executing in its critical section, then other processes must be executing in their critical sections
C. if a process is executing in its critical section, then all the resources of the system must be blocked until it finishes execution
D. none of the mentioned
Answer» A. if a process is executing in its critical section, then no other process must be executing in their critical sections
87.

________________ returns in its second argument the number of processes in the communicator.

A. MPI_Init
B. MPI_Comm_size
C. MPI_Finalize
D. MPI_Comm_rank
Answer» B. MPI_Comm_size
88.

Within a parallel region, declared variables are by default ________ .

A. Private
B. Local
C. Loco
D. Shared
Answer» D. Shared
89.

Paths that have an unbounded number of allowed nonminimal hops from packet sources lead to a situation referred to as __________.

A. Livelock
B. Deadlock
C. Synchronization
D. Mutual Exclusion
Answer» A. Livelock
90.

What are the two atomic operations permissible on semaphores?

A. Wait
B. Stop
C. Hold
D. none of the mentioned
Answer» A. Wait
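Note: the two atomic semaphore operations are classically wait (P), which decrements the count and blocks when it is zero, and signal (V), which increments it. Python's threading.Semaphore exposes them as acquire and release; a small sketch:

```python
import threading

sem = threading.Semaphore(2)          # internal count starts at 2

got_a = sem.acquire(blocking=False)   # wait(): count 2 -> 1
got_b = sem.acquire(blocking=False)   # wait(): count 1 -> 0
got_c = sem.acquire(blocking=False)   # count is 0, so a blocking wait() would sleep
sem.release()                         # signal(): count 0 -> 1
got_d = sem.acquire(blocking=False)   # wait() succeeds again
```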
91.

Which of the following is the BEST description of the Message Passing Interface (MPI)?

A. A specification of a shared memory library
B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
C. Only communicators and not groups are accessible to the programmer only by a "handle"
D. A communicator is an ordered set of processes
Answer» B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
92.

The requesting node is sent the requested data from memory, and the requestor is made the only sharing node; this is known as ________.

A. Read miss
B. Write miss
C. Invalidate
D. Fetch
Answer» A. Read miss
93.

________________ may complete even if fewer than count elements have been received.

A. MPI_Recv
B. MPI_Send
C. MPI_Get_count
D. MPI_Any_Source
Answer» A. MPI_Recv
94.

The run-times of the serial solvers differed from the single-process MPI solvers by ______________.

A. More than 1%
B. less than 1%
C. Equal to 1%
D. Greater than 1%
Answer» B. less than 1%
95.

____________ is a form of parallelization across multiple processors in parallel computing environments.

A. Work-Sharing Constructs
B. Data parallelism
C. Functional Parallelism
D. Handling loops
Answer» B. Data parallelism
96.

The concept of pipelining is most effective in improving performance if the tasks being performed in different stages:

A. require different amount of time
B. require about the same amount of time
C. require different amount of time with time difference between any two tasks being same
D. require different amount with time difference between any two tasks being different
Answer» B. require about the same amount of time
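Note: with k stages and n tasks, a pipeline's clock period is set by its slowest stage, so total time is (k + n - 1) x max stage time. Balanced stages keep that period small, which is why equal stage times matter. A timing sketch (hypothetical model function):

```python
def pipelined_time(stage_times, n_tasks: int) -> float:
    """Total time for n tasks through a pipeline: every clock period is
    the slowest stage's latency, and n tasks need k + n - 1 periods."""
    period = max(stage_times)
    k = len(stage_times)
    return (k + n_tasks - 1) * period

balanced = pipelined_time([1.0, 1.0, 1.0, 1.0], 100)    # (4 + 99) * 1.0
unbalanced = pipelined_time([1.0, 1.0, 1.0, 4.0], 100)  # (4 + 99) * 4.0
```

One slow stage quadruples the period here, quadrupling the total time even though the other three stages are idle most of each cycle.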
97.

The size of the initial chunk is _____________.

A. total_no_of_iterations / max_threads
B. total_no_of_remaining_iterations / max_threads
C. total_no_of_iterations / No_threads
D. total_no_of_remaining_iterations / No_threads
Answer» B. total_no_of_remaining_iterations / max_threads
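Note: under guided scheduling, each successive chunk is roughly the remaining iteration count divided by the number of threads, so the first chunk is the largest and chunks shrink toward a minimum size. A sketch of the resulting chunk-size sequence (`guided_chunks` is a hypothetical helper, not an OpenMP routine):

```python
def guided_chunks(n_iters: int, n_threads: int, min_chunk: int = 1):
    """Chunk sizes under guided scheduling: each chunk is
    remaining / n_threads, never below min_chunk."""
    chunks, remaining = [], n_iters
    while remaining > 0:
        chunk = max(remaining // n_threads, min_chunk)
        chunk = min(chunk, remaining)   # never hand out more than remains
        chunks.append(chunk)
        remaining -= chunk
    return chunks

sizes = guided_chunks(100, 4)
# first chunk is 100 // 4 = 25, then sizes decrease: 25, 18, 14, 10, ...
```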
98.

MPI_Send and MPI_Recv are called _____________ communications.

A. Collective Communication
B. Tree-Structured Communication
C. point-to-point
D. Collective Computation
Answer» C. point-to-point
99.

Programs that can maintain a constant efficiency without increasing the problem size are sometimes said to be _______________.

A. weakly scalable
B. strongly scalable
C. send_buf
D. recv_buf
Answer» B. strongly scalable
100.

Which MIMD systems are best scalable with respect to the number of processors?

A. Distributed memory computers
B. ccNUMA systems
C. nccNUMA systems
D. Symmetric multiprocessors
Answer» A. Distributed memory computers