This section presents curated multiple-choice questions on CUDA and high-performance computing to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation.
| 251. |
In the CUDA memory model, the following memory types are available: a) Registers; b) Local Memory; c) Shared Memory; d) Global Memory; e) Constant Memory; f) Texture Memory. |
| A. | a, b, d, f |
| B. | a, c, d, e, f |
| C. | a, b, c, d, e, f |
| D. | b, c, e, f |
| Answer» C. a, b, c, d, e, f | |
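All six of these memory spaces exist in CUDA. As a quick illustration, here is a minimal device-code sketch (illustrative names, assuming a launch with 64 threads per block) showing where each space appears; texture memory, accessed through texture objects, is omitted for brevity:

```cuda
__constant__ float coeff[4];       // constant memory: read-only on device, set from host
__device__  float globalBuf[64];   // global memory: visible to all threads

__global__ void memoryDemo(const float *gIn) {   // gIn also points to global memory
    __shared__ float tile[64];     // shared memory: one copy per thread block
    float r = gIn[threadIdx.x];    // scalar locals such as 'r' live in registers
    tile[threadIdx.x] = r * coeff[0];
    __syncthreads();               // shared memory is visible block-wide after the barrier
    globalBuf[threadIdx.x] = tile[threadIdx.x];
    // Large per-thread arrays and register spills are placed in local memory.
}
```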
| 252. |
The CUDA hardware programming model supports: a) fully general data-parallel architecture; b) general thread launch; c) global load-store; d) parallel data cache; e) scalar architecture; f) integer and bit operations |
| A. | a,c,d,f |
| B. | b,c,d,e |
| C. | a,d,e,f |
| D. | a,b,c,d,e,f |
| Answer» D. a,b,c,d,e,f | |
| 253. |
IADD, IMUL24, IMAD24, IMIN, IMAX are ----------- supported by the Scalar Processors of an NVIDIA GPU. |
| A. | 32-bit ieee floating point instructions |
| B. | 32-bit integer instructions |
| C. | both |
| D. | none of the above |
| Answer» B. 32-bit integer instructions | |
| 254. |
NVIDIA 8-series GPUs offer -------- . |
| A. | 50-200 gflops |
| B. | 200-400 gflops |
| C. | 400-800 gflops |
| D. | 800-1000 gflops |
| Answer» A. 50-200 gflops | |
| 255. |
The NVIDIA G80 is a ---- CUDA core device, the NVIDIA G200 is a ---- CUDA core device, and the NVIDIA Fermi is a ---- CUDA core device. |
| A. | 128, 256, 512 |
| B. | 32, 64, 128 |
| C. | 64, 128, 256 |
| D. | 256, 512, 1024 |
| Answer» A. 128, 256, 512 | |
| 256. |
The host processor spawns multithreaded tasks (or kernels, as they are known in CUDA) onto the GPU device. State true or false. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 257. |
CUDA stands for --------, designed by NVIDIA. |
| A. | common union discrete architecture |
| B. | complex unidentified device architecture |
| C. | compute unified device architecture |
| D. | complex unstructured distributed architecture |
| Answer» C. compute unified device architecture | |
| 258. |
The CUDA architecture consists of --------- for parallel computing kernels and functions. |
| A. | risc instruction set architecture |
| B. | cisc instruction set architecture |
| C. | zisc instruction set architecture |
| D. | ptx instruction set architecture |
| Answer» D. ptx instruction set architecture | |
| 259. |
_______ became the first language specifically designed by a GPU company to facilitate general-purpose computing on ____. |
| A. | python, gpus. |
| B. | c, cpus. |
| C. | cuda c, gpus. |
| D. | java, cpus. |
| Answer» C. cuda c, gpus. | |
| 260. |
What is a Unified Virtual Machine? |
| A. | it is a technique that allows both the cpu and gpu to read from a single virtual machine, simultaneously. |
| B. | it is a technique for managing separate host and device memory spaces. |
| C. | it is a technique for executing device code on host and host code on device. |
| D. | it is a technique for executing general purpose programs on device instead of host. |
| Answer» A. it is a technique that allows both the cpu and gpu to read from a single virtual machine, simultaneously. | |
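As a hedged illustration of the idea in option A, the sketch below uses CUDA managed memory (cudaMallocManaged), which gives host and device a single pointer into one virtual address space; names are illustrative and a unified-memory-capable device is assumed:

```cuda
#include <cstdio>

__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;                      // GPU writes through the shared pointer
}

int main() {
    const int n = 1024;
    int *data;
    cudaMallocManaged(&data, n * sizeof(int));    // one pointer, valid on host and device
    for (int i = 0; i < n; ++i) data[i] = i;      // CPU initializes directly
    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();                      // finish GPU work before the CPU reads
    printf("data[0] = %d\n", data[0]);            // prints 1
    cudaFree(data);
    return 0;
}
```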
| 261. |
What are the limitations of a CUDA kernel? |
| A. | recursion, call stack, static variable declaration |
| B. | no recursion, no call stack, no static variable declarations |
| C. | recursion, no call stack, static variable declaration |
| D. | no recursion, call stack, no static variable declarations |
| Answer» B. no recursion, no call stack, no static variable declarations | |
| 262. |
Each warp of a GPU receives a single instruction and "broadcasts" it to all of its threads. It is a ---- operation. |
| A. | simd (single instruction multiple data) |
| B. | simt (single instruction multiple thread) |
| C. | sisd (single instruction single data) |
| D. | sist (single instruction single thread) |
| Answer» B. simt (single instruction multiple thread) | |
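Under SIMT, the 32 threads of a warp share one instruction stream, so a data-dependent branch inside a warp is executed by serializing the two paths. A tiny illustrative kernel:

```cuda
__global__ void divergent(int *out) {
    int lane = threadIdx.x % 32;    // lane index within the warp
    if (lane < 16)
        out[threadIdx.x] = 1;       // half the warp takes this path...
    else
        out[threadIdx.x] = 2;       // ...then the other half runs, serialized
}
```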
| 263. |
CUDA provides ------- warp and thread scheduling. Also, the overhead of thread creation is on the order of ----. |
| A. | "programming-overhead", 2 clock |
| B. | "zero-overhead", 1 clock |
| C. | 64, 2 clock |
| D. | 32, 1 clock |
| Answer» B. "zero-overhead", 1 clock | |
| 264. |
Each NVIDIA GPU has ------ Streaming Multiprocessors. |
| A. | 8 |
| B. | 1024 |
| C. | 512 |
| D. | 16 |
| Answer» D. 16 | |
| 265. |
Each streaming multiprocessor (SM) of CUDA hardware has ------ scalar processors (SP). |
| A. | 1024 |
| B. | 128 |
| C. | 512 |
| D. | 8 |
| Answer» D. 8 | |
| 266. |
FADD, FMAD, FMIN, FMAX are ----- supported by the Scalar Processors of an NVIDIA GPU. |
| A. | 32-bit ieee floating point instructions |
| B. | 32-bit integer instructions |
| C. | both |
| D. | none of the above |
| Answer» A. 32-bit ieee floating point instructions | |
| 267. |
CUDA supports programming in -------- . |
| A. | c or c++ only |
| B. | java, python, and more |
| C. | c, c++, third party wrappers for java, python, and more |
| D. | pascal |
| Answer» C. c, c++, third party wrappers for java, python, and more | |
| 268. |
Out-of-order instruction execution is not possible on GPUs. State true or false. |
| A. | true |
| B. | false |
| Answer» B. false | |
| 269. |
NVIDIA CUDA Warp is made up of how many threads? |
| A. | 512 |
| B. | 1024 |
| C. | 312 |
| D. | 32 |
| Answer» D. 32 | |
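Device code can read the warp width from the built-in warpSize variable (32 on NVIDIA hardware to date); a small sketch computing a thread's warp and lane IDs:

```cuda
__global__ void warpIds(int *warpId, int *laneId) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    warpId[tid] = tid / warpSize;                     // which warp the thread is in
    laneId[tid] = tid % warpSize;                     // position within the warp (0..31)
}
```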
| 270. |
The computer cluster architecture emerged as an alternative for ____. |
| A. | isa |
| B. | workstation |
| C. | super computers |
| D. | distributed systems |
| Answer» C. super computers | |
| 271. |
_____ method is used in centralized systems to perform out of order execution. |
| A. | scorecard |
| B. | scoreboarding |
| C. | optimizing |
| D. | redundancy |
| Answer» B. scoreboarding | |
| 272. |
The time lost due to a branch instruction is often referred to as _____. |
| A. | latency |
| B. | delay |
| C. | branch penalty |
| D. | none of the above |
| Answer» C. branch penalty | |
| 273. |
Any condition that causes a processor to stall is called a _____. |
| A. | hazard |
| B. | page fault |
| C. | system error |
| D. | none of the above |
| Answer» A. hazard | |
| 274. |
Host code in a CUDA application cannot deallocate memory on the GPU. |
| A. | true |
| B. | false |
| Answer» B. false | |
| 275. |
Host code in a CUDA application can transfer data to and from the device. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 276. |
Host code in a CUDA application cannot reset a device. |
| A. | true |
| B. | false |
| Answer» B. false | |
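Taken together, questions 274–276 describe standard host-side duties. A minimal sketch of the corresponding CUDA runtime calls (illustrative names, error checking omitted):

```cuda
#include <cstdlib>

int main() {
    const size_t bytes = 1024 * sizeof(float);
    float *hostBuf = (float *)malloc(bytes);
    float *devBuf;
    cudaMalloc(&devBuf, bytes);                                   // allocate GPU memory
    cudaMemcpy(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice);   // host -> device
    // ... kernel launches would go here ...
    cudaMemcpy(hostBuf, devBuf, bytes, cudaMemcpyDeviceToHost);   // device -> host
    cudaFree(devBuf);                                             // deallocate GPU memory
    cudaDeviceReset();                                            // reset the device
    free(hostBuf);
    return 0;
}
```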
| 277. |
A solution to the problem of representing parallelism in an algorithm is ____. |
| A. | cud |
| B. | pta |
| C. | cda |
| D. | cuda |
| Answer» D. cuda | |
| 278. |
A block is comprised of multiple _______. |
| A. | threads |
| B. | bunch |
| C. | host |
| D. | none of above |
| Answer» A. threads | |
| 279. |
A grid is comprised of ________ of threads. |
| A. | blocks |
| B. | bunch |
| C. | host |
| D. | none of above |
| Answer» A. blocks | |
| 280. |
In CUDA, a single invoked kernel is referred to as a _____. |
| A. | block |
| B. | thread |
| C. | grid |
| D. | none of above |
| Answer» C. grid | |
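The hierarchy in questions 278–280 (threads form a block, blocks form a grid, and one kernel launch creates one grid) maps directly onto CUDA's built-in index variables; a typical global-index computation, with illustrative names:

```cuda
__global__ void addOne(float *a, int n) {
    // blockIdx selects the block within the grid;
    // threadIdx selects the thread within that block.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1.0f;    // guard against threads past the end of the array
}
```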
| 281. |
CUDA supports ____________ in which code in a single thread is executed by all other threads. |
| A. | thread division |
| B. | thread termination |
| C. | thread abstraction |
| D. | none of above |
| Answer» C. thread abstraction | |
| 282. |
CUDA offers the Chevron Syntax to configure and execute a kernel. |
| A. | true |
| B. | false |
| Answer» A. true | |
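A minimal, self-contained sketch of the chevron (execution configuration) syntax, with illustrative names; the optional third and fourth launch arguments (dynamic shared-memory bytes and a stream) are noted in the comments:

```cuda
__global__ void scale(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= 2.0f;
}

int main() {
    const int n = 4096;
    float *devA;
    cudaMalloc(&devA, n * sizeof(float));
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // round up
    // <<<gridDim, blockDim>>> configures the launch; a third and fourth
    // argument may specify dynamic shared-memory bytes and a stream.
    scale<<<blocks, threadsPerBlock>>>(devA, n);
    cudaDeviceSynchronize();
    cudaFree(devA);
    return 0;
}
```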
| 283. |
Host code in a CUDA application cannot invoke kernels. |
| A. | true |
| B. | false |
| Answer» B. false | |
| 284. |
Host code in a CUDA application can allocate GPU memory. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 285. |
Host code in a CUDA application can initialize a device. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 286. |
Calling a kernel is typically referred to as _________. |
| A. | kernel thread |
| B. | kernel initialization |
| C. | kernel termination |
| D. | kernel invocation |
| Answer» D. kernel invocation | |
| 287. |
The kernel code is executable on both the device and the host. |
| A. | true |
| B. | false |
| Answer» B. false | |
| 288. |
The kernel code is only callable by the host. |
| A. | true |
| B. | false |
| Answer» A. true | |
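Questions 287–288 hinge on CUDA's function-space qualifiers (setting aside dynamic parallelism, which later GPUs added); a minimal sketch:

```cuda
__global__ void kernel() { }                    // kernel: runs on the device, launched by the host
__device__ float twice(float x) {               // runs on the device, callable from device code only
    return 2.0f * x;
}
__host__ __device__ float addOneVal(float x) {  // compiled for both host and device
    return x + 1.0f;
}
```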
| 289. |
The important feature of VLIW is ______. |
| A. | ilp |
| B. | performance |
| C. | cost effectiveness |
| D. | delay |
| Answer» A. ilp | |
| 290. |
Which are the performance metrics for parallel systems? |
| A. | execution time |
| B. | total parallel overhead |
| C. | speedup |
| D. | all above |
| Answer» D. all above | |
| 291. |
What are the sources of overhead? |
| A. | essential/excess computation |
| B. | inter-process communication |
| C. | idling |
| D. | all above |
| Answer» D. all above | |
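The metrics in questions 290–291 are tied together by standard formulas; writing $T_s$ for the serial runtime and $T_p$ for the parallel runtime on $p$ processors:

```latex
S = \frac{T_s}{T_p} \quad \text{(speedup)}, \qquad
E = \frac{S}{p} \quad \text{(efficiency)}, \qquad
T_o = p\,T_p - T_s \quad \text{(total parallel overhead)}
```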
| 292. |
The high-throughput service provided is measured by ____. |
| A. | flexibility |
| B. | efficiency |
| C. | dependability |
| D. | adaptation |
| Answer» D. adaptation | |
| 293. |
Which of the following is a primary goal of the HTC paradigm? |
| A. | high ratio identification |
| B. | low-flux computing |
| C. | high-flux computing |
| D. | computer utilities |
| Answer» C. high-flux computing | |
| 294. |
Data centers and centralized computing cover many ____. |
| A. | microcomputers |
| B. | minicomputers |
| C. | mainframe computers |
| D. | supercomputers |
| Answer» D. supercomputers | |
| 295. |
Interprocessor communication takes place via ____. |
| A. | centralized memory |
| B. | shared memory |
| C. | message passing |
| D. | both b and c |
| Answer» D. both b and c | |
| 296. |
Computer technology has gone through how many development generations? |
| A. | 6 |
| B. | 3 |
| C. | 4 |
| D. | 5 |
| Answer» D. 5 | |
| 297. |
The utilization rate of resources in an execution model is known as its ____. |
| A. | adaptation |
| B. | efficiency |
| C. | dependability |
| D. | flexibility |
| Answer» B. efficiency | |
| 298. |
Providing Quality of Service (QoS) assurance, even under failure conditions, is the responsibility of ____. |
| A. | dependability |
| B. | adaptation |
| C. | flexibility |
| D. | efficiency |
| Answer» A. dependability | |
| 299. |
Peer-to-peer computing leads to the development of technologies like ____. |
| A. | norming grids |
| B. | data grids |
| C. | computational grids |
| D. | both b and c |
| Answer» D. both b and c | |
| 300. |
The abbreviation HPC stands for ____. |
| A. | high-peak computing |
| B. | high-peripheral computing |
| C. | high-performance computing |
| D. | highly-parallel computing |
| Answer» C. high-performance computing | |