Explore topic-wise MCQs in Technical Programming.

This section includes 230 MCQs, each a curated multiple-choice question to sharpen your Technical Programming knowledge and support exam preparation. Questions 201–230 appear below.

201.

HDFS and NoSQL file systems focus almost exclusively on adding nodes to:

A. Scale out
B. Scale up
C. Both Scale out and up
D. None of the mentioned
Answer» A. Scale out
202.

Which of the following phases occur simultaneously?

A. Shuffle and Sort
B. Reduce and Sort
C. Shuffle and Map
D. All of the mentioned
Answer» A. Shuffle and Sort
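The shuffle (routing each mapper's output to a reducer) and the sort (ordering keys within each reducer's partition) overlap in time. A minimal sketch of the two steps, with a hypothetical `shuffle_and_sort` helper rather than Hadoop's actual implementation:

```python
from collections import defaultdict

def shuffle_and_sort(map_outputs, num_reducers):
    # Shuffle: hash-partition each key to one reducer's partition.
    # Sort: present each partition's keys in sorted order, as the
    # framework does while mapper output is still arriving.
    partitions = [defaultdict(list) for _ in range(num_reducers)]
    for key, value in map_outputs:
        partitions[hash(key) % num_reducers][key].append(value)
    return [sorted(p.items()) for p in partitions]

pairs = [("b", 1), ("a", 1), ("b", 1), ("c", 1)]
result = shuffle_and_sort(pairs, num_reducers=1)
# With one reducer: result[0] == [("a", [1]), ("b", [1, 1]), ("c", [1])]
```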
203.

The output of the _______ is not sorted in the MapReduce framework for Hadoop.

A. Mapper
B. Cascader
C. Scalding
D. None of the mentioned
Answer» D. None of the mentioned
204.

The ________ class provides the getValue() method to read the values from its instance.

A. Get
B. Result
C. Put
D. Value
Answer» B. Result
205.

__________ class adds HBase configuration files to its object.

A. Configuration
B. Collector
C. Component
D. None of the mentioned
Answer» A. Configuration
206.

The standard output (stdout) and error (stderr) streams of the task are read by the TaskTracker and logged to:

A. ${HADOOP_LOG_DIR}/user
B. ${HADOOP_LOG_DIR}/userlogs
C. ${HADOOP_LOG_DIR}/logs
D. None of the mentioned
Answer» B. ${HADOOP_LOG_DIR}/userlogs
207.

During the execution of a streaming job, the names of the _______ parameters are transformed.

A. vmap
B. mapvim
C. mapreduce
D. mapred
Answer» D. mapred
208.

The ___________ executes the Mapper/Reducer task as a child process in a separate JVM.

A. JobTracker
B. TaskTracker
C. TaskScheduler
D. None of the mentioned
Answer» B. TaskTracker
209.

__________ is the primary interface for a user to describe a MapReduce job to the Hadoop framework for execution.

A. JobConfig
B. JobConf
C. JobConfiguration
D. All of the mentioned
Answer» B. JobConf
210.

The right level of parallelism for maps seems to be around _________ maps per node.

A. 1-10
B. 10-100
C. 100-150
D. 150-200
Answer» B. 10-100
211.

Applications can use the ____________ to report progress and set application-level status messages.

A. Partitioner
B. OutputSplit
C. Reporter
D. All of the mentioned
Answer» C. Reporter
212.

The Reducer receives as input the grouped output of a:

A. Mapper
B. Reducer
C. Writable
D. Readable
Answer» A. Mapper
213.

Interface ____________ reduces a set of intermediate values which share a key to a smaller set of values.

A. Mapper
B. Reducer
C. Writable
D. Readable
Answer» B. Reducer
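A reducer's contract is exactly this: one key, all of the intermediate values that share it, and a smaller output. A minimal sketch with a hypothetical `reduce_counts` function (not Hadoop's `Reducer` interface itself):

```python
def reduce_counts(key, values):
    # A Reducer receives one key together with every intermediate
    # value sharing that key, and emits a smaller set -- here one sum.
    return key, sum(values)

# Grouped intermediate output, as the framework would deliver it.
grouped = [("a", [1, 1]), ("b", [1, 1, 1])]
reduced = [reduce_counts(k, vs) for k, vs in grouped]
# reduced == [("a", 2), ("b", 3)]
```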
214.

________ is the slave/worker node and holds the user data in the form of Data Blocks.

A. DataNode
B. NameNode
C. Data block
D. Replication
Answer» A. DataNode
215.

The need for data replication can arise in various scenarios, such as:

A. Replication Factor is changed
B. DataNode goes down
C. Data Blocks get corrupted
D. All of the mentioned
Answer» D. All of the mentioned
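All three scenarios reduce to the same repair: top a block back up to its target replica count from the remaining live nodes. A toy sketch with a hypothetical `re_replicate` helper, not HDFS's actual rack-aware placement policy:

```python
def re_replicate(block_locations, live_nodes, replication_factor):
    # Keep each block's surviving replicas and add spare live nodes
    # until the target replication factor is met again.
    placements = {}
    for block, nodes in block_locations.items():
        survivors = [n for n in nodes if n in live_nodes]
        spares = [n for n in live_nodes if n not in survivors]
        needed = max(replication_factor - len(survivors), 0)
        placements[block] = survivors + spares[:needed]
    return placements

blocks = {"blk_1": ["node1", "node2", "node3"]}
live = ["node1", "node3", "node4"]  # node2 has gone down
placed = re_replicate(blocks, live, replication_factor=3)
# placed == {"blk_1": ["node1", "node3", "node4"]}
```

Raising the replication factor or detecting a corrupted replica triggers the same path: `needed` becomes positive and spare nodes are drafted in.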
216.

Which of the following scenarios may not be a good fit for HDFS?

A. HDFS is not suitable for scenarios requiring multiple/simultaneous writes to the same file
B. HDFS is suitable for storing data related to applications requiring low latency data access
C. HDFS is suitable for storing data related to applications requiring low latency data access
D. None of the mentioned
Answer» A. HDFS is not suitable for scenarios requiring multiple/simultaneous writes to the same file
217.

_________ identifies filesystem pathnames which work as usual with regular expressions.

A. archiveName
B. source
C. destination
D. None of the mentioned
Answer» B. source
218.

Which of the following parameters describes the destination directory that would contain the archive?

A. archiveName
B. source
C. destination
D. None of the mentioned
Answer» C. destination
219.

_________ is a pluggable Map/Reduce scheduler for Hadoop that provides a way to share large clusters.

A. Flow Scheduler
B. Data Scheduler
C. Capacity Scheduler
D. None of the mentioned
Answer» C. Capacity Scheduler
220.

On a TaskTracker, the map task passes the split to the createRecordReader() method on InputFormat to obtain a _________ for that split.

A. InputReader
B. RecordReader
C. OutputReader
D. None of the mentioned
Answer» B. RecordReader
221.

The InputFormat class calls the ________ function, computes splits for each file, and then sends them to the JobTracker.

A. puts
B. gets
C. getSplits
D. All of the mentioned
Answer» C. getSplits
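The split/record division of labor in the two questions above can be sketched in miniature: a `get_splits` analogue computes byte ranges, and a line-oriented `record_reader` analogue yields whole records from one range. Both names are hypothetical stand-ins, not the Hadoop API:

```python
import os
import tempfile

def get_splits(size, split_size):
    # Compute (offset, length) byte-range splits over a file,
    # analogous to InputFormat.getSplits().
    return [(start, min(split_size, size - start))
            for start in range(0, size, split_size)]

def record_reader(path, start, length):
    # Yield whole lines from one split, analogous to a line-based
    # RecordReader. A reader that does not start at byte 0 skips its
    # leading partial line, which belongs to the previous split.
    with open(path, "rb") as f:
        f.seek(start)
        if start > 0:
            f.readline()
        while f.tell() < start + length:
            line = f.readline()
            if not line:
                break
            yield line.rstrip(b"\n")

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"one\ntwo\nthree\n")
path = tmp.name
records = [r for start, length in get_splits(os.path.getsize(path), 7)
           for r in record_reader(path, start, length)]
os.unlink(path)
# records == [b"one", b"two", b"three"]
```

Note that every record is read exactly once even though the 7-byte split boundary falls mid-line: the first split reads past its end to finish the line, and the second split skips that partial line.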
222.

Hadoop achieves reliability by replicating the data across multiple hosts, and hence does not require ________ storage on hosts.

A. RAID
B. Standard RAID levels
C. ZFS
D. Operating system
Answer» A. RAID
223.

Which of the following platforms does Hadoop run on?

A. Bare metal
B. Debian
C. Cross-platform
D. Unix-like
Answer» C. Cross-platform
224.

What was Hadoop written in?

A. Java (software platform)
B. Perl
C. Java (programming language)
D. Lua (programming language)
Answer» C. Java (programming language)
225.

Which of the following genres does Hadoop produce?

A. Distributed file system
B. JAX-RS
C. Java Message Service
D. Relational Database Management System
Answer» A. Distributed file system
226.

___________ is a general-purpose computing model and runtime system for distributed data analytics.

A. MapReduce
B. Drill
C. Oozie
D. None of the mentioned
Answer» A. MapReduce
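The model in question is the classic two-phase pipeline: a map phase emits intermediate key/value pairs and a reduce phase aggregates them per key. A self-contained word-count sketch of the idea (plain Python, not Hadoop's runtime):

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit an intermediate (word, 1) pair for every word.
    return [(word, 1) for doc in documents for word in doc.split()]

def reduce_phase(pairs):
    # Group intermediate pairs by key, then sum each group.
    groups = defaultdict(list)
    for word, count in pairs:
        groups[word].append(count)
    return {word: sum(counts) for word, counts in sorted(groups.items())}

counts = reduce_phase(map_phase(["big data", "big clusters"]))
# counts == {"big": 2, "clusters": 1, "data": 1}
```

The runtime system's job is everything this sketch omits: distributing the map calls across nodes, shuffling and sorting the intermediate pairs, and rerunning failed tasks.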
227.

________ is the most popular high-level Java API in the Hadoop ecosystem.

A. Scalding
B. HCatalog
C. Cascalog
D. Cascading
Answer» D. Cascading
228.

All of the following accurately describe Hadoop, EXCEPT:

A. Open source
B. Real-time
C. Java-based
D. Distributed computing approach
Answer» B. Real-time
229.

What was Hadoop named after?

A. Creator Doug Cutting’s favorite circus act
B. Cutting’s high school rock band
C. The toy elephant of Cutting’s son
D. A sound Cutting’s laptop made during Hadoop’s development
Answer» C. The toy elephant of Cutting’s son
230.

Point out the wrong statement:

A. Hadoop’s processing capabilities are huge and its real advantage lies in the ability to process terabytes & petabytes of data
B. Hadoop uses a programming model called “MapReduce”; all programs should conform to this model in order to work on the Hadoop platform
C. The programming model, MapReduce, used by Hadoop is difficult to write and test
D. All of the mentioned
Answer» C. The programming model, MapReduce, used by Hadoop is difficult to write and test