Explore topic-wise MCQs in Technical Programming.

This section includes 230 curated multiple-choice questions to sharpen your Technical Programming knowledge and support exam preparation. Choose a topic below to get started.

1.

Which of the following is a multi-threaded server using non-blocking I/O?

A. TNonblockingServer
B. TSimpleServer
C. TSocket
D. None of the mentioned
Answer» A. TNonblockingServer
2.

How many algorithms does Mahout support for clustering?

A. 1
B. 2
C. 3
D. 4
Answer» C. 3
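
Mahout's clustering support is easiest to picture with a toy example. The sketch below is a minimal, pure-Python 1-D k-means (Lloyd's algorithm), the kind of clustering algorithm Mahout implements at scale; the function name and data are illustrative, not Mahout API.

```python
# Illustrative sketch (not Mahout code): Lloyd's-iteration k-means on 1-D points.

def kmeans_1d(points, centers, iters=10):
    """Cluster 1-D points around `centers` using plain Lloyd's iterations."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = {c: [] for c in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(v) / len(v) if v else centers[c]
                   for c, v in sorted(clusters.items())]
    return centers

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0]))
```

The two starting centers migrate to the means of the two obvious clusters (around 1 and around 9).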
3.

The Avros class also has a _____ method for creating PTypes for POJOs using Avro’s reflection-based serialization mechanism.

A. spot
B. reflects
C. gets
D. all of the mentioned
Answer» B. reflects
4.

__________ represent the logical computations of your Crunch pipelines.

A. DoFns
B. DoFn
C. ThreeFns
D. None of the mentioned
Answer» A. DoFns
5.

Which of the following are features of Apache Spark?

A. Speed
B. Supports multiple languages
C. Advanced Analytics
D. All of the above
Answer» D. All of the above
6.

What is the full form of OEP?

A. Oozie Editor Plugin
B. Oozie Eclipse Plugin
C. Oozie Eclipse Partition
D. Oozie Editor Partition
Answer» B. Oozie Eclipse Plugin
7.

Which of the following is an incorrect way to deploy Spark?

A. Standalone
B. Hadoop Yarn
C. Spark in MapReduce
D. Spark SQL
Answer» D. Spark SQL
8.

The inline DoFn that splits a line up into words is an inner class of ____________

A. Pipeline
B. MyPipeline
C. ReadPipeline
D. WritePipe
Answer» B. MyPipeline
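
Crunch's DoFn idea can be sketched without the library: the hypothetical class below mimics a DoFn's `process(input, emitter)` shape to split lines into words, as in the inline word-splitting DoFn this question refers to. Names are illustrative, not the Crunch API.

```python
# Illustrative sketch (not Apache Crunch): a DoFn-style processor that splits
# each input line into words and emits them downstream.

class SplitWordsDoFn:
    def process(self, line, emitter):
        # Emit one output record per whitespace-separated token.
        for word in line.split():
            emitter.append(word)

def run_pipeline(lines, do_fn):
    """Apply the DoFn to every record of the input collection."""
    out = []
    for line in lines:
        do_fn.process(line, out)
    return out

print(run_pipeline(["hello crunch world", "do fn"], SplitWordsDoFn()))
```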
9.

Hive, Pig, and Cascading all use a _________ data model.

A. value centric
B. columnar
C. tuple-centric
D. none of the mentioned
Answer» C. tuple-centric
10.

How many classifications does the Naive Bayes Classifier have?

A. 1
B. 2
C. 3
D. 4
Answer» B. 2
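
What a Naive Bayes classifier actually does can be shown in a few lines: score each class by its log prior plus smoothed log likelihoods of the tokens, then pick the best class. This is a plain-Python sketch, not Mahout code; all names and data are illustrative.

```python
# Illustrative sketch (not Mahout): a bare-bones multinomial Naive Bayes.
from collections import Counter
import math

def train(docs):
    """docs: list of (label, tokens). Returns per-class token counts and doc totals."""
    counts, totals = {}, Counter()
    for label, tokens in docs:
        counts.setdefault(label, Counter()).update(tokens)
        totals[label] += 1
    return counts, totals

def classify(counts, totals, tokens):
    # Score each class: log prior + add-one-smoothed log likelihood per token.
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        score = math.log(totals[label] / n_docs)
        denom = sum(c.values()) + len(vocab)
        for w in tokens:
            score += math.log((c[w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

model = train([("spam", ["buy", "now"]), ("ham", ["meeting", "now"])])
print(classify(*model, ["buy"]))
```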
11.

Which of the following format is similar to TCompactProtocol?

A. TCompactProtocol
B. TDenseProtocol
C. TBinaryProtocol
D. TSimpleJSONProtocol
Answer» B. TDenseProtocol
12.

When was Apache Spark developed?

A. 2007
B. 2008
C. 2009
D. 2010
Answer» C. 2009
13.

_____________ transport writes to a file.

A. TNonblockingServer
B. TFileTransport
C. TFramedTransport
D. TMemoryTransport
Answer» B. TFileTransport
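
TFileTransport's job, writing serialized bytes to a file, can be sketched with a minimal stand-in class (illustrative only, not the Thrift API):

```python
# Illustrative sketch (not Apache Thrift): a transport object that, like
# TFileTransport, appends the bytes it is given to a file on disk.
import os, tempfile

class FileTransport:
    def __init__(self, path):
        self.f = open(path, "ab")   # append mode: successive writes accumulate

    def write(self, data: bytes):
        self.f.write(data)

    def flush(self):
        self.f.flush()

    def close(self):
        self.f.close()

path = os.path.join(tempfile.mkdtemp(), "events.log")
t = FileTransport(path)
t.write(b"hello ")
t.write(b"thrift")
t.close()
print(open(path, "rb").read())
```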
14.

_____________ is a human-readable text format to aid in debugging.

A. TMemory
B. TDebugProtocol
C. TBinaryProtocol
D. TSimpleJSONProtocol
Answer» B. TDebugProtocol
15.

Spark is best suited for ______ data.

A. Real-time
B. Virtual
C. Structured
D. All of the above
Answer» A. Real-time
16.

The ______________ class defines a configuration parameter named LINES_PER_MAP that controls how the input file is split.

A. NLineInputFormat
B. InputLineFormat
C. LineInputFormat
D. None of the mentioned
Answer» A. NLineInputFormat
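
The LINES_PER_MAP idea is easy to demonstrate outside Hadoop: group the input into fixed-size runs of consecutive lines, one run per map task. The helper below is an illustrative sketch, not Hadoop code.

```python
# Illustrative sketch (not Hadoop): NLineInputFormat-style splitting, where a
# LINES_PER_MAP-like parameter fixes how many input lines each mapper receives.

def n_line_splits(lines, lines_per_map):
    """Group the input into consecutive chunks of `lines_per_map` lines."""
    return [lines[i:i + lines_per_map]
            for i in range(0, len(lines), lines_per_map)]

splits = n_line_splits(["l1", "l2", "l3", "l4", "l5"], 2)
print(splits)  # 3 splits: two full ones and a final partial one
```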
17.

All file access uses Java's __________ APIs which give Lucene stronger index safety.

A. NIO.1
B. NIO.2
C. NIO.3
D. NIO.4
Answer» B. NIO.2
18.

___________ executes the pipeline as a series of MapReduce jobs.

A. SparkPipeline
B. MRPipeline
C. MemPipeline
D. None of the mentioned
Answer» B. MRPipeline
19.

__________ uses memory for I/O in Thrift.

A. TZlibTransport
B. TFramedTransport
C. TMemoryTransport
D. None of the mentioned
Answer» C. TMemoryTransport
20.

The items stored on _______ are organized in a hierarchy of widget category.

A. HICE
B. HICC
C. HIEC
D. All of the mentioned
Answer» B. HICC
21.

__________ is an abstraction over Apache Hadoop YARN that reduces the complexity of developing distributed applications.

A. Wave
B. Twill
C. Usergrid
D. None of the mentioned
Answer» B. Twill
22.

Which of the following projects provides an SOA services framework?

A. DeltaCloud
B. CXF
C. DeltaSpike
D. None of the mentioned
Answer» B. CXF
23.

Which of the following tool is intended to be more compatible with HDT?

A. Git
B. Juno
C. Indigo
D. None of the mentioned
Answer» D. None of the mentioned
24.

Which of the following has the core Eclipse PDE tools for HDT development?

A. RVP
B. RAP
C. RBP
D. RVP
Answer» C. RBP
25.

Data analytics scripts are written in ____________

A. Hive
B. CQL
C. PigLatin
D. Java
Answer» C. PigLatin
26.

Apache Knox accesses Hadoop Cluster over _________

A. HTTP
B. TCP
C. ICMP
D. None of the mentioned
Answer» A. HTTP
27.

A ________ is a way of extending Ambari that allows 3rd parties to plug in new resource types along with the APIs.

A. trigger
B. view
C. schema
D. none of the mentioned
Answer» B. view
28.

HICC, the Chukwa visualization interface, requires HBase version _____________

A. 0.90.5+.
B. 0.10.4+.
C. 0.90.4+.
D. None of the mentioned
Answer» C. 0.90.4+.
29.

The easiest way to have an HDP cluster is to download the _____________

A. Hadoop
B. Sandbox
C. Dashboard
D. None of the mentioned
Answer» B. Sandbox
30.

Apache Hadoop Development Tools is an effort undergoing incubation at _________

A. ADF
B. ASF
C. HCC
D. AFS
Answer» B. ASF
31.

HDT is used for listing running Jobs on __________ Cluster.

A. MR
B. Hive
C. Pig
D. None of the mentioned
Answer» A. MR
32.

HDT provides wizards for creating Java Classes for ___________

A. Mapper
B. Reducer
C. Driver
D. All of the mentioned
Answer» D. All of the mentioned
33.

_____________ is a software distribution framework based on OSGi.

A. ACE
B. Abdera
C. Zeppelin
D. Accumulo
Answer» A. ACE
34.

Ambari leverages ___________ for system alerting and will send emails when your attention is needed.

A. Nagios
B. Nagaond
C. Ganglia
D. All of the mentioned
Answer» A. Nagios
35.

Which of the following provides extendible modern and functional API leveraging SE, ME and EE environments?

A. Sirona
B. Taverna
C. Tamaya
D. Streams
Answer» C. Tamaya
36.

Which of the following is a monitoring solution for hadoop?

A. Sirona
B. Sentry
C. Slider
D. Streams
Answer» A. Sirona
37.

If demux is successful within ____________ attempts, archives the completed files in Chukwa.

A. one
B. two
C. three
D. all of the mentioned
Answer» C. three
38.

Ambari provides a ________ API that enables integration with existing tools, such as Microsoft System Center.

A. RestLess
B. Web Service
C. RESTful
D. None of the mentioned
Answer» C. RESTful
39.

Collectors write chunks to logs/*.chukwa files until a __________ MB chunk is reached.

A. 64
B. 108
C. 256
D. 1024
Answer» A. 64
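
The rollover behavior this question describes can be sketched generically: append chunks to the current file and start a new one once a size limit is reached (64 MB in Chukwa; a tiny limit here so the example is checkable). Class and field names are illustrative, not Chukwa code.

```python
# Illustrative sketch (not Chukwa): size-based rotation of sink files.

class RotatingSink:
    def __init__(self, limit):
        self.limit = limit
        self.files = [bytearray()]   # stand-ins for logs/*.chukwa files

    def write(self, chunk: bytes):
        # Roll over to a fresh file once the current one would exceed the limit.
        if len(self.files[-1]) + len(chunk) > self.limit:
            self.files.append(bytearray())
        self.files[-1].extend(chunk)

sink = RotatingSink(limit=8)
for _ in range(4):
    sink.write(b"abc")   # 3 bytes per chunk, so a new file every 2 chunks
print(len(sink.files))
```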
40.

By default, collectors listen on port _________

A. 8008
B. 8070
C. 8080
D. None of the mentioned
Answer» C. 8080
41.

___________ is a scale-out computing fabric that eases the development of Big Data applications.

A. MRQL
B. NiFi
C. REEF
D. Ripple
Answer» C. REEF
42.

________ includes a flexible and powerful toolkit for displaying, monitoring and analyzing results.

A. Imphala
B. Chukwa
C. BigTop
D. Oozie
Answer» B. Chukwa
43.

_____________ is an IaaS (“Infrastructure as a Service”) cloud orchestration platform.

A. CloudStack
B. Cazerra
C. Click
D. All of the mentioned
Answer» A. CloudStack
44.

Apache __________ is a platform for building native mobile applications using HTML, CSS and JavaScript (formerly Phonegap).

A. Cazerra
B. Cordova
C. CouchDB
D. All of the mentioned
Answer» B. Cordova
45.

Which of the following is a collaborative data analytics and visualization tool?

A. ACE
B. Abdera
C. Zeppelin
D. Accumulo
Answer» C. Zeppelin
46.

PostingsFormat now uses a __________ API when writing postings, just like doc values.

A. push
B. pull
C. read
D. all of the mentioned
Answer» B. pull
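
The push-versus-pull distinction behind this question can be shown in a few lines: with a pull API the consumer (here, a postings writer) drives iteration itself instead of being handed each posting via callbacks. This is a pure-Python illustration, not Lucene code.

```python
# Illustrative sketch (not Lucene): a pull-style postings source that the
# writer iterates at its own pace.

def postings_iter(index):
    """A pull-style source: the consumer drives iteration term by term."""
    for term in sorted(index):
        yield term, index[term]

def write_postings(index):
    # The writer *pulls* each (term, doc-ids) pair when it is ready for it.
    return [f"{term}:{','.join(map(str, docs))}"
            for term, docs in postings_iter(index)]

print(write_postings({"spark": [1, 3], "hadoop": [2]}))
```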
47.

Heap usage during IndexWriter merging is also much lower with the new _________

A. LucCodec
B. Lucene50Codec
C. Lucene20Cod
D. All of the mentioned
Answer» B. Lucene50Codec
48.

Spark powers a stack of high-level tools including Spark SQL, MLlib for _________

A. regression models
B. statistics
C. machine learning
D. reproductive research
Answer» C. machine learning
49.

Apache Flume 1.3.0 is the fourth release under the auspices of Apache of the so-called ________ codeline.

A. NG
B. ND
C. NF
D. NR
Answer» A. NG
50.

Spark is engineered from the bottom-up for performance, running ___________ faster than Hadoop by exploiting in memory computing and other optimizations.

A. 100x
B. 150x
C. 200x
D. None of the mentioned
Answer» A. 100x