

This section includes 230 MCQs, each offering curated multiple-choice questions to sharpen your technical programming knowledge and support exam preparation. Choose a topic below to get started.
101.
Drill also provides intuitive extensions to SQL to work with _______ data types.
A. simple
B. nested
C. int
D. all of the mentioned
Answer» B. nested
102.
Which of the following is a straightforward binary format?
A. TCompactProtocol
B. TDenseProtocol
C. TBinaryProtocol
D. TSimpleJSONProtocol
Answer» C. TBinaryProtocol
103.
A Flume agent is a JVM process which has how many components?
A. 3 components
B. 4 components
C. 5 components
D. 6 components
Answer» A. 3 components
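The three components of a Flume agent are the source, the channel, and the sink. A minimal sketch of how an event flows through them (hypothetical class and function names, not the Flume API):

```python
from collections import deque

class Channel:
    """Buffers events between the source and the sink (minimal model)."""
    def __init__(self):
        self._queue = deque()

    def put(self, event):
        self._queue.append(event)

    def take(self):
        return self._queue.popleft() if self._queue else None

def source(channel, events):
    # The source ingests external events and places them on the channel.
    for e in events:
        channel.put(e)

def sink(channel):
    # The sink drains the channel, e.g. writing events out to HDFS.
    drained = []
    while (event := channel.take()) is not None:
        drained.append(event)
    return drained

ch = Channel()
source(ch, ["log line 1", "log line 2"])
print(sink(ch))  # → ['log line 1', 'log line 2']
```

The channel decouples the ingest rate from the delivery rate, which is why it sits between the other two components.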
104.
A float parameter, defaulting to 0.0001f, which means we can tolerate 1 error every __________ rows.
A. 1000
B. 10000
C. 1 million
D. None of the mentioned
Answer» B. 10000
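The arithmetic behind the answer: a tolerance of 0.0001f corresponds to one error per 1/0.0001 = 10,000 rows. A quick check:

```python
# A tolerated error rate of 0.0001 means one permitted error
# every 1 / 0.0001 = 10,000 rows.
error_rate = 0.0001
rows_per_error = round(1 / error_rate)
print(rows_per_error)  # → 10000
```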
105.
Flume can also be used to transport event data including but not limited to network traffic data, data generated by social media websites, and email messages.
A. True
B. False
C. May be True or False
D. Can't Say
Answer» A. True
106.
Which of the following Hive commands is not supported by HCatalog?
A. ALTER INDEX … REBUILD
B. CREATE VIEW
C. SHOW FUNCTIONS
D. DROP TABLE
Answer» A. ALTER INDEX … REBUILD
107.
________ uses blocking socket I/O for transport.
A. TNonblockingServer
B. TSimpleServer
C. TSocket
D. None of the mentioned
Answer» C. TSocket
108.
__________ is a REST API for HCatalog.
A. WebHCat
B. WbHCat
C. InpHCat
D. None of the mentioned
Answer» A. WebHCat
109.
The output descriptor for the table to be written is created by calling ____________
A. OutputJobInfo.describe
B. OutputJobInfo.create
C. OutputJobInfo.put
D. None of the mentioned
Answer» B. OutputJobInfo.create
110.
____________ is used with Pig scripts to write data to HCatalog-managed tables.
A. HamaStorer
B. HCatStam
C. HCatStorer
D. All of the mentioned
Answer» C. HCatStorer
111.
Lucene provides scalable, high-performance indexing over ______ per hour on modern hardware.
A. 1GB
B. 150GB
C. 1TB
D. 150TB
Answer» B. 150GB
112.
Mahout provides an implementation of a ______________ identification algorithm which scores collocations using log-likelihood ratio.
A. collocation
B. compaction
C. collection
D. none of the mentioned
Answer» A. collocation
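The log-likelihood ratio scoring that collocation identification relies on can be sketched for a 2x2 contingency table of bigram counts. This is a generic implementation of Dunning's LLR test, not Mahout's own code:

```python
import math

def llr_2x2(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio for a 2x2 contingency table.
    k11: windows where both words occur together,
    k12/k21: windows where only one occurs, k22: windows with neither."""
    def h(*counts):
        # Signed entropy term: sum of c * log(c / total).
        total = sum(counts)
        return sum(c * math.log(c / total) for c in counts if c > 0)
    return 2.0 * (h(k11, k12, k21, k22)
                  - h(k11 + k12, k21 + k22)
                  - h(k11 + k21, k12 + k22))

# Independent counts score ~0; strongly associated counts score high.
print(abs(llr_2x2(10, 10, 10, 10)) < 1e-9)  # → True
print(llr_2x2(100, 1, 1, 100) > 0)          # → True
```

A high LLR means the pair of words co-occurs far more often than independence would predict, which is what flags it as a collocation.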
113.
In Ambari User Views, which user view helps you understand and optimize your cluster resource usage?
A. Hive
B. Pig
C. Tez
D. Capacity Scheduler
Answer» D. Capacity Scheduler
114.
Flume offers different levels of reliability, including?
A. best-effort delivery
B. end-to-end delivery
C. both a and b
D. None of the above
Answer» C. both a and b
115.
How many types of modes are present in Hama?
A. 2
B. 3
C. 4
D. 5
Answer» B. 3
116.
All file access uses Java's __________ APIs which give Lucene stronger index safety.
A. NIO.2
B. NIO.3
C. NIO.4
D. NIO.5
Answer» A. NIO.2
117.
Which of the following is not a valid Oozie job type?
A. Oozie Workflow Jobs
B. Oozie Coordinator Jobs
C. Oozie Notebundle jobs
D. All of the above
Answer» C. Oozie Notebundle jobs
118.
The HCatalog interface for Pig consists of ____________ and HCatStorer, which implement the Pig load and store interfaces respectively.
A. HCLoader
B. HCatLoader
C. HCatLoad
D. None of the mentioned
Answer» B. HCatLoader
119.
The _______________ method is used to include a projection schema, to specify the output fields.
A. OutputSchema
B. setOut
C. setOutputSchema
D. none of the mentioned
Answer» C. setOutputSchema
120.
The top-level ___________ package contains three of the most important specializations in Crunch.
A. org.apache.scrunch
B. org.apache.crunch
C. org.apache.kcrunch
D. all of the mentioned
Answer» B. org.apache.crunch
121.
A ___________ is an application that is deployed into the Ambari container.
A. trigger
B. procedure
C. view
D. schema
Answer» C. view
122.
For Scala users, there is the __________ API, which is built on top of the Java APIs.
A. Prunch
B. Scrunch
C. Hivench
D. All of the mentioned
Answer» B. Scrunch
123.
HCatalog maintains a cache of _________ to talk to the metastore.
A. HiveServer
B. HiveClients
C. HCatClients
D. All of the mentioned
Answer» B. HiveClients
124.
___________ includes Apache Drill as part of the Hadoop distribution.
A. Impala
B. MapR
C. Oozie
D. All of the mentioned
Answer» B. MapR
125.
__________ is a scalable distributed monitoring system for high-performance computing systems.
A. Nagios
B. Ganglia
C. Nagaond
D. All of the above
Answer» B. Ganglia
126.
Lucene provides scalable, high-performance indexing over ______ per hour on modern hardware.
A. 1 TB
B. 150GB
C. 10 GB
D. None of the mentioned
Answer» B. 150GB
127.
The first call on the HCatOutputFormat must be ____________
A. setOutputSchema
B. setOutput
C. setOut
D. OutputSchema
Answer» B. setOutput
128.
You can write to a single partition by specifying the partition key(s) and value(s) in the ___________ method.
A. setOutput
B. setOut
C. put
D. get
Answer» A. setOutput
129.
_________ does not restrict contributions to Hadoop-based implementations.
A. Mahout
B. Oozie
C. Impala
D. All of the mentioned
Answer» A. Mahout
130.
On the write side, it is expected that the user passes in valid _________ with the data correctly.
A. HRecords
B. HCatRecos
C. HCatRecords
D. None of the mentioned
Answer» C. HCatRecords
131.
SolrJ now has first-class support for the __________ API.
A. Compactions
B. Collections
C. Distribution
D. All of the mentioned
Answer» B. Collections
132.
Apache _________ provides direct queries on self-describing and semi-structured data in files.
A. Drill
B. Mahout
C. Oozie
D. All of the mentioned
Answer» A. Drill
133.
Groom servers start up with a ________ instance and an RPC proxy to contact the BSP master.
A. RPC
B. BSPPeer
C. LPC
D. None of the mentioned
Answer» B. BSPPeer
134.
Which of the following can be used to launch Spark jobs inside MapReduce?
A. SIM
B. SIMR
C. SIR
D. RIS
Answer» B. SIMR
135.
The ___________ property allows us to specify a custom dir location pattern for all the writes, and will interpolate each variable.
A. hcat.dynamic.partitioning.custom.pattern
B. hcat.append.limit
C. hcat.pig.storer.external.location
D. hcatalog.hive.client.cache.expiry.time
Answer» A. hcat.dynamic.partitioning.custom.pattern
136.
Flume deploys as one or more agents, each contained within its own instance of _________
A. JVM
B. Channels
C. Chunks
D. None of the mentioned
Answer» A. JVM
137.
Hama consists of mainly ________ components for large scale processing of graphs.
A. two
B. three
C. four
D. five
Answer» B. three
138.
Which of the following is true about Oozie?
A. Oozie is open source
B. Oozie is available under Apache License 2.0.
C. Oozie manages Hadoop jobs in a distributed environment.
D. All of the above
Answer» D. All of the above
139.
________ is responsible for maintaining groom server status.
A. GroomServers
B. BSPMaster
C. Zookeeper
D. All of the mentioned
Answer» B. BSPMaster
140.
Hama was inspired by Google's _________ large-scale graph computing framework.
A. Pragmatic
B. Pregel
C. Preghad
D. All of the mentioned
Answer» B. Pregel
141.
Which of the following Apache projects is steadily gaining traction with the efforts of its committers?
A. Hama
B. Hadoop
C. Hive
D. Pig
Answer» A. Hama
142.
Which Oozie job type is represented as Directed Acyclic Graphs (DAGs) to specify a sequence of actions to be executed?
A. Oozie Bundle Jobs
B. Oozie Coordinator Jobs
C. Oozie Workflow Jobs
D. All of the above
Answer» C. Oozie Workflow Jobs
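The action ordering a workflow DAG encodes can be illustrated with any topological sort. A sketch using Python's standard `graphlib`, with hypothetical action names standing in for Oozie workflow actions:

```python
from graphlib import TopologicalSorter

# Hypothetical workflow DAG: each action maps to the set of
# actions that must complete before it can run.
workflow = {
    "pig-cleanup": {"sqoop-import"},
    "hive-aggregate": {"pig-cleanup"},
    "email-notify": {"hive-aggregate"},
}

# static_order() yields one valid execution sequence for the DAG:
# every action appears after all of its dependencies.
order = list(TopologicalSorter(workflow).static_order())
print(order)  # → ['sqoop-import', 'pig-cleanup', 'hive-aggregate', 'email-notify']
```

Acyclicity is what guarantees such an ordering exists, which is why workflow jobs must be DAGs rather than arbitrary graphs.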
143.
Which of the following is a more compact binary format?
A. TCompactProtocol
B. TDenseProtocol
C. TBinaryProtocol
D. TSimpleJSONProtocol
Answer» A. TCompactProtocol
144.
________ is a multi-threaded server using standard blocking I/O.
A. TNonblockingServer
B. TThreadPoolServer
C. TSimpleServer
D. None of the mentioned
Answer» B. TThreadPoolServer
145.
Flume carries data between ________.
A. sources and decorator
B. sources and sinks
C. start and decorator
D. decorator and sinks
Answer» B. sources and sinks
146.
During merging, __________ now always checks the incoming segments for corruption before merging.
A. LocalWriter
B. IndexWriter
C. ReadWriter
D. All of the mentioned
Answer» B. IndexWriter
147.
In which year was Apache Mahout started?
A. 2007
B. 2008
C. 2009
D. 2010
Answer» B. 2008
148.
Which of the following is a feature of Mahout?
A. Mahout lets applications analyze large sets of data effectively and in quick time.
B. Mahout includes matrix and vector libraries.
C. Mahout comes with distributed fitness function capabilities for evolutionary programming.
D. All of the above
Answer» D. All of the above
149.
Mahout provides ____________ libraries for common and primitive Java collections.
A. python
B. perl
C. java
D. C
Answer» C. java
150.
In Ambari User Views, which user view is similar to the Hive View?
A. Tez View
B. Capacity Scheduler
C. Files View
D. Pig View
Answer» D. Pig View