

This section includes 29 curated multiple-choice questions to sharpen your Hadoop Pig knowledge and support exam preparation. Work through the questions below to get started.
1. |
Which of the following is the correct syntax for parameter substitution from the command line? |
A. | pig {-param param_name = param_value | -param_file file_name} [-debug | -dryrun] script |
B. | {%declare | %default} param_name param_value |
C. | {%declare | %default} param_name param_value cmd |
D. | All of the mentioned |
Answer» A. pig {-param param_name = param_value | -param_file file_name} [-debug | -dryrun] script | |
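For illustration, a minimal sketch of both substitution styles (the parameter name and path here are hypothetical):

```pig
-- script.pig: $input_path is replaced before the script runs
%default input_path '/data/users.csv'   -- preprocessor form, used if no -param is given
data = LOAD '$input_path' USING PigStorage(',') AS (name:chararray, age:int);
DUMP data;
```

The command-line form would then be invoked as `pig -param input_path=/data/users.csv script.pig`, overriding the `%default` value.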
2. |
Which of the following scripts generates more than three MapReduce jobs? |
A. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
B. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
C. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
D. | None of the mentioned |
Answer» B. a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); | |
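The options above were truncated during extraction to their shared first line. For context, a complete analysis script in this style (the GROUP keys and field names are illustrative; HadoopJobHistoryLoader ships with Piggybank) might continue like:

```pig
-- load the completed-job history written by the JobTracker
a = LOAD '/mapred/history/done' USING HadoopJobHistoryLoader()
    AS (j:map[], m:map[], r:map[]);
-- count the MapReduce jobs launched by each Pig script
b = GROUP a BY (j#'PIG_SCRIPT_ID', j#'USER', j#'JOBNAME');
c = FOREACH b GENERATE group.$1 AS user, group.$2 AS script_name, COUNT(a) AS job_count;
-- keep only scripts that spawned more than three jobs
d = FILTER c BY job_count > 3;
DUMP d;
```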
3. |
__________ method tells LoadFunc which fields are required in the Pig script. |
A. | pushProjection() |
B. | relativeToAbsolutePath() |
C. | prepareToRead() |
D. | none of the mentioned |
Answer» A. pushProjection() | |
4. |
A loader implementation should implement __________ if casts (implicit or explicit) from DataByteArray fields to other types need to be supported. |
A. | LoadPushDown |
B. | LoadMetadata |
C. | LoadCaster |
D. | All of the mentioned |
Answer» C. LoadCaster | |
5. |
Which of the following is the shortcut for the DUMP operator? |
A. | \de alias |
B. | \d alias |
C. | \q |
D. | None of the mentioned |
Answer» B. \d alias | |
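In the Grunt shell, the utility-command shortcuts can be seen side by side (the alias name is illustrative):

```pig
grunt> A = LOAD 'students.txt' AS (name:chararray, marks:int);
grunt> \d A      -- shortcut for DUMP A;
grunt> \de A     -- shortcut for DESCRIBE A;
grunt> \q        -- quit the Grunt shell (not a DUMP shortcut)
```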
6. |
Which of the following scripts is used to find scripts that use only the default parallelism? |
A. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
B. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
C. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
D. | None of the mentioned |
Answer» B. a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); | |
7. |
In comparison to SQL, Pig uses ______________ |
A. | Lazy evaluation |
B. | ETL |
C. | Supports pipeline splits |
D. | All of the mentioned |
Answer» D. All of the mentioned | |
8. |
Which of the following file contains user defined functions (UDFs)? |
A. | script2-local.pig |
B. | pig.jar |
C. | tutorial.jar |
D. | excite.log.bz2 |
Answer» C. tutorial.jar | |
9. |
Which of the following is/are a feature of Pig? |
A. | Rich set of operators |
B. | Ease of programming |
C. | Extensibility |
D. | All of the above |
Answer» D. All of the above | |
10. |
Which of the following scripts determines the number of scripts run by user and queue on a cluster? |
A. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
B. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
C. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
D. | None of the mentioned |
Answer» B. a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); | |
11. |
Which of the following commands is used to show the values of keys used in Pig? |
A. | set |
B. | declare |
C. | display |
D. | all of the mentioned |
Answer» A. set | |
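A short Grunt sketch of the `set` command (the job name is illustrative):

```pig
grunt> set debug on              -- turn on debug-level logging
grunt> set job.name 'my-pig-job' -- label the Hadoop job
grunt> set                       -- with no arguments, lists current settings (Pig 0.12+)
```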
12. |
PigUnit runs in Pig’s _______ mode by default. |
A. | local |
B. | tez |
C. | mapreduce |
D. | none of the mentioned |
Answer» A. local | |
13. |
Which of the following is an entry in jobconf? |
A. | pig.job |
B. | pig.input.dirs |
C. | pig.feature |
D. | none of the mentioned |
Answer» B. pig.input.dirs | |
14. |
Which of the following companies developed Pig? |
A. | |
B. | Yahoo |
C. | Microsoft |
D. | Apple |
Answer» B. Yahoo | |
15. |
Which of the following scripts finds the running time of each script (in seconds)? |
A. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
B. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
C. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
D. | All of the mentioned |
Answer» B. a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); | |
16. |
Which of the following has methods to deal with metadata? |
A. | LoadPushDown |
B. | LoadMetadata |
C. | LoadCaster |
D. | All of the mentioned |
Answer» B. LoadMetadata | |
17. |
In which year was Apache Pig released? |
A. | 2005 |
B. | 2006 |
C. | 2007 |
D. | 2008 |
Answer» C. 2007 | |
18. |
Which of the following commands can be used for debugging? |
A. | exec |
B. | execute |
C. | error |
D. | throw |
Answer» A. exec | |
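A quick Grunt illustration of `exec` (the script path is hypothetical):

```pig
grunt> exec /scripts/my_script.pig  -- runs the script in a fresh context; its aliases stay isolated
grunt> run /scripts/my_script.pig   -- similar, but shares aliases with the current Grunt session
```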
19. |
Which of the following is not true about Pig? |
A. | Apache Pig is an abstraction over MapReduce |
B. | Pig cannot perform all the data manipulation operations in Hadoop. |
C. | Pig is a tool/platform which is used to analyze larger sets of data representing them as data flows. |
D. | None of the above |
Answer» B. Pig cannot perform all the data manipulation operations in Hadoop. | |
20. |
Which of the following scripts is used to find scripts that have failed jobs? |
A. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
B. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
C. | a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); |
D. | None of the mentioned |
Answer» B. a = load '/mapred/history/done' using HadoopJobHistoryLoader() as (j:map[], m:map[], r:map[]); | |
21. |
Which of the following will compile PigUnit? |
A. | $pig_trunk ant pigunit-jar |
B. | $pig_tr ant pigunit-jar |
C. | $pig_ ant pigunit-jar |
D. | None of the mentioned |
Answer» A. $pig_trunk ant pigunit-jar | |
22. |
Which of the following will run Pig in local mode? |
A. | $ pig -x local … |
B. | $ pig -x tez_local … |
C. | $ pig … |
D. | None of the mentioned |
Answer» A. $ pig -x local … | |
23. |
Which of the following is the default mode? |
A. | Mapreduce |
B. | Tez |
C. | Local |
D. | All of the mentioned |
Answer» A. Mapreduce | |
24. |
Pig Latin statements are generally organized in which of the following ways? |
A. | A LOAD statement to read data from the file system |
B. | A series of “transformation” statements to process the data |
C. | A DUMP statement to view results or a STORE statement to save the results |
D. | All of the mentioned |
Answer» D. All of the mentioned | |
25. |
You can run Pig in batch mode using __________ |
A. | Pig shell command |
B. | Pig scripts |
C. | Pig options |
D. | All of the mentioned |
Answer» B. Pig scripts | |
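Batch mode means putting the statements in a .pig file and passing that file to the `pig` command. A minimal sketch, modeled on the classic id.pig example (file and output names are illustrative):

```pig
/* id.pig -- run with: pig -x local id.pig */
A = LOAD 'passwd' USING PigStorage(':');  -- read /etc/passwd-style records
B = FOREACH A GENERATE $0 AS id;          -- keep only the first field (the user name)
STORE B INTO 'id.out';                    -- write results instead of DUMPing to the console
```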
26. |
Which of the following functions is used to read data in Pig? |
A. | WRITE |
B. | READ |
C. | LOAD |
D. | None of the mentioned |
Answer» C. LOAD | |
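A typical LOAD statement (the file name and schema are illustrative); PigStorage is the default load function and is tab-delimited unless another delimiter is given:

```pig
students = LOAD 'students.txt' USING PigStorage(',')
           AS (name:chararray, age:int, gpa:float);
```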
27. |
Pig mainly operates in how many modes? |
A. | Two |
B. | Three |
C. | Four |
D. | Five |
Answer» A. Two | |
28. |
_________ operator is used to review the schema of a relation. |
A. | DUMP |
B. | DESCRIBE |
C. | STORE |
D. | EXPLAIN |
Answer» B. DESCRIBE | |
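In Grunt, DESCRIBE prints the schema of a relation (alias and schema here are illustrative; output shown approximately):

```pig
grunt> students = LOAD 'students.txt' AS (name:chararray, age:int, gpa:float);
grunt> DESCRIBE students;
students: {name: chararray,age: int,gpa: float}
```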
29. |
Which of the following operators is used to view the MapReduce execution plans? |
A. | DUMP |
B. | DESCRIBE |
C. | STORE |
D. | EXPLAIN |
Answer» D. EXPLAIN | |
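A short sketch of EXPLAIN in Grunt (the relation is illustrative):

```pig
grunt> students = LOAD 'students.txt' AS (name:chararray, age:int);
grunt> adults = FILTER students BY age >= 18;
grunt> EXPLAIN adults;   -- prints the logical, physical, and MapReduce execution plans
```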