Explore topic-wise MCQs in Neural Networks.

This section includes 12 curated multiple-choice questions (MCQs) to sharpen your Neural Networks knowledge and support exam preparation. Work through the questions below to get started.

1.

How can the learning process be stopped in the backpropagation rule?

A. there is convergence involved
B. no heuristic criteria exist
C. on basis of average gradient value
D. none of the mentioned
Answer» C. on basis of average gradient value
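A common stopping heuristic is to halt training once the average gradient magnitude falls below a preset threshold. A minimal sketch of that idea, assuming a one-dimensional quadratic error surface, a windowed average, and illustrative hyperparameters (none of which come from the quiz):

```python
# Stopping heuristic sketch: end gradient descent once the average
# gradient magnitude over a window of recent steps falls below a
# preset threshold. E(w) = (w - 3)^2 and all values are illustrative.
def train_until_flat(w, lr=0.1, threshold=1e-3, window=10, max_steps=10000):
    recent = []
    for step in range(max_steps):
        grad = 2.0 * (w - 3.0)                 # dE/dw, slope of the error surface
        w -= lr * grad                         # gradient descent step
        recent = (recent + [abs(grad)])[-window:]
        if len(recent) == window and sum(recent) / window < threshold:
            break                              # average gradient value is small enough
    return w

w_final = train_until_flat(5.0)
```

With these settings the loop stops long before `max_steps`, once the surface is nearly flat around the minimum at w = 3.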
2.

Is backpropagation learning based on gradient descent along the error surface?

A. yes
B. no
C. cannot be said
D. it depends on gradient descent but not error surface
Answer» A. yes
3.

What is meant by "generalized" in the statement "backpropagation is a generalized delta rule"?

A. because delta rule can be extended to hidden layer units
B. because delta is applied to only input and output layers, thus making it more simple and generalized
C. it has no significance
D. none of the mentioned
Answer» A. because delta rule can be extended to hidden layer units
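The plain delta rule computes an error term only at output units; backpropagation generalizes it by giving each hidden unit a delta too: its activation derivative times the weighted sum of the deltas of the units it feeds. A sketch for one sigmoid hidden unit, where every numeric value is a made-up illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Plain delta rule at the output unit: delta = error * activation derivative.
# target and out are made-up illustrative values.
target, out = 1.0, 0.7
delta_out = (target - out) * out * (1.0 - out)   # 0.3 * 0.7 * 0.3 = 0.063

# The "generalized" step: a hidden unit's delta is its own activation
# derivative times the output delta propagated back through weight w_ho.
w_ho = 0.5                  # hidden-to-output weight (illustrative)
h = sigmoid(0.3)            # hidden activation for a net input of 0.3
delta_hidden = h * (1.0 - h) * (w_ho * delta_out)
```

The same pattern repeats layer by layer, which is what lets the delta rule train networks with hidden units.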
4.

Is backpropagation learning based on gradient descent along the error surface?

A. yes
B. no
C. cannot be said
D. it depends on gradient descent but not error surface
Answer» A. yes
5.

How can the learning process be stopped in the backpropagation rule?

A. there is convergence involved
B. no heuristic criteria exist
C. on basis of average gradient value
D. none of the mentioned
Answer» C. on basis of average gradient value
6.

What are the general tasks performed with the backpropagation algorithm?

A. pattern mapping
B. function approximation
C. prediction
D. all of the mentioned
Answer» D. all of the mentioned
7.

What are the general limitations of the backpropagation rule?

A. local minima problem
B. slow convergence
C. scaling
D. all of the mentioned
Answer» D. all of the mentioned
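The local-minima limitation is easy to demonstrate: plain gradient descent on a non-convex error surface settles into whichever minimum is downhill from its starting point. A sketch on a made-up one-dimensional surface with two minima (the surface, start points, and learning rate are all illustrative assumptions):

```python
# Non-convex error surface E(w) = (w^2 - 1)^2 + 0.3*w, which has a
# local minimum near w = +0.96 and the global minimum near w = -1.03.
def descend(w, lr=0.01, steps=2000):
    for _ in range(steps):
        grad = 4.0 * w * (w * w - 1.0) + 0.3   # dE/dw
        w -= lr * grad
    return w

w_stuck = descend(0.5)     # starts in the right basin: stuck in the local minimum
w_global = descend(-0.5)   # starts in the left basin: reaches the global minimum
```

Both runs use the identical update rule; only the initial weight differs, which is exactly why backpropagation's result can depend on initialization.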
8.

What is meant by "generalized" in the statement "backpropagation is a generalized delta rule"?

A. because delta rule can be extended to hidden layer units
B. because delta is applied to only input and output layers, thus making it more simple and generalized
C. it has no significance
D. none of the mentioned
Answer» A. because delta rule can be extended to hidden layer units
9.

Is there feedback in the final stage of the backpropagation algorithm?

A. yes
B. no
Answer» B. no
10.

What is true regarding the backpropagation rule?

A. it is also called generalized delta rule
B. error in output is propagated backwards only to determine weight updates
C. there is no feedback of signal at any stage
D. all of the mentioned
Answer» D. all of the mentioned
11.

Is it true that the backpropagation law is also known as the generalized delta rule?

A. yes
B. no
Answer» A. yes
12.

What is the objective of backpropagation algorithm?

A. to develop a learning algorithm for a multilayer feedforward neural network
B. to develop a learning algorithm for a single-layer feedforward neural network
C. to develop a learning algorithm for a multilayer feedforward neural network, so that the network can be trained to capture the mapping implicitly
D. none of the mentioned
Answer» C. to develop a learning algorithm for a multilayer feedforward neural network, so that the network can be trained to capture the mapping implicitly
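Backpropagation's objective is to train a multilayer feedforward network so that the input-output mapping is captured implicitly in its weights. A compact sketch learning XOR, a mapping no single-layer network can represent (the 2-2-1 architecture, learning rate, and epoch count are illustrative choices, not from the quiz):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table: the mapping to be captured implicitly in the weights.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

# 2-input, 2-hidden, 1-output feedforward network with random initial weights.
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [random.uniform(-1, 1) for _ in range(2)]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
b_o = random.uniform(-1, 1)

def forward(x):
    h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(2)]
    o = sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
    return h, o

initial_error = sum((t - forward(x)[1]) ** 2 for x, t in data)

lr = 0.5
for _ in range(20000):
    for x, t in data:
        h, o = forward(x)
        d_o = (t - o) * o * (1.0 - o)                                  # output delta
        d_h = [h[j] * (1.0 - h[j]) * w_ho[j] * d_o for j in range(2)]  # hidden deltas
        for j in range(2):                                             # weight updates
            w_ho[j] += lr * d_o * h[j]
            w_ih[j][0] += lr * d_h[j] * x[0]
            w_ih[j][1] += lr * d_h[j] * x[1]
            b_h[j] += lr * d_h[j]
        b_o += lr * d_o

final_error = sum((t - forward(x)[1]) ** 2 for x, t in data)
```

After training, the squared error over the four XOR cases drops well below its initial value; the learned mapping lives only in the weights, which is what "captured implicitly" means.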