Explore topic-wise MCQs in Artificial Intelligence.

This section includes 940 MCQs, each a curated multiple-choice question to sharpen your Artificial Intelligence knowledge and support exam preparation. Choose a topic below to get started.

51.

Support vectors are the data points that lie closest to the decision surface.

A. true
B. false
Answer» A. true
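As a quick illustration, scikit-learn exposes the support vectors of a fitted classifier; in this sketch (toy data and the C value are illustrative assumptions), the support vectors are exactly the training points with the smallest absolute decision-function values, i.e. the points closest to the decision surface.

```python
# Sketch: support vectors are the training points nearest the decision
# surface. Toy data and C value are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin
# The support vectors have the smallest |decision_function| values
print(clf.support_vectors_)
```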
52.

We usually use feature normalization before using the Gaussian kernel in SVM. Which of the following is true about feature normalization?
1. We do feature normalization so that no single feature dominates the others
2. Sometimes, feature normalization is not feasible in the case of categorical variables
3. Feature normalization always helps when we use the Gaussian kernel in SVM

A. 1
B. 1 and 2
C. 1 and 3
D. 2 and 3
Answer» B. 1 and 2
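A common way to apply this in practice is a standardization step ahead of the RBF SVM, so that no feature's scale dominates the kernel distance. A minimal sketch (feature scales, sample size, and labels are all illustrative assumptions):

```python
# Sketch: standardize features before an RBF-kernel SVM so no feature's
# scale dominates the kernel distance. Data is an illustrative assumption.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) * np.array([1.0, 1000.0])  # very different scales
y = (X[:, 0] > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
Xs = model.named_steps["standardscaler"].transform(X)
print(Xs.mean(axis=0).round(6), Xs.std(axis=0).round(6))
```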
53.

What do you mean by generalization error in terms of the SVM?

A. how far the hyperplane is from the support vectors
B. how accurately the svm can predict outcomes for unseen data
C. the threshold amount of error in an svm
Answer» B. how accurately the svm can predict outcomes for unseen data
54.

The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs?

A. large datasets
B. small datasets
C. medium sized datasets
D. size does not matter
Answer» A. large datasets
55.

Suppose you have trained an SVM with a linear decision boundary. After training, you correctly infer that your SVM model is underfitting. Which of the following options would you more likely consider for the next SVM iteration?

A. you want to increase your data points
B. you want to decrease your data points
C. you will try to calculate more variables
D. you will try to reduce the features
Answer» C. you will try to calculate more variables
56.

For the given weather data, calculate the probability of not playing.

A. 0.4
B. 0.64
C. 0.36
D. 0.5
Answer» C. 0.36
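Assuming the question refers to the standard 14-row "play tennis" weather dataset (9 "yes" rows, 5 "no" rows), as these MCQs usually do, both prior probabilities can be checked directly:

```python
# Assuming the standard 14-row "play tennis" weather dataset
# (9 "yes" rows, 5 "no" rows) that these questions typically use:
yes_count, no_count = 9, 5
total = yes_count + no_count

p_play = yes_count / total       # 9/14
p_not_play = no_count / total    # 5/14
print(round(p_play, 2), round(p_not_play, 2))
```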
57.

Problem: Players will play if the weather is sunny. Is this statement correct?

A. true
B. false
Answer» A. true
58.

Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least-squares error on this data. You find that the correlation coefficient of one of the variables (say X1) with Y is -0.95. Which of the following is true for X1?

A. relation between the x1 and y is weak
B. relation between the x1 and y is strong
C. relation between the x1 and y is neutral
D. correlation cant judge the relationship
Answer» B. relation between the x1 and y is strong
59.

Which function is used for linear regression in R?

A. lm(formula, data)
B. lr(formula, data)
C. lrm(formula, data)
D. regression.linear(formula, data)
Answer» A. lm(formula, data)
60.

Which of the following method(s) does not have closed form solution for its coefficients?

A. ridge regression
B. lasso
C. both ridge and lasso
D. none of both
Answer» B. lasso
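The distinction can be verified numerically: ridge has the closed form β = (XᵀX + λI)⁻¹Xᵀy, which scikit-learn's Ridge reproduces exactly, while lasso's L1 term has no closed form and requires an iterative solver (coordinate descent in scikit-learn). A minimal sketch with toy data (all values assumed):

```python
# Sketch: ridge has the closed form beta = (X^T X + lam*I)^-1 X^T y,
# while lasso's L1 penalty has no closed form and needs an iterative
# solver (coordinate descent in scikit-learn). Toy data assumed.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

lam = 1.0
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
beta_sklearn = Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
print(beta_closed, beta_sklearn)
```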
61.

What is/are true about ridge regression?
1. When lambda is 0, the model works like a linear regression model
2. When lambda is 0, the model doesn't work like a linear regression model
3. When lambda goes to infinity, we get very, very small coefficients approaching 0
4. When lambda goes to infinity, we get very, very large coefficients approaching infinity

A. 1 and 3
B. 1 and 4
C. 2 and 3
D. 2 and 4
Answer» A. 1 and 3
62.

How does the number of observations influence overfitting? Choose the correct answer(s). Note: all other parameters are the same.
1. In case of fewer observations, it is easy to overfit the data.
2. In case of fewer observations, it is hard to overfit the data.
3. In case of more observations, it is easy to overfit the data.
4. In case of more observations, it is hard to overfit the data.

A. 1 and 4
B. 2 and 3
C. 1 and 3
D. none of these
Answer» A. 1 and 4
63.

It's possible to specify whether the scaling process must include both mean and standard deviation using the parameters ______.

A. with_mean=true/false
B. with_std=true/false
C. both a & b
D. none of the mentioned
Answer» C. both a & b
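These are parameters of scikit-learn's StandardScaler; a minimal sketch (toy values assumed) showing each flag in isolation:

```python
# Sketch: StandardScaler's with_mean / with_std flags toggle the two
# halves of standardization independently (toy values assumed).
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])

only_std = StandardScaler(with_mean=False, with_std=True).fit_transform(X)
only_mean = StandardScaler(with_mean=True, with_std=False).fit_transform(X)
print(only_std.ravel(), only_mean.ravel())
```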
64.

______ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky.

A. removing the whole line
B. creating sub-model to predict those features
C. using an automatic strategy to impute them according to the other known values
D. all above
Answer» A. removing the whole line
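The strategies listed above can be contrasted on a toy array (values assumed): dropping the whole line discards data, while an automatic strategy such as scikit-learn's SimpleImputer fills the holes from the other known values.

```python
# Sketch contrasting the strategies listed above on a toy array (assumed
# values): drop the whole line vs. impute from the other known values.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])

dropped = X[~np.isnan(X).any(axis=1)]                      # remove the whole line
imputed = SimpleImputer(strategy="mean").fit_transform(X)  # automatic strategy
print(dropped.shape, imputed[1, 0])
```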
65.

In which of the following is each categorical label first turned into a positive integer and then transformed into a vector where only one feature is 1 while all the others are 0?

A. labelencoder class
B. dictvectorizer
C. labelbinarizer class
D. featurehasher
Answer» C. labelbinarizer class
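A minimal sketch of the LabelBinarizer behavior the question describes (labels are illustrative):

```python
# Sketch of LabelBinarizer: each label becomes an integer index, then a
# one-hot row (labels are illustrative).
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
onehot = lb.fit_transform(["red", "green", "blue", "green"])
print(lb.classes_)
print(onehot)
```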
66.

In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ______ valid options.

A. 1
B. 2
C. 3
D. 4
Answer» B. 2
67.

scikit-learn also provides functions for creating dummy datasets from scratch:

A. make_classification()
B. make_regression()
C. make_blobs()
D. all above
Answer» D. all above
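A minimal sketch of all three generators named above (sizes and seeds are arbitrary assumptions):

```python
# Sketch of all three generators named above (sizes and seeds assumed).
from sklearn.datasets import make_blobs, make_classification, make_regression

Xc, yc = make_classification(n_samples=100, n_features=4, random_state=0)
Xr, yr = make_regression(n_samples=100, n_features=3, random_state=0)
Xb, yb = make_blobs(n_samples=100, centers=3, random_state=0)
print(Xc.shape, Xr.shape, Xb.shape)
```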
68.

Which of the following methods is used to find the optimal features for cluster analysis?

A. k-means
B. density-based spatial clustering
C. spectral clustering
D. all above
Answer» D. all above
69.

Which of the following sentences is FALSE regarding regression?

A. it relates inputs to outputs.
B. it is used for prediction.
C. it may be used for interpretation.
D. it discovers causal relationships.
Answer» D. it discovers causal relationships.
70.

Let's say you are working with categorical feature(s), and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) on the categorical feature(s). What challenges may you face if you apply OHE on a categorical variable of the train dataset?

A. all categories of categorical variable are not present in the test dataset.
B. frequency distribution of categories is different in train as compared to the test dataset.
C. train and test always have same distribution.
D. both a and b
Answer» D. both a and b
71.

Reinforcement learning is particularly efficient when ______.

A. the environment is not completely deterministic
B. it's often very dynamic
C. it's impossible to have a precise error measure
D. all above
Answer» D. all above
72.

In reinforcement learning, this feedback is usually called ______.

A. overfitting
B. overlearning
C. reward
D. none of above
Answer» C. reward
73.

For the given weather data, calculate the probability of playing.

A. 0.4
B. 0.64
C. 0.29
D. 0.75
Answer» B. 0.64
74.

100 people are at a party. The given data shows how many wear pink or not, and whether each guest is a man or not. Imagine a pink-wearing guest leaves. Was it a man?

A. true
B. false
Answer» A. true
75.

In a real problem, you should check to see if the data is separable and then include slack variables if it is not separable.

A. true
B. false
Answer» B. false
76.

______ showed better performance than other approaches, even without a context-based model.

A. machine learning
B. deep learning
C. reinforcement learning
D. supervised learning
Answer» B. deep learning
77.

There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows simpler algorithms called ______.

A. regression
B. accuracy
C. model-free
D. scalable
Answer» C. model-free
78.

In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ______.

A. deep learning
B. machine learning
C. reinforcement learning
D. unsupervised learning
Answer» A. deep learning
79.

In reinforcement learning, this feedback is usually called ______.

A. overfitting
B. overlearning
C. reward
D. none of above
Answer» C. reward
80.

______ can be adopted when it's necessary to categorize a large amount of data with a few complete examples, or when there's the need to impose some constraints on a clustering algorithm.

A. supervised
B. semi-supervised
C. reinforcement
D. clusters
Answer» B. semi-supervised
81.

Which of the following is not supervised learning?

A. pca
B. decision tree
C. naive bayesian
D. linear regression
Answer» A. pca
82.

What is the purpose of performing cross- validation?

A. to assess the predictive performance of the models
B. to judge how the trained model performs outside the sample on test data
C. both a and b
Answer» C. both a and b
83.

In SVR we try to fit the error within a certain threshold.

A. true
B. false
Answer» A. true
84.

In SVM, Kernel function is used to map a lower dimensional data into a higher dimensional data.

A. true
B. false
Answer» A. true
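This mapping is what makes non-linearly-separable data separable; a sketch on scikit-learn's concentric-circles dataset (parameters are illustrative assumptions):

```python
# Sketch: on concentric circles, a linear SVM cannot separate the classes,
# while the RBF kernel (an implicit map to a higher-dimensional space)
# can. Dataset parameters are illustrative assumptions.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.0, random_state=0)
lin_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)
print(lin_acc, rbf_acc)
```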
85.

SVM algorithms use a set of mathematical functions that are defined as the kernel.

A. true
B. false
Answer» A. true
86.

The ______ of the hyperplane depends upon the number of features.

A. dimension
B. classification
C. reduction
Answer» A. dimension
87.

Hyperplanes are ______ boundaries that help classify the data points.

A. usual
B. decision
C. parallel
Answer» B. decision
88.

The objective of the support vector machine algorithm is to find a hyperplane in an N-dimensional space (N being the number of features) that distinctly classifies the data points.

A. true
B. false
Answer» A. true
89.

SVM can solve linear and non-linear problems.

A. true
B. false
Answer» A. true
90.

Suppose you are building an SVM model on data X. The data X can be error-prone, which means that you should not trust any specific data point too much. Now think that you want to build an SVM model which has a quadratic kernel function of polynomial degree 2 that uses slack variable C as one of its hyperparameters. What would happen when you use a very large value of C (C → infinity)?

A. we can still classify data correctly for given setting of hyper parameter c
B. we can not classify data correctly for given setting of hyper parameter c
C. cant say
D. none of these
Answer» A. we can still classify data correctly for given setting of hyper parameter c
91.

If you remove the non-red circled points (the points that are not support vectors) from the data, will the decision boundary change?

A. true
B. false
Answer» B. false
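The underlying property (non-support-vector points don't affect the boundary) can be checked by refitting on the support vectors alone; on separable toy data (assumed here), the learned boundary is unchanged.

```python
# Sketch: refitting on the support vectors alone reproduces the same
# boundary, so non-support-vector points do not matter (toy data assumed).
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
keep = clf.support_                                  # support vectors only
clf2 = SVC(kernel="linear", C=1e6).fit(X[keep], y[keep])
print(clf.coef_, clf2.coef_)
```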
92.

Suppose you are using a linear SVM classifier on a 2-class classification problem, where the points circled in red are the support vectors. If you remove one of these red points from the data, will the decision boundary change?

A. yes
B. no
Answer» A. yes
93.

Which of the following options is true regarding regression and correlation? Note: y is the dependent variable and x is the independent variable.

A. the relationship is symmetric between x and y in both.
B. the relationship is not symmetric between x and y in both.
C. the relationship is not symmetric between x and y in case of correlation, but in case of regression it is symmetric.
D. the relationship is symmetric between x and y in case of correlation, but in case of regression it is not symmetric.
Answer» D. the relationship is symmetric between x and y in case of correlation, but in case of regression it is not symmetric.
94.

Correlated variables can have a zero correlation coefficient. True or False?

A. true
B. false
Answer» A. true
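A classic sketch: y = x² is fully determined by x, yet for an x symmetric around zero the linear (Pearson) correlation coefficient is exactly zero.

```python
# Sketch: y = x**2 is fully determined by x, yet the (linear) Pearson
# correlation is exactly zero because x is symmetric around 0.
import numpy as np

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2
r = np.corrcoef(x, y)[0, 1]
print(r)  # 0.0
```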
95.

We can also compute the coefficients of linear regression with the help of an analytical method called the Normal Equation. Which of the following is/are true about the Normal Equation?
1. We don't have to choose the learning rate
2. It becomes slow when the number of features is very large
3. No need to iterate

A. 1 and 2
B. 1 and 3
C. 2 and 3
D. 1, 2 and 3
Answer» D. 1, 2 and 3
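A minimal sketch of the Normal Equation on noise-free toy data (values assumed): no learning rate and no iteration are needed, and the cost of solving the d×d system is what makes it slow when the number of features d is very large.

```python
# Sketch of the Normal Equation beta = (X^T X)^-1 X^T y on noise-free
# toy data: no learning rate, no iteration; solving the d x d system is
# what becomes slow when the number of features d is very large.
import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(30), rng.normal(size=(30, 2))]  # column of 1s = intercept
true_beta = np.array([2.0, 1.0, -3.0])
y = X @ true_beta

beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)
```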
96.

Which of the following statement(s) can be true after adding a variable to a linear regression model?
1. R-Squared and Adjusted R-squared both increase
2. R-Squared increases and Adjusted R-squared decreases
3. R-Squared decreases and Adjusted R-squared decreases
4. R-Squared decreases and Adjusted R-squared increases

A. 1 and 2
B. 1 and 3
C. 2 and 4
D. none of the above
Answer» A. 1 and 2
97.

Suppose we fit Lasso regression to a data set which has 100 features (X1, X2, …, X100). Now we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct?

A. it is more likely for x1 to be excluded from the model
B. it is more likely for x1 to be included in the model
C. cant say
D. none of these
Answer» B. it is more likely for x1 to be included in the model
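The effect can be made deterministic on a one-feature toy problem (alpha and data are assumptions chosen so the soft-threshold arithmetic works out exactly): at the original scale the lasso zeroes the coefficient out, while after multiplying the feature by 10 the coefficient it needs is 10× smaller, so the L1 penalty keeps it in the model.

```python
# Sketch: multiplying a feature by 10 shrinks the coefficient it needs by
# 10, so the L1 penalty is less likely to zero it out. alpha and data are
# assumptions chosen so the soft-threshold arithmetic is deterministic.
import numpy as np
from sklearn.linear_model import Lasso

x = np.array([[-1.0], [1.0], [-1.0], [1.0]])
y = 0.5 * x.ravel()

coef_orig = Lasso(alpha=0.6, fit_intercept=False).fit(x, y).coef_[0]
coef_scaled = Lasso(alpha=0.6, fit_intercept=False).fit(10 * x, y).coef_[0]
print(coef_orig, coef_scaled)  # excluded vs. included
```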
98.

Suppose you are training a linear regression model. Now consider these points:
1. Overfitting is more likely if we have less data
2. Overfitting is more likely when the hypothesis space is small
Which of the above statement(s) are correct?

A. both are false
B. 1 is false and 2 is true
C. 1 is true and 2 is false
D. both are true
Answer» C. 1 is true and 2 is false
99.

Generally, which of the following method(s) is used for predicting a continuous dependent variable?
1. Linear Regression
2. Logistic Regression

A. 1 and 2
B. only 1
C. only 2
D. none of these
Answer» B. only 1
100.

Let's say a linear regression model perfectly fits the training data (train error is zero). Which of the following statements is true?

A. you will always have test error zero
B. you can not have test error zero
C. none of the above
Answer» C. none of the above