

101. |
Identify the various approaches for machine learning. |
A. | concept vs classification learning |
B. | symbolic vs statistical learning |
C. | inductive vs analytical learning |
D. | all above |
Answer» D. all above | |
102. |
What is the function of Unsupervised Learning? |
A. | find clusters of the data and find low-dimensional representations of the data |
B. | find interesting directions in data and find novel observations/ database cleaning |
C. | interesting coordinates and correlations |
D. | all |
Answer» D. all | |
103. |
What are the two methods used for the calibration in Supervised Learning? |
A. | platt calibration and isotonic regression |
B. | statistics and informal retrieval |
Answer» A. platt calibration and isotonic regression | |
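Platt scaling fits a sigmoid that maps raw classifier scores to probabilities; isotonic regression instead fits a non-decreasing step function. A minimal pure-Python sketch of both ideas (the sigmoid parameters `a` and `b` here are illustrative, not fitted; in practice both methods are fitted on held-out data):

```python
import math

def platt(score, a=-2.0, b=0.0):
    # Platt scaling: a sigmoid over the raw score; a and b are
    # illustrative values, normally learned by maximum likelihood.
    return 1.0 / (1.0 + math.exp(a * score + b))

def isotonic_fit(values):
    # Isotonic regression via Pool Adjacent Violators: returns the
    # non-decreasing sequence closest to `values` in least squares.
    blocks = []  # each block: [mean, count]
    for v in values:
        blocks.append([float(v), 1])
        # merge adjacent blocks that violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    out = []
    for m, c in blocks:
        out.extend([m] * c)
    return out
```

For example, `isotonic_fit([1, 3, 2, 4])` pools the out-of-order pair (3, 2) into their mean, yielding a monotone sequence.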
104. |
Which of the following are models for feature extraction? |
A. | regression |
B. | classification |
C. | none of the above |
Answer» C. none of the above | |
105. |
Let's say a Linear regression model perfectly fits the training data (train error is zero). Which of the following statements is true? |
A. | you will always have test error zero |
B. | you can not have test error zero |
C. | none of the above |
Answer» C. none of the above | |
106. |
In a linear regression problem, we are using R-squared to measure goodness-of-fit. We add a feature to the linear regression model and retrain the same model. Which of the following options is true? |
A. | a. if r squared increases, this variable is significant. |
B. | b. if r squared decreases, this variable is not significant. |
C. | c. individually r squared cannot tell about variable importance. we can't say anything about it right now. |
D. | d. none of these. |
Answer» C. c. individually r squared cannot tell about variable importance. we can't say anything about it right now. | |
107. |
Which one of the following is true about Heteroskedasticity? |
A. | a. linear regression with varying error terms |
B. | b. linear regression with constant error terms |
C. | c. linear regression with zero error terms |
D. | d. none of these |
Answer» A. a. linear regression with varying error terms | |
108. |
Which of the following assumptions do we make while deriving linear regression parameters? 1. The true relationship between dependent y and predictor x is linear 2. The model errors are statistically independent 3. The errors are normally distributed with a 0 mean and constant standard deviation 4. The predictor x is non-stochastic and is measured error-free |
A. | a. 1,2 and 3. |
B. | b. 1,3 and 4. |
C. | c. 1 and 3. |
D. | d. all of above. |
Answer» D. d. all of above. | |
109. |
To test the linear relationship of y (dependent) and x (independent) continuous variables, which of the following plots is best suited? |
A. | a. scatter plot |
B. | b. barchart |
C. | c. histograms |
D. | d. none of these |
Answer» A. a. scatter plot | |
110. |
Which of the following is true about Ridge or Lasso regression methods in case of feature selection? |
A. | a. ridge regression uses subset selection of features |
B. | b. lasso regression uses subset selection of features |
C. | c. both use subset selection of features |
D. | d. none of above |
Answer» B. b. lasso regression uses subset selection of features | |
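The difference shows up in the one-dimensional update rules behind the two penalties: Lasso's soft-thresholding step sets small coefficients exactly to zero (which is why it performs implicit feature subset selection), while Ridge's shrinkage only rescales coefficients toward zero. A minimal sketch with illustrative function names:

```python
def lasso_update(w, lam):
    # soft-thresholding: coefficients within lam of zero become
    # exactly zero, so the corresponding feature is dropped
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def ridge_update(w, lam):
    # ridge shrinkage: scales toward zero but never reaches it exactly
    return w / (1.0 + lam)
```

A small coefficient survives the Ridge update (merely shrunken) but is zeroed out by the Lasso update.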
111. |
Which of the following statement(s) can be true post adding a variable in a linear regression model? 1. R-Squared and Adjusted R-squared both increase 2. R-Squared increases and Adjusted R-squared decreases 3. R-Squared decreases and Adjusted R-squared decreases 4. R-Squared decreases and Adjusted R-squared increases |
A. | a. 1 and 2 |
B. | b. 1 and 3 |
C. | c. 2 and 4 |
D. | d. none of the above |
Answer» A. a. 1 and 2 | |
112. |
Suppose you are using a Linear SVM classifier for a 2-class classification problem. Some points, circled red, represent support vectors. If you remove any one of the red-circled points from the data, will the decision boundary change? |
A. | yes |
B. | no |
Answer» A. yes | |
113. |
If you remove the non-red circled points from the data, the decision boundary will change? |
A. | true |
B. | false |
Answer» B. false | |
114. |
When the C parameter is set to infinite, which of the following holds true? |
A. | the optimal hyperplane if exists, will be the one that completely separates the data |
B. | the soft-margin classifier will separate the data |
C. | none of the above |
Answer» A. the optimal hyperplane if exists, will be the one that completely separates the data | |
115. |
Suppose you are building a SVM model on data X. The data X can be error-prone, which means that you should not trust any specific data point too much. Now suppose you want to build a SVM model which has a quadratic kernel function of polynomial degree 2 and uses the slack variable C as one of its hyperparameters. What would happen when you use a very large value of C (C → infinity)? |
A. | we can still classify data correctly for given setting of hyper parameter c |
B. | we can not classify data correctly for given setting of hyper parameter c |
C. | can't say |
D. | none of these |
Answer» A. we can still classify data correctly for given setting of hyper parameter c | |
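The role of C can be made concrete with the soft-margin objective, 0.5·||w||² + C · Σ hinge-loss: as C grows, any training misclassification dominates the objective, pushing the optimizer toward a boundary that fits every point the kernel can separate. A toy sketch (the data and weights below are illustrative):

```python
def soft_margin_objective(w, b, X, y, C):
    # 0.5*||w||^2 + C * sum of hinge losses max(0, 1 - y*(w.x + b))
    reg = 0.5 * sum(wi * wi for wi in w)
    hinge = sum(max(0.0, 1.0 - yi * (sum(wi * xi for wi, xi in zip(w, x)) + b))
                for x, yi in zip(X, y))
    return reg + C * hinge

# one point ([0.2] with label -1) is misclassified by this w
X = [[1.0], [-1.0], [0.2]]
y = [1, -1, -1]
w, b = [1.0], 0.0
```

With C = 1 the single violation costs little; with C = 1000 the same violation dominates the objective, which is why a huge C behaves like a hard-margin classifier.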
116. |
SVM can solve linear and non-linear problems |
A. | true |
B. | false |
Answer» A. true | |
117. |
The objective of the support vector machine algorithm is to find a hyperplane in an N-dimensional space (N is the number of features) that distinctly classifies the data points. |
A. | true |
B. | false |
Answer» A. true | |
118. |
Hyperplanes are                        boundaries that help classify the data points. |
A. | usual |
B. | decision |
C. | parallel |
Answer» B. decision | |
119. |
The          of the hyperplane depends upon the number of features. |
A. | dimension |
B. | classification |
C. | reduction |
Answer» A. dimension | |
120. |
Hyperplanes are decision boundaries that help classify the data points. |
A. | true |
B. | false |
Answer» A. true | |
121. |
SVM algorithms use a set of mathematical functions that are defined as the kernel. |
A. | true |
B. | false |
Answer» A. true | |
122. |
In SVM, Kernel function is used to map a lower dimensional data into a higher dimensional data. |
A. | true |
B. | false |
Answer» A. true | |
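This mapping can be verified directly for the degree-2 polynomial kernel K(x, z) = (x·z + 1)²: evaluating K in the original 2-D space gives exactly the dot product of explicit 6-D feature vectors, so the kernel works in the higher-dimensional space without ever constructing it. A small sketch (the feature map below follows one common convention):

```python
import math

def poly_kernel(x, z):
    # degree-2 polynomial kernel, evaluated in the original 2-D space
    return (sum(a * b for a, b in zip(x, z)) + 1) ** 2

def feature_map(p):
    # explicit 6-D map such that
    # feature_map(x) . feature_map(z) == poly_kernel(x, z)
    x1, x2 = p
    r = math.sqrt(2.0)
    return [1.0, r * x1, r * x2, x1 * x1, x2 * x2, r * x1 * x2]
```

For x = (1, 2) and z = (3, 1), both sides evaluate to (5 + 1)² = 36.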
123. |
In SVR we try to fit the error within a certain threshold. |
A. | true |
B. | false |
Answer» A. true | |
124. |
Which of the following is not supervised learning? |
A. | pca |
B. | decision tree |
C. | naive bayesian |
D. | linear regression |
Answer» A. pca | |
125. |
           can be adopted when it's necessary to categorize a large amount of data with a few complete examples or when there's the need to impose some constraints to a clustering algorithm. |
A. | supervised |
B. | semi-supervised |
C. | reinforcement |
D. | clusters |
Answer» B. semi-supervised | |
126. |
In reinforcement learning, this feedback is usually called as     . |
A. | overfitting |
B. | overlearning |
C. | reward |
D. | none of above |
Answer» C. reward | |
127. |
In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called         . |
A. | deep learning |
B. | machine learning |
C. | reinforcement learning |
D. | unsupervised learning |
Answer» A. deep learning | |
128. |
There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows simpler algorithms called          . |
A. | regression |
B. | accuracy |
C. | model-free |
D. | scalable |
Answer» C. model-free | |
129. |
            showed better performance than other approaches, even without a context- based model |
A. | machine learning |
B. | deep learning |
C. | reinforcement learning |
D. | supervised learning |
Answer» B. deep learning | |
130. |
If two variables are correlated, is it necessary that they have a linear relationship? |
A. | yes |
B. | no |
Answer» B. no | |
131. |
Correlated variables can have zero correlation coefficient. True or False? |
A. | true |
B. | false |
Answer» A. true | |
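A concrete check: y = x² on symmetric inputs is perfectly dependent on x, yet its Pearson correlation coefficient is zero, because the coefficient measures only linear association. A minimal sketch:

```python
def pearson(xs, ys):
    # sample Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x * x for x in xs]  # fully dependent on x, but not linearly
```

`pearson(xs, ys)` is 0 here, while a genuinely linear relationship such as y = 2x + 1 gives a coefficient of 1.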
132. |
Suppose we fit Lasso Regression to a data set which has 100 features (X1, X2, …, X100). Now we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct? |
A. | it is more likely for x1 to be excluded from the model |
B. | it is more likely for x1 to be included in the model |
C. | can't say |
D. | none of these |
Answer» B. it is more likely for x1 to be included in the model | |
133. |
Suppose you are training a linear regression model. Now consider these points: 1. Overfitting is more likely if we have less data 2. Overfitting is more likely when the hypothesis space is small. Which of the above statement(s) are correct? |
A. | both are false |
B. | 1 is false and 2 is true |
C. | 1 is true and 2 is false |
D. | both are true |
Answer» C. 1 is true and 2 is false | |
134. |
We can also compute the coefficient of linear regression with the help of an analytical method called the Normal Equation. Which of the following is/are true about the Normal Equation? 1. We don't have to choose the learning rate 2. It becomes slow when the number of features is very large 3. No need to iterate |
A. | 1 and 2 |
B. | 1 and 3. |
C. | 2 and 3. |
D. | 1,2 and 3. |
Answer» D. 1,2 and 3. | |
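For a single feature, the Normal Equation θ = (XᵀX)⁻¹Xᵀy reduces to the familiar closed-form slope and intercept: no learning rate and no iterations, though inverting XᵀX is what makes it slow when the number of features is very large. A sketch of the one-feature case:

```python
def normal_equation_1d(xs, ys):
    # closed-form least squares for y = b0 + b1*x, i.e. the 1-feature
    # case of theta = (X^T X)^-1 X^T y; no learning rate, no iterations
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1
```

Fitting the exactly-linear points (0, 1), (1, 3), (2, 5) recovers intercept 1 and slope 2 in a single closed-form step.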
135. |
Which of the following options is true regarding Regression and Correlation? Note: y is the dependent variable and x is the independent variable. |
A. | the relationship is symmetric between x and y in both. |
B. | the relationship is not symmetric between x and y in both. |
C. | the relationship is not symmetric between x and y in case of correlation but in case of regression it is symmetric. |
D. | the relationship is symmetric between x and y in case of correlation but in case of regression it is not symmetric. |
Answer» D. the relationship is symmetric between x and y in case of correlation but in case of regression it is not symmetric. | |
136. |
Generally, which of the following method(s) is used for predicting a continuous dependent variable? 1. Linear Regression 2. Logistic Regression |
A. | 1 and 2 |
B. | only 1 |
C. | only 2 |
D. | none of these. |
Answer» B. only 1 | |
137. |
In a real problem, you should check to see if the SVM is separable and then include slack variables if it is not separable. |
A. | true |
B. | false |
Answer» B. false | |
138. |
100 people are at a party. The given data tells how many wear pink or not, and whether each guest is a man or not. Imagine a pink-wearing guest leaves; was it a man? |
A. | true |
B. | false |
Answer» A. true | |
139. |
For the given weather data, calculate the probability of playing. |
A. | 0.4 |
B. | 0.64 |
C. | 0.29 |
D. | 0.75 |
Answer» B. 0.64 | |
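Assuming the classic 14-day "play tennis" weather table (9 "yes" days and 5 "no" days; the table itself is not reproduced above), the prior probability of playing is 9/14 ≈ 0.64:

```python
from fractions import Fraction

# the standard play-tennis weather dataset: 9 "yes" and 5 "no" days
# (an assumption here, since the table is not shown in the question)
outcomes = ["yes"] * 9 + ["no"] * 5
p_play = Fraction(outcomes.count("yes"), len(outcomes))
print(round(float(p_play), 2))
```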
140. |
In SVR we try to fit the error within a certain threshold. |
A. | true |
B. | false |
Answer» A. true | |
141. |
In reinforcement learning, this feedback is usually called as     . |
A. | overfitting |
B. | overlearning |
C. | reward |
D. | none of above |
Answer» C. reward | |
142. |
Reinforcement learning is particularly efficient when          . |
A. | the environment is not completely deterministic |
B. | it's often very dynamic |
C. | it's impossible to have a precise error measure |
D. | all above |
Answer» D. all above | |
143. |
Let's say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one hot encoding (OHE) on the categorical feature(s). What challenges may you face if you have applied OHE on a categorical variable of the train dataset? |
A. | all categories of categorical variable are not present in the test dataset. |
B. | frequency distribution of categories is different in train as compared to the test dataset. |
C. | train and test always have same distribution. |
D. | both a and b |
Answer» D. both a and b | |
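The failure mode in option (a) is easy to reproduce: fit a one-hot encoder on the train column only, then transform a test value whose category was never seen. A sketch with hypothetical helper names:

```python
def fit_one_hot(train_values):
    # learn category -> column index from the training column only
    return {c: i for i, c in enumerate(sorted(set(train_values)))}

def one_hot(value, mapping):
    vec = [0] * len(mapping)
    if value in mapping:
        vec[mapping[value]] = 1
    # an unseen test category falls through to an all-zero vector
    # (other implementations raise an error instead)
    return vec

mapping = fit_one_hot(["red", "blue", "red"])
```

`one_hot("green", mapping)` returns an all-zero vector because "green" never appeared in training, which is exactly the train/test category mismatch the question describes.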
144. |
Which of the following method is used to find the optimal features for cluster analysis |
A. | k-means |
B. | density-based spatial clustering |
C. | spectral clustering find clusters |
D. | all above |
Answer» D. all above | |
145. |
         which can accept a NumPy RandomState generator or an integer seed. |
A. | make_blobs |
B. | random_state |
C. | test_size |
D. | training_size |
Answer» B. random_state | |
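`random_state` is the scikit-learn parameter with that behavior. The idea behind it, reproducible shuffling from a fixed seed, can be sketched in plain Python (a simplified stand-in, not scikit-learn's implementation):

```python
import random

def shuffle_split(data, test_size=0.25, random_state=0):
    # an integer seed (or a seeded generator) makes the split repeatable
    rng = random.Random(random_state)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(data) * (1.0 - test_size))
    return [data[i] for i in idx[:cut]], [data[i] for i in idx[cut:]]
```

Calling it twice with the same seed yields the same train/test partition, which is what makes experiments reproducible.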
146. |
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed and scikit-learn offers at least         valid options |
A. | 1 |
B. | 2 |
C. | 3 |
D. | 4 |
Answer» B. 2 | |
147. |
In which of the following is each categorical label first turned into a positive integer and then transformed into a vector where only one feature is 1 while all the others are 0? |
A. | labelencoder class |
B. | dictvectorizer |
C. | labelbinarizer class |
D. | featurehasher |
Answer» C. labelbinarizer class | |
148. |
           is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky. |
A. | removing the whole line |
B. | creating sub-model to predict those features |
C. | using an automatic strategy to input them according to the other known values |
D. | all above |
Answer» A. removing the whole line | |
149. |
It's possible to specify if the scaling process must include both mean and standard deviation using the parameters              . |
A. | with_mean=true/false |
B. | with_std=true/false |
C. | both a & b |
D. | none of the mentioned |
Answer» C. both a & b | |
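The two flags act independently, mirroring scikit-learn's StandardScaler(with_mean=..., with_std=...): one centers by the mean, the other divides by the standard deviation. A simplified sketch (not scikit-learn's implementation):

```python
def scale(xs, with_mean=True, with_std=True):
    # simplified StandardScaler: with_mean centers the data,
    # with_std divides by the (population) standard deviation
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    out = xs
    if with_mean:
        out = [x - m for x in out]
    if with_std and sd > 0:
        out = [x / sd for x in out]
    return out
```

With both flags on, the output has zero mean and unit variance; turning either flag off skips just that step.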
150. |
How does the number of observations influence overfitting? Choose the correct answer(s). Note: all other parameters are the same. 1. In case of fewer observations, it is easy to overfit the data. 2. In case of fewer observations, it is hard to overfit the data. 3. In case of more observations, it is easy to overfit the data. 4. In case of more observations, it is hard to overfit the data. |
A. | 1 and 4 |
B. | 2 and 3 |
C. | 1 and 3 |
D. | none of these |
Answer» A. 1 and 4 | |