This section includes 607 multiple-choice questions (MCQs) to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation. Choose a topic below to get started.
| 151. |
Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option below that describes the relationship of bias and variance with lambda. |
| A. | in case of very large lambda; bias is low, variance is low |
| B. | in case of very large lambda; bias is low, variance is high |
| C. | in case of very large lambda; bias is high, variance is low |
| D. | in case of very large lambda; bias is high, variance is high |
| Answer» C. in case of very large lambda; bias is high, variance is low | |
| 152. |
What is/are true about ridge regression? 1. When lambda is 0, the model works like a linear regression model. 2. When lambda is 0, the model doesn't work like a linear regression model. 3. When lambda goes to infinity, we get very small coefficients approaching 0. 4. When lambda goes to infinity, we get very large coefficients approaching infinity. |
| A. | 1 and 3 |
| B. | 1 and 4 |
| C. | 2 and 3 |
| D. | 2 and 4 |
| Answer» A. 1 and 3 | |
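Both correct statements can be checked empirically. A minimal sketch with scikit-learn (synthetic data; scikit-learn calls the regularization strength `alpha` rather than lambda):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

# lambda = 0 (alpha=0): ridge reduces to ordinary least squares
ols = LinearRegression().fit(X, y)
ridge0 = Ridge(alpha=0.0).fit(X, y)
print(np.allclose(ols.coef_, ridge0.coef_, atol=1e-5))  # True

# very large lambda: coefficients are shrunk towards 0
ridge_big = Ridge(alpha=1e6).fit(X, y)
print(np.max(np.abs(ridge_big.coef_)) < 0.01)  # True
```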
| 153. |
Which of the following method(s) does not have a closed-form solution for its coefficients? |
| A. | ridge regression |
| B. | lasso |
| C. | both ridge and lasso |
| D. | neither of them |
| Answer» B. lasso | |
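Ridge does have a closed-form solution, β = (XᵀX + λI)⁻¹Xᵀy; it is the lasso's non-differentiable L1 penalty that rules one out. A minimal sketch verifying the ridge formula against scikit-learn (synthetic data):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = X @ np.array([1.5, -2.0]) + rng.normal(scale=0.1, size=40)
lam = 1.0

# Closed-form ridge solution: beta = (X^T X + lambda*I)^(-1) X^T y
beta = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

model = Ridge(alpha=lam, fit_intercept=False).fit(X, y)
print(np.allclose(beta, model.coef_))  # True
```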
| 154. |
Which function is used for linear regression in R? |
| A. | lm(formula, data) |
| B. | lr(formula, data) |
| C. | lrm(formula, data) |
| D. | regression.linear(formula, data) |
| Answer» A. lm(formula, data) | |
| 155. |
Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least-square error on this data. You found that the correlation coefficient for one of the variables (say X1) with Y is -0.95. Which of the following is true for X1? |
| A. | relation between the x1 and y is weak |
| B. | relation between the x1 and y is strong |
| C. | relation between the x1 and y is neutral |
| D. | correlation cant judge the relationship |
| Answer» B. relation between the x1 and y is strong | |
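A correlation coefficient of -0.95 has a large magnitude, so the linear relation is strong (and negative). A small illustration with synthetic data whose population correlation is about -0.95:

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(size=500)
y = -3.0 * x1 + rng.normal(size=500)  # population correlation = -3/sqrt(10) ≈ -0.95

r = np.corrcoef(x1, y)[0, 1]
print(r < -0.9)  # True: large |r| means a strong (here inverse) linear relation
```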
| 156. |
For the given weather data, calculate the probability of not playing. |
| A. | 0.4 |
| B. | 0.64 |
| C. | 0.36 |
| D. | 0.5 |
| Answer» C. 0.36 | |
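The weather table itself is not reproduced here; assuming the classic 14-day "play tennis" dataset this question usually refers to (9 "yes" days, 5 "no" days), the probability of not playing is 5/14 ≈ 0.36:

```python
# Assumed counts from the classic 14-day "play tennis" table: 9 Yes, 5 No
outcomes = ["yes"] * 9 + ["no"] * 5
p_not_play = outcomes.count("no") / len(outcomes)
print(round(p_not_play, 2))  # 0.36
```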
| 157. |
The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs? |
| A. | large datasets |
| B. | small datasets |
| C. | medium sized datasets |
| D. | size does not matter |
| Answer» A. large datasets | |
| 158. |
Support vectors are the data points that lie closest to the decision surface. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 159. |
Suppose you are using a Linear SVM classifier for a 2-class classification problem. You have been given the following data, in which some points are circled red to represent the support vectors. If you remove any one of the red points from the data, will the decision boundary change? |
| A. | yes |
| B. | no |
| Answer» A. yes | |
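This can be demonstrated with scikit-learn's SVC: refitting after dropping a support vector shifts the separating hyperplane. The data below is an illustrative toy set, not the figure from the question:

```python
import numpy as np
from sklearn.svm import SVC

# Tiny linearly separable 2-class set (illustrative only)
X = np.array([[1.0, 1.0], [2.0, 1.0], [1.5, 2.0],
              [4.0, 4.0], [5.0, 4.0], [4.5, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C ~ hard margin
sv = clf.support_                             # indices of the support vectors

# Drop one support vector and refit: the separating hyperplane changes
mask = np.ones(len(X), dtype=bool)
mask[sv[0]] = False
clf2 = SVC(kernel="linear", C=1e6).fit(X[mask], y[mask])
print(np.allclose(clf.coef_, clf2.coef_))  # False -> the boundary moved
```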
| 160. |
Linear SVMs have no hyperparameters that need to be set by cross-validation |
| A. | true |
| B. | false |
| Answer» B. false | |
| 161. |
For the given weather data, what is the probability that players will play if the weather is sunny? |
| A. | 0.5 |
| B. | 0.26 |
| C. | 0.73 |
| D. | 0.6 |
| Answer» D. 0.6 | |
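Assuming the classic "play tennis" counts (3 of the 9 "yes" days are sunny, 9 "yes" days out of 14, and 5 sunny days overall), Bayes' theorem gives 0.6:

```python
# Assumed counts from the classic "play tennis" table:
# P(sunny|yes) = 3/9, P(yes) = 9/14, P(sunny) = 5/14
p_yes_given_sunny = (3 / 9) * (9 / 14) / (5 / 14)
print(round(p_yes_given_sunny, 2))  # 0.6
```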
| 162. |
100 people are at a party. The given data shows how many wear pink or not, and whether each guest is a man or not. If a pink-wearing guest leaves, what is the probability that the guest is a man? |
| A. | 0.4 |
| B. | 0.2 |
| C. | 0.6 |
| D. | 0.45 |
| Answer» B. 0.2 | |
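The contingency table is missing here; the counts below are the ones this textbook problem usually uses and are an assumption (40 men of whom 5 wear pink, 60 women of whom 20 wear pink):

```python
# Assumed counts: 40 men (5 in pink), 60 women (20 in pink) -> 25 pink wearers total
men_pink, women_pink = 5, 20
p_man_given_pink = men_pink / (men_pink + women_pink)
print(p_man_given_pink)  # 0.2
```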
| 163. |
Linear SVMs have no hyperparameters |
| A. | true |
| B. | false |
| Answer» B. false | |
| 164. |
______ can be adopted when it's necessary to categorize a large amount of data with a few complete examples or when there's the need to impose some constraints on a clustering algorithm. |
| A. | supervised |
| B. | semi-supervised |
| C. | reinforcement |
| D. | clusters |
| Answer» B. semi-supervised | |
| 165. |
In reinforcement learning, this feedback is usually called a ______. |
| A. | overfitting |
| B. | overlearning |
| C. | reward |
| D. | none of above |
| Answer» C. reward | |
| 166. |
In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ______. |
| A. | deep learning |
| B. | machine learning |
| C. | reinforcement learning |
| D. | unsupervised learning |
| Answer» A. deep learning | |
| 167. |
It is necessary to allow the model to develop a generalization ability and avoid a common problem called ______. |
| A. | overfitting |
| B. | overlearning |
| C. | classification |
| D. | regression |
| Answer» A. overfitting | |
| 168. |
Techniques that involve the usage of both labeled and unlabeled data are called ______. |
| A. | supervised |
| B. | semi-supervised |
| C. | unsupervised |
| D. | none of the above |
| Answer» B. semi-supervised | |
| 169. |
There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows simpler algorithms called ______. |
| A. | regression |
| B. | accuracy |
| C. | modelfree |
| D. | scalable |
| Answer» C. modelfree | |
| 170. |
______ showed better performance than other approaches, even without a context-based model. |
| A. | machine learning |
| B. | deep learning |
| C. | reinforcement learning |
| D. | supervised learning |
| Answer» B. deep learning | |
| 171. |
What is ‘Overfitting’ in Machine learning? |
| A. | when a statistical model describes random error or noise instead of the underlying relationship |
| B. | robots are programmed so that they can perform the task based on data they gather from sensors |
| C. | while involving the process of learning ‘overfitting’ occurs. |
| D. | a set of data is used to discover the potentially predictive relationship |
| Answer» A. when a statistical model describes random error or noise instead of the underlying relationship | |
| 172. |
What is ‘Test set’? |
| A. | test set is used to test the accuracy of the hypotheses generated by the learner. |
| B. | it is a set of data used to discover the potentially predictive relationship. |
| C. | both a & b |
| D. | none of above |
| Answer» A. test set is used to test the accuracy of the hypotheses generated by the learner. | |
| 173. |
What is the function of ‘Supervised Learning’? |
| A. | classifications, predict time series, annotate strings |
| B. | speech recognition, regression |
| C. | both a & b |
| D. | none of above |
| Answer» C. both a & b | |
| 174. |
Reinforcement learning is particularly efficient when                             . |
| A. | the environment is not completely deterministic |
| B. | it's often very dynamic |
| C. | it's impossible to have a precise error measure |
| D. | all above |
| Answer» D. all above | |
| 175. |
During the last few years, many              algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state. |
| A. | logical |
| B. | classical |
| C. | classification |
| D. | none of above |
| Answer» D. none of above | |
| 176. |
If there is only a discrete number of possible outcomes (called categories), the process becomes a ______. |
| A. | regression |
| B. | classification |
| C. | modelfree |
| D. | categories |
| Answer» B. classification | |
| 177. |
Which of the following sentences is FALSE regarding regression? |
| A. | it relates inputs to outputs. |
| B. | it is used for prediction. |
| C. | it may be used for interpretation. |
| D. | it discovers causal relationships. |
| Answer» D. it discovers causal relationships. | |
| 178. |
scikit-learn also provides functions for creating dummy datasets from scratch: |
| A. | make_classification() |
| B. | make_regression() |
| C. | make_blobs() |
| D. | all above |
| Answer» D. all above | |
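A quick sketch of all three generators (shapes only; parameter values here are arbitrary):

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

Xc, yc = make_classification(n_samples=100, n_features=5, random_state=0)
Xr, yr = make_regression(n_samples=100, n_features=5, random_state=0)
Xb, yb = make_blobs(n_samples=100, centers=3, random_state=0)  # 2 features by default

print(Xc.shape, Xr.shape, Xb.shape)  # (100, 5) (100, 5) (100, 2)
```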
| 179. |
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ______ valid options. |
| A. | 1 |
| B. | 2 |
| C. | 3 |
| D. | 4 |
| Answer» B. 2 | |
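The two standard options are `LabelEncoder` (integer codes) and `LabelBinarizer` (one-hot rows); a minimal sketch:

```python
from sklearn.preprocessing import LabelBinarizer, LabelEncoder

labels = ["cat", "dog", "cat", "bird"]

# Option 1: integer codes (classes are sorted: bird=0, cat=1, dog=2)
print(LabelEncoder().fit_transform(labels))   # [1 2 1 0]

# Option 2: one-hot encoding, one column per class
print(LabelBinarizer().fit_transform(labels))
```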
| 180. |
______ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky. |
| A. | removing the whole line |
| B. | creating a sub-model to predict those features |
| C. | using an automatic strategy to input them according to the other known values |
| D. | all above |
| Answer» A. removing the whole line | |
| 181. |
It's possible to specify whether the scaling process must include both mean and standard deviation using the parameters ______. |
| A. | with_mean=true/false |
| B. | with_std=true/false |
| C. | both a & b |
| D. | none of the mentioned |
| Answer» C. both a & b | |
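A minimal sketch with `StandardScaler` showing the effect of the two parameters (synthetic data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# Default: centre to zero mean AND scale to unit standard deviation
Xs = StandardScaler(with_mean=True, with_std=True).fit_transform(X)
print(np.allclose(Xs.mean(axis=0), 0), np.allclose(Xs.std(axis=0), 1))  # True True

# with_mean=False: only the scale is adjusted, the mean is left alone
Xn = StandardScaler(with_mean=False).fit_transform(X)
print(np.allclose(Xn.mean(axis=0), 0))  # False
```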
| 182. |
Which of the following selects the best K high-score features? |
| A. | selectpercentile |
| B. | featurehasher |
| C. | selectkbest |
| D. | all above |
| Answer» C. selectkbest | |
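`SelectKBest` keeps the K features with the highest scores; a minimal sketch (synthetic data, ANOVA F-score):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=10, n_informative=3, random_state=0)

# Keep only the K=3 features with the highest ANOVA F-scores
X_new = SelectKBest(score_func=f_classif, k=3).fit_transform(X, y)
print(X_new.shape)  # (200, 3)
```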
| 183. |
Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option below that describes the relationship of bias and variance with lambda. |
| A. | in case of very large lambda; bias is low, variance is low |
| B. | in case of very large lambda; bias is low, variance is high |
| C. | in case of very large lambda; bias is high, variance is low |
| D. | in case of very large lambda; bias is high, variance is high |
| Answer» C. in case of very large lambda; bias is high, variance is low | |
| 184. |
Which of the following method(s) does not have a closed-form solution for its coefficients? |
| A. | ridge regression |
| B. | lasso |
| C. | both ridge and lasso |
| D. | neither of them |
| Answer» B. lasso | |
| 185. |
In the mathematical Equation of Linear Regression Y = β1 + β2X + ϵ, (β1, β2) refers to |
| A. | (x-intercept, slope) |
| B. | (slope, x-intercept) |
| C. | (y-intercept, slope) |
| D. | (slope, y-intercept) |
| Answer» C. (y-intercept, slope) | |
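A quick check with NumPy (hypothetical data generated from β1 = 4, β2 = 2.5):

```python
import numpy as np

# Data generated from Y = beta1 + beta2*X with beta1 = 4 (y-intercept), beta2 = 2.5 (slope)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 4.0 + 2.5 * x

slope, intercept = np.polyfit(x, y, 1)  # returns highest-degree coefficient first
print(round(intercept, 2), round(slope, 2))  # 4.0 2.5
```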
| 186. |
Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least-square error on this data. You found that the correlation coefficient for one of the variables (say X1) with Y is -0.95. Which of the following is true for X1? |
| A. | relation between the x1 and y is weak |
| B. | relation between the x1 and y is strong |
| C. | relation between the x1 and y is neutral |
| D. | correlation can’t judge the relationship |
| Answer» B. relation between the x1 and y is strong | |
| 187. |
For the given weather data, calculate the probability of not playing. |
| A. | 0.4 |
| B. | 0.64 |
| C. | 0.36 |
| D. | 0.5 |
| Answer» C. 0.36 | |
| 188. |
Suppose you are in a situation where you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider? 1. I will add more variables. 2. I will start introducing polynomial-degree variables. 3. I will remove some variables. |
| A. | 1 and 2 |
| B. | 2 and 3 |
| C. | 1 and 3 |
| D. | 1, 2 and 3 |
| Answer» A. 1 and 2 | |
| 189. |
Problem: Players will play if the weather is sunny. Is this statement correct? |
| A. | true |
| B. | false |
| Answer» A. true | |
| 190. |
The Multinomial Naïve Bayes classifier is based on a ______ distribution. |
| A. | continuous |
| B. | discrete |
| C. | binary |
| Answer» B. discrete | |
| 191. |
Which of the following is not supervised learning? |
| A. | pca |
| B. | decision tree |
| C. | naive bayesian |
| D. | linear regression |
| Answer» A. pca | |
| 192. |
Support vectors are the data points that lie closest to the decision surface. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 193. |
The Gaussian Naïve Bayes classifier is based on a ______ distribution. |
| A. | continuous |
| B. | discrete |
| C. | binary |
| Answer» A. continuous | |
| 194. |
What is the purpose of performing cross-validation? |
| A. | to assess the predictive performance of the models |
| B. | to judge how the trained model performs outside the sample on test data |
| C. | both a and b |
| Answer» C. both a and b | |
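A minimal sketch with `cross_val_score`, which produces exactly such out-of-sample performance estimates (synthetic data):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

# 5-fold CV: each score is accuracy on a held-out fold,
# i.e. performance outside the training sample
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(len(scores))  # 5
```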
| 195. |
Suppose you are using a Linear SVM classifier for a 2-class classification problem. You have been given the following data, in which some points are circled red to represent the support vectors. If you remove any one of the red points from the data, will the decision boundary change? |
| A. | yes |
| B. | no |
| Answer» A. yes | |
| 196. |
Linear SVMs have no hyperparameters that need to be set by cross-validation |
| A. | true |
| B. | false |
| Answer» B. false | |
| 197. |
For the given weather data, what is the probability that players will play if the weather is sunny? |
| A. | 0.5 |
| B. | 0.26 |
| C. | 0.73 |
| D. | 0.6 |
| Answer» D. 0.6 | |
| 198. |
100 people are at a party. The given data shows how many wear pink or not, and whether each guest is a man or not. If a pink-wearing guest leaves, what is the probability that the guest is a man? |
| A. | 0.4 |
| B. | 0.2 |
| C. | 0.6 |
| D. | 0.45 |
| Answer» B. 0.2 | |
| 199. |
Problem: Players will play if the weather is sunny. Is this statement correct? |
| A. | true |
| B. | false |
| Answer» A. true | |
| 200. |
For the given weather data, calculate the probability of not playing. |
| A. | 0.4 |
| B. | 0.64 |
| C. | 0.36 |
| D. | 0.5 |
| Answer» C. 0.36 | |