

This section includes 607 curated multiple-choice questions to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation. Choose a topic below to get started.
51. |
Suppose you are building an SVM model on data X. The data X can be error-prone, which means that you should not trust any specific data point too much. You want to build an SVM model with a quadratic kernel function (polynomial degree 2) that uses the slack variable C as one of its hyperparameters. What would happen when you use a very small C (C ~ 0)? |
A. | Misclassification would happen |
B. | Data will be correctly classified |
C. | Can’t say |
D. | None of these |
Answer» A. Misclassification would happen | |
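As an illustration, here is a minimal scikit-learn sketch (assuming a small synthetic two-class dataset) comparing a degree-2 polynomial SVM with a tiny C against one with a large C. With C ~ 0, slack is nearly free, so the margin is wide and more training points are misclassified:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# overlapping two-class blobs, so a perfect separation is unlikely
X, y = make_blobs(n_samples=100, centers=2, cluster_std=3.0, random_state=0)

small_c = SVC(kernel="poly", degree=2, C=1e-3).fit(X, y)   # slack is cheap
large_c = SVC(kernel="poly", degree=2, C=1e3).fit(X, y)    # slack is expensive

print(small_c.score(X, y))   # training accuracy with C ~ 0
print(large_c.score(X, y))   # usually higher: misclassification is penalised
```

The dataset and the specific C values here are illustrative, not part of the original question.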
52. |
If there is only a discrete number of possible outcomes, they are called _____. |
A. | Modelfree |
B. | Categories |
C. | Prediction |
D. | None of above |
Answer» B. Categories | |
53. |
Some people are using the term ___ instead of prediction only to avoid the weird idea that machine learning is a sort of modern magic. |
A. | Inference |
B. | Interference |
C. | Accuracy |
D. | None of above |
Answer» A. Inference | |
54. |
The term _____ can be freely used, but with the same meaning adopted in physics or system theory. |
A. | Accuracy |
B. | Cluster |
C. | Regression |
D. | Prediction |
Answer» D. Prediction | |
55. |
Common deep learning applications / problems can also be solved using ____ |
A. | Real-time visual object identification |
B. | Classic approaches |
C. | Automatic labeling |
D. | Bio-inspired adaptive systems |
Answer» B. Classic approaches | |
56. |
What is the function of 'Unsupervised Learning'? |
A. | Find clusters of the data and find low-dimensional representations of the data |
B. | Find interesting directions in data and find novel observations/ database cleaning |
C. | Interesting coordinates and correlations |
D. | All |
Answer» D. All | |
57. |
Suppose a "Linear regression" model perfectly fits the training data (train error is zero). Which of the following statements is true? |
A. | You will always have test error zero |
B. | You can not have test error zero |
C. | None of the above |
Answer» C. None of the above | |
58. |
In a linear regression problem, we are using "R-squared" to measure goodness of fit. We add a feature to the model and retrain it. Which of the following options is true? |
A. | If R-squared increases, this variable is significant |
B. | If R-squared decreases, this variable is not significant |
C. | Individually, R-squared cannot tell us about variable importance; we can't say anything about it right now |
D. | None of these |
Answer» C. Individually, R-squared cannot tell us about variable importance; we can't say anything about it right now | |
59. |
Which of the following is true about "Ridge" or "Lasso" regression methods in the case of feature selection? |
A. | Ridge regression uses subset selection of features |
B. | Lasso regression uses subset selection of features |
C. | Both use subset selection of features |
D. | None of the above |
Answer» B. Lasso regression uses subset selection of features | |
60. |
Which of the following statement(s) can be true after adding a variable to a linear regression model?
1. R-squared and adjusted R-squared both increase
2. R-squared increases and adjusted R-squared decreases
3. R-squared decreases and adjusted R-squared decreases
4. R-squared decreases and adjusted R-squared increases |
A. | 1 and 2 |
B. | 1 and 3 |
C. | 2 and 4 |
D. | None of the above |
Answer» A. 1 and 2 | |
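The point behind this question can be checked numerically. A sketch with assumed synthetic data: adding a feature never decreases R-squared, while adjusted R-squared penalises the extra parameter and can fall if the feature is uninformative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
y = 2 * x1 + rng.normal(size=n)

def r_squared(X, y):
    # ordinary least squares with an intercept column
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def adjusted_r2(r2, n, p):
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

r2_one = r_squared(x1.reshape(-1, 1), y)
noise = rng.normal(size=n)                        # an irrelevant extra feature
r2_two = r_squared(np.column_stack([x1, noise]), y)

print(r2_two >= r2_one)                           # R^2 cannot decrease
print(adjusted_r2(r2_one, n, 1), adjusted_r2(r2_two, n, 2))
```

Whether adjusted R-squared rises or falls depends on how informative the added feature is, which is why statements 1 and 2 are both possible.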
61. |
Suppose you are building an SVM model on data X. The data X can be error-prone, which means that you should not trust any specific data point too much. You want to build an SVM model with a quadratic kernel function (polynomial degree 2) that uses the slack variable C as one of its hyperparameters. What would happen when you use a very large value of C (C -> infinity)? |
A. | We can still classify data correctly for given setting of hyper parameter C |
B. | We can not classify data correctly for given setting of hyper parameter C |
C. | Can’t Say |
D. | None of these |
Answer» A. We can still classify data correctly for given setting of hyper parameter C | |
62. |
SVM can solve linear and non-linear problems |
A. | true |
B. | false |
Answer» A. true | |
63. |
The objective of the support vector machine algorithm is to find a hyperplane in an N-dimensional space(N — the number of features) that distinctly classifies the data points. |
A. | true |
B. | false |
Answer» A. true | |
64. |
Hyperplanes are _____________boundaries that help classify the data points. |
A. | usual |
B. | decision |
C. | parallel |
Answer» B. decision | |
65. |
The _____of the hyperplane depends upon the number of features. |
A. | dimension |
B. | classification |
C. | reduction |
Answer» A. dimension | |
66. |
SVM algorithms use a set of mathematical functions that are defined as the kernel. |
A. | true |
B. | false |
Answer» A. true | |
67. |
Which of the following is not supervised learning? |
A. | PCA |
B. | Decision Tree |
C. | Naive Bayes |
D. | Linear regression |
Answer» A. PCA | |
68. |
What is the purpose of performing cross-validation? |
A. | To assess the predictive performance of the models |
B. | To judge how the trained model performs outside the sample on test data |
C. | Both A and B |
Answer» C. Both A and B | |
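Both purposes show up in a few lines of scikit-learn. A minimal sketch (the iris dataset and logistic regression are illustrative choices, not part of the question): each fold is scored on data the model was not trained on, so the mean score estimates out-of-sample performance.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# 5-fold CV: train on 4 folds, score on the held-out fold, repeat 5 times
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())   # average held-out accuracy across the 5 folds
```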
69. |
______can be adopted when it's necessary to categorize a large amount of data with a few complete examples or when there's the need to impose some constraints to a clustering algorithm. |
A. | Supervised |
B. | Semi-supervised |
C. | Reinforcement |
D. | Clusters |
Answer» B. Semi-supervised | |
70. |
In reinforcement learning, this feedback is usually called ___. |
A. | Overfitting |
B. | Overlearning |
C. | Reward |
D. | None of above |
Answer» C. Reward | |
71. |
In the last decade, many researchers started training bigger and bigger models built with several different layers, which is why this approach is called _____. |
A. | Deep learning |
B. | Machine learning |
C. | Reinforcement learning |
D. | Unsupervised learning |
Answer» A. Deep learning | |
72. |
If two variables are correlated, is it necessary that they have a linear relationship? |
A. | Yes |
B. | No |
Answer» B. No | |
73. |
There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows simpler algorithms called _____ |
A. | Regression |
B. | Accuracy |
C. | Modelfree |
D. | Scalable |
Answer» C. Modelfree | |
74. |
______ showed better performance than other approaches, even without a context-based model |
A. | Machine learning |
B. | Deep learning |
C. | Reinforcement learning |
D. | Supervised learning |
Answer» B. Deep learning | |
75. |
Suppose we fit "Lasso Regression" to a data set which has 100 features (X1, X2, ..., X100). Now we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct? |
A. | It is more likely for X1 to be excluded from the model |
B. | It is more likely for X1 to be included in the model |
C. | Can’t say |
D. | None of these |
Answer» B. It is more likely for X1 to be included in the model | |
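A hypothetical sketch of why: multiplying a feature by 10 shrinks the coefficient the model needs for it by 10, so the same L1 penalty is less likely to zero it out. The data, the 0.3 true coefficient, and alpha=0.5 below are all assumed for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 0.3 * X[:, 0] + rng.normal(scale=0.5, size=200)   # only X1 matters, weakly

alpha = 0.5                                   # same penalty in both fits
coef_before = Lasso(alpha=alpha).fit(X, y).coef_[0]

X_rescaled = X.copy()
X_rescaled[:, 0] *= 10                        # rescale X1 only
coef_after = Lasso(alpha=alpha).fit(X_rescaled, y).coef_[0]

print(coef_before, coef_after)   # X1 is zeroed out before, kept after
```

This is also why features are usually standardised before fitting a Lasso: otherwise the penalty treats features on different scales unequally.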
76. |
If a linear regression model fits the training data perfectly, i.e., train error is zero, then _____________________ |
A. | Test error is also always zero |
B. | Test error is non zero |
C. | Couldn't comment on Test error |
D. | Test error is equal to Train error |
Answer» C. Couldn't comment on Test error | |
77. |
In syntax of linear model lm(formula,data,..), data refers to ______ |
A. | Matrix |
B. | Vector |
C. | Array |
D. | List |
Answer» B. Vector | |
78. |
_____ allows exploiting the natural sparsity of data while extracting principal components. |
A. | sparsepca |
B. | kernelpca |
C. | svd |
D. | init parameter |
Answer» A. sparsepca | |
79. |
We can also compute the coefficients of linear regression with an analytical method called the "Normal Equation". Which of the following is/are true about the "Normal Equation"?
1. We don't have to choose the learning rate
2. It becomes slow when the number of features is very large
3. No need to iterate |
A. | 1 and 2 |
B. | 1 and 3 |
C. | 2 and 3 |
D. | 1, 2 and 3 |
Answer» D. 1, 2 and 3 | |
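The normal equation is short enough to state directly: theta = (X^T X)^(-1) X^T y. A sketch on an assumed noiseless toy problem; note there is no learning rate and no iteration, but inverting X^T X grows cubically with the number of features, which is the "slow for many features" point:

```python
import numpy as np

# design matrix with a column of ones for the intercept; y = 1 + x exactly
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])

theta = np.linalg.inv(X.T @ X) @ X.T @ y   # (X^T X)^{-1} X^T y
print(theta)  # ≈ [1. 1.]: intercept 1, slope 1
```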
80. |
Which of the following options is true regarding "Regression" and "Correlation"? Note: y is the dependent variable and x is the independent variable. |
A. | The relationship is symmetric between x and y in both. |
B. | The relationship is not symmetric between x and y in both. |
C. | The relationship is not symmetric between x and y in case of correlation but in case of regression it is symmetric. |
D. | The relationship is symmetric between x and y in case of correlation but in case of regression it is not symmetric. |
Answer» D. The relationship is symmetric between x and y in case of correlation but in case of regression it is not symmetric. | |
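A quick numeric check of the asymmetry, on an assumed toy sample: correlation is the same whichever variable comes first, but the slope from regressing y on x differs from the slope from regressing x on y.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 5.0, 9.0])

corr_xy = np.corrcoef(x, y)[0, 1]
corr_yx = np.corrcoef(y, x)[0, 1]      # identical: correlation is symmetric

slope_y_on_x = np.polyfit(x, y, 1)[0]  # regress y on x
slope_x_on_y = np.polyfit(y, x, 1)[0]  # regress x on y: a different slope

print(corr_xy, corr_yx)
print(slope_y_on_x, slope_x_on_y)
```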
81. |
_____ can accept a NumPy RandomState generator or an integer seed. |
A. | make_blobs |
B. | random_state |
C. | test_size |
D. | training_size |
Answer» B. random_state | |
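A minimal sketch of the `random_state` parameter (the `make_blobs` call is illustrative): passing the same seed, either as an integer or wrapped in a `RandomState`, reproduces the same synthetic dataset.

```python
import numpy as np
from sklearn.datasets import make_blobs

# same seed, once as an int and once as a RandomState generator
X1, y1 = make_blobs(n_samples=20, centers=3, random_state=42)
X2, y2 = make_blobs(n_samples=20, centers=3, random_state=np.random.RandomState(42))

print(np.array_equal(X1, X2))  # True: identical draws
```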
82. |
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed and scikit-learn offers at least _____ valid options |
A. | 1 |
B. | 2 |
C. | 3 |
D. | 4 |
Answer» B. 2 | |
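Two of the encoders scikit-learn provides for categorical targets, shown on an assumed toy label list: `LabelEncoder` maps labels to integers, `LabelBinarizer` to one-hot rows.

```python
from sklearn.preprocessing import LabelBinarizer, LabelEncoder

labels = ["cat", "dog", "cat", "bird"]

encoded = LabelEncoder().fit_transform(labels)   # classes sorted: bird, cat, dog
onehot = LabelBinarizer().fit_transform(labels)  # one one-hot row per sample

print(encoded.tolist())   # [1, 2, 1, 0]
print(onehot.shape)       # (4, 3)
```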
83. |
______is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky. |
A. | Removing the whole line |
B. | Creating sub-model to predict those features |
C. | Using an automatic strategy to impute them according to the other known values |
D. | All above |
Answer» A. Removing the whole line | |
84. |
It's possible to specify if the scaling process must include both mean and standard deviation using the parameters________. |
A. | with_mean=True/False |
B. | with_std=True/False |
C. | Both A & B |
D. | None of the Mentioned |
Answer» C. Both A & B | |
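A sketch of the two flags on `StandardScaler` (the toy column is assumed): `with_mean` controls centring, `with_std` controls scaling, and they can be toggled independently.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0]])

# centre only: subtract the mean but leave the variance untouched
centred_only = StandardScaler(with_mean=True, with_std=False).fit_transform(X)
print(centred_only.ravel())  # [-1.  0.  1.]
```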
85. |
Function used for linear regression in R is __________ |
A. | lm(formula, data) |
B. | lr(formula, data) |
C. | lrm(formula, data) |
D. | regression.linear(formula, data) |
Answer» A. lm(formula, data) | |
86. |
In the mathematical Equation of Linear Regression Y = β1 + β2X + ϵ, (β1, β2) refers to __________ |
A. | (X-intercept, Slope) |
B. | (Slope, X-Intercept) |
C. | (Y-Intercept, Slope) |
D. | (slope, Y-Intercept) |
Answer» C. (Y-Intercept, Slope) | |
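A quick check of the roles of β1 and β2, using an assumed noiseless toy line in NumPy rather than R: fitting Y = β1 + β2X recovers β1 as the Y-intercept and β2 as the slope.

```python
import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0])
Y = 5.0 + 2.0 * X                 # beta1 = 5 (Y-intercept), beta2 = 2 (slope)

slope, intercept = np.polyfit(X, Y, 1)   # polyfit returns [slope, intercept]
print(intercept, slope)
```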
88. |
Which of the following steps/assumptions in regression modeling impacts the trade-off between under-fitting and over-fitting the most? |
A. | the polynomial degree |
B. | whether we learn the weights by matrix inversion or gradient descent |
C. | the use of a constant-term |
Answer» A. the polynomial degree | |
89. |
Can we calculate the skewness of variables based on mean and median? |
A. | true |
B. | false |
Answer» A. true | |
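One common route from mean and median to skewness is Pearson's second skewness coefficient, 3*(mean - median)/std; note it also needs the standard deviation, so mean and median alone only give the direction of skew. A sketch on an assumed toy sample:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 10.0])            # right-skewed toy sample
skew = 3 * (np.mean(x) - np.median(x)) / np.std(x)  # Pearson's 2nd coefficient
print(skew)  # positive: mean > median signals right skew
```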
91. |
Conditional probability is a measure of the probability of an event given that another event has already occurred. |
A. | true |
B. | false |
Answer» A. true | |
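The definition P(A|B) = P(A and B) / P(B) can be checked by counting over a toy sample space (the die example is an assumed illustration):

```python
# fair six-sided die as sample space
outcomes = set(range(1, 7))
A = {x for x in outcomes if x % 2 == 0}   # event: even roll
B = {x for x in outcomes if x > 3}        # event: roll greater than 3

p_b = len(B) / len(outcomes)
p_a_and_b = len(A & B) / len(outcomes)
p_a_given_b = p_a_and_b / p_b             # P(even | > 3)
print(p_a_given_b)  # 2/3
```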
92. |
What is/are true about kernels in SVM?
1. Kernel functions map low-dimensional data to a high-dimensional space
2. It's a similarity function |
A. | 1 |
B. | 2 |
C. | 1 and 2 |
D. | none of these |
Answer» C. 1 and 2 | |
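The "similarity function" reading is easy to see with the RBF kernel, which scikit-learn exposes directly (the two points below are assumed): it returns 1 for identical points and decays towards 0 with distance in the implicit high-dimensional space.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

a = np.array([[0.0, 0.0]])
b = np.array([[3.0, 4.0]])

sim_aa = rbf_kernel(a, a)[0, 0]   # 1.0: a point is maximally similar to itself
sim_ab = rbf_kernel(a, b)[0, 0]   # < 1: similarity decays with distance
print(sim_aa, sim_ab)
```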
94. |
If you remove the non-red circled points from the data, the decision boundary will change. |
A. | true |
B. | false |
Answer» B. false | |
95. |
How do you handle missing or corrupted data in a dataset? |
A. | Drop missing rows or columns |
B. | Replace missing values with mean/median/mode |
C. | Assign a unique category to missing values |
D. | All of the above |
Answer» D. All of the above | |
96. |
The SVMs are less effective when: |
A. | the data is linearly separable |
B. | the data is clean and ready to use |
C. | the data is noisy and contains overlapping points |
Answer» C. the data is noisy and contains overlapping points | |