

This section includes 940 curated multiple-choice questions to sharpen your Artificial Intelligence knowledge and support exam preparation. Choose a topic below to get started.
1. |
In a real problem, you should check to see if the SVM is separable and then include slack variables if it is not separable. |
A. | true |
B. | false |
Answer» B. false | |
2. |
Linear SVMs have no hyperparameters that need to be set by cross-validation. |
A. | true |
B. | false |
Answer» B. false | |
3. |
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for? |
A. | underfitting |
B. | nothing, the model is perfect |
C. | overfitting |
Answer» C. overfitting | |
4. |
Suppose you are using RBF kernel in SVM with a high Gamma value. What does this signify? |
A. | the model would consider even far away points from the hyperplane for modelling |
B. | the model would not be affected by the distance of points from the hyperplane |
C. | the model would consider only the points close to the hyperplane for modelling |
D. | none of the above |
Answer» C. the model would consider only the points close to the hyperplane for modelling | |
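A rough illustration of this (sketch only, hypothetical synthetic data): gamma controls how far a single training point's influence reaches, so a very large gamma makes the RBF kernel decay quickly and only points near the boundary shape it, which tends to overfit.

    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=2, n_redundant=0, random_state=0)
    for gamma in (0.1, 100.0):
        clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)
        # a much larger gamma usually keeps many more support vectors (a wigglier boundary)
        print(gamma, clf.n_support_.sum())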
5. |
Support vectors are the data points that lie closest to the decision surface. |
A. | true |
B. | false |
Answer» A. true | |
6. |
100 people are at a party. Given data gives information about how many wear pink or not, and if a man or not. Imagine a pink-wearing guest leaves, what is the probability of being a man? |
A. | 0.4 |
B. | 0.2 |
C. | 0.6 |
D. | 0.45 |
Answer» C. 0.6 | |
7. |
For the given weather data, what is the probability that players will play if the weather is sunny? |
A. | 0.5 |
B. | 0.26 |
C. | 0.73 |
D. | 0.6 |
Answer» D. 0.6 | |
8. |
For the given weather data, calculate the probability of playing. |
A. | 0.4 |
B. | 0.64 |
C. | 0.29 |
D. | 0.75 |
Answer» B. 0.64 | |
9. |
Problem: Players will play if weather is sunny. Is this statement correct? |
A. | true |
B. | false |
Answer» A. true | |
10. |
100 people are at a party. Given data gives information about how many wear pink or not, and if a man or not. Imagine a pink-wearing guest leaves, what is the probability of being a man? |
A. | 0.4 |
B. | 0.2 |
C. | 0.6 |
D. | 0.45 |
Answer» C. 0.6 | |
11. |
For the given weather data, what is the probability that players will play if the weather is sunny? |
A. | 0.5 |
B. | 0.26 |
C. | 0.73 |
D. | 0.6 |
Answer» D. 0.6 | |
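A worked check of this answer, assuming the 14-row play-tennis weather table these questions usually reference (9 Yes / 5 No overall; on Sunny days 3 Yes, 2 No); the counts are an assumption, since the table itself is not reproduced here.

    # Bayes' rule on the assumed play-tennis counts
    p_yes = 9 / 14                       # P(Play = Yes)
    p_sunny = 5 / 14                     # P(Outlook = Sunny)
    p_sunny_given_yes = 3 / 9            # P(Sunny | Yes)
    p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
    print(round(p_yes_given_sunny, 2))   # 0.6
    print(round(p_yes, 2))               # 0.64 -> probability of playing (question 8)
    print(round(1 - p_yes, 2))           # 0.36 -> probability of not playing (question 20)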
12. |
Linear SVMs have no hyperparameters that need to be set by cross-validation. |
A. | true |
B. | false |
Answer» B. false | |
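To make the point concrete, here is a minimal sketch (hypothetical synthetic data) of cross-validating the soft-margin penalty C of a linear SVM; C is exactly the kind of hyperparameter the statement claims does not exist.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    # C (the regularization strength) has to be chosen, typically by cross-validation
    grid = GridSearchCV(LinearSVC(max_iter=10000), {"C": [0.01, 0.1, 1, 10]}, cv=5)
    grid.fit(X, y)
    print(grid.best_params_)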
13. |
Suppose you are using a Linear SVM classifier for a 2-class classification problem. Now you have been given the following data in which some points are circled red, representing support vectors. If you remove any one of these red points from the data, will the decision boundary change? |
A. | yes |
B. | no |
Answer» A. yes | |
14. |
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for? |
A. | underfitting |
B. | nothing, the model is perfect |
C. | overfitting |
Answer» C. overfitting | |
15. |
Gaussian Naïve Bayes Classifier is _ distribution |
A. | continuous |
B. | discrete |
C. | binary |
Answer» A. continuous | |
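A minimal sketch, using the Iris measurements as stand-in continuous features, of the continuous (per-class Gaussian) assumption behind Gaussian Naïve Bayes:

    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)      # four continuous measurements per flower
    model = GaussianNB().fit(X, y)
    # theta_ holds the fitted per-class mean of each feature (one Gaussian per class/feature)
    print(model.classes_)
    print(model.theta_)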
16. |
Which of the following is not supervised learning? |
A. | pca |
B. | decision tree |
C. | naive bayesian |
D. | linear regression |
Answer» A. pca | |
17. |
Support vectors are the data points that lie closest to the decision surface. |
A. | true |
B. | false |
Answer» A. true | |
18. |
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that the new feature will dominate others 2. Sometimes, feature normalization is not feasible in case of categorical variables 3. Feature normalization always helps when we use the Gaussian kernel in SVM |
A. | 1 |
B. | 1 and 2 |
C. | 1 and 3 |
D. | 2 and 3 |
Answer» B. 1 and 2 | |
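For context, a common pattern (sketch with hypothetical synthetic data) is to standardize numeric features before an RBF-kernel SVM, because the kernel's distance computation is scale-sensitive; categorical columns would need encoding rather than scaling.

    from sklearn.datasets import make_classification
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=5, random_state=0)
    # scale features first, then feed them to the Gaussian (RBF) kernel SVM
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    print(clf.fit(X, y).score(X, y))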
19. |
The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs? |
A. | large datasets |
B. | small datasets |
C. | medium sized datasets |
D. | size does not matter |
Answer» A. large datasets | |
20. |
For the given weather data, calculate the probability of not playing. |
A. | 0.4 |
B. | 0.64 |
C. | 0.36 |
D. | 0.5 |
Answer» C. 0.36 | |
21. |
Multinomial Naïve Bayes Classifier is _ distribution |
A. | continuous |
B. | discrete |
C. | binary |
Answer» B. discrete | |
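A minimal Multinomial Naïve Bayes sketch on discrete count features (a made-up toy corpus and labels, purely for illustration):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    docs = ["free prize now", "meeting at noon", "win a free prize", "project meeting notes"]
    labels = [1, 0, 1, 0]                       # hypothetical 1 = spam, 0 = ham
    X = CountVectorizer().fit_transform(docs)   # discrete word counts
    print(MultinomialNB().fit(X, labels).predict(X))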
22. |
Problem: Players will play if weather is sunny. Is this statement correct? |
A. | true |
B. | false |
Answer» A. true | |
23. |
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. What do you expect will happen with bias and variance as you increase the size of training data? |
A. | bias increases and variance increases |
B. | bias decreases and variance increases |
C. | bias decreases and variance decreases |
D. | bias increases and variance decreases |
Answer» D. bias increases and variance decreases | |
24. |
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen with the mean training error? |
A. | increase |
B. | decrease |
C. | remain constant |
D. | can’t say |
Answer» D. can’t say | |
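A sketch on synthetic data of how one could inspect the mean training error as the training set grows; it often rises toward the noise level, but the direction is not guaranteed, which is why "can't say" is the safe answer.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import learning_curve

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 1))
    y = 3 * X[:, 0] + rng.normal(scale=0.5, size=500)
    sizes, train_scores, _ = learning_curve(
        LinearRegression(), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
        scoring="neg_mean_squared_error")
    # mean training MSE for each training-set size
    print(dict(zip(sizes, np.round(-train_scores.mean(axis=1), 4))))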
25. |
Suppose that we have N independent variables (X1, X2… Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is -0.95. Which of the following is true for X1? |
A. | relation between the x1 and y is weak |
B. | relation between the x1 and y is strong |
C. | relation between the x1 and y is neutral |
D. | correlation can’t judge the relationship |
Answer» B. relation between the x1 and y is strong | |
26. |
In the mathematical Equation of Linear Regression Y = β1 + β2X + ϵ, (β1, β2) refers to |
A. | (x-intercept, slope) |
B. | (slope, x- intercept) |
C. | (y-intercept, slope) |
D. | (slope, y- intercept) |
Answer» C. (y-intercept, slope) | |
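A quick sketch on made-up data showing where the y-intercept (β1) and slope (β2) land in scikit-learn's LinearRegression:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.arange(10, dtype=float).reshape(-1, 1)
    y = 2.0 + 5.0 * X[:, 0]                  # true y-intercept 2, true slope 5
    reg = LinearRegression().fit(X, y)
    print(reg.intercept_, reg.coef_[0])      # ~2.0 (β1) and ~5.0 (β2)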
27. |
Which of the following method(s) does not have a closed form solution for its coefficients? |
A. | ridge regression |
B. | lasso |
C. | both ridge and lasso |
D. | none of both |
Answer» B. lasso | |
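Ridge has the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy, while the lasso's absolute-value penalty is non-differentiable at zero and is solved iteratively (e.g. by coordinate descent). A small numeric check of the ridge formula against scikit-learn on random data (no intercept, so the formula applies directly):

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
    lam = 1.0
    w_closed = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    w_sklearn = Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
    print(np.allclose(w_closed, w_sklearn))  # expected: True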
28. |
What is/are true about ridge regression? 1. When lambda is 0, the model works like a linear regression model 2. When lambda is 0, the model doesn’t work like a linear regression model 3. When lambda goes to infinity, we get very, very small coefficients approaching 0 4. When lambda goes to infinity, we get very, very large coefficients approaching infinity |
A. | 1 and 3 |
B. | 1 and 4 |
C. | 2 and 3 |
D. | 2 and 4 |
Answer» A. 1 and 3 | |
29. |
Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below which describe the relationship of bias and variance with lambda. |
A. | in case of very large lambda; bias is low, variance is low |
B. | in case of very large lambda; bias is low, variance is high |
C. | in case of very large lambda; bias is high, variance is low |
D. | in case of very large lambda; bias is high, variance is high |
Answer» C. in case of very large lambda; bias is high, variance is low | |
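A sketch on synthetic data (scikit-learn's alpha playing the role of lambda) of the coefficients shrinking toward zero as lambda grows, which is what pushes bias up and variance down; it also illustrates statements 1 and 3 of the previous question.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([3.0, -2.0, 1.0, 0.5, 4.0]) + rng.normal(size=100)
    for alpha in (0.01, 1.0, 100.0, 1e6):
        coef = Ridge(alpha=alpha).fit(X, y).coef_
        # the largest |coefficient| keeps dropping toward 0 as alpha (lambda) grows
        print(alpha, float(np.round(np.abs(coef).max(), 4)))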
30. |
Which of the following selects the best K high-score features? |
A. | selectpercentile |
B. | featurehasher |
C. | selectkbest |
D. | all above |
Answer» C. selectkbest | |
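A minimal SelectKBest sketch on the Iris data; SelectPercentile keeps a percentage of features instead, and FeatureHasher is a hashing vectorizer, not a selector.

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = load_iris(return_X_y=True)
    # keep the 2 features with the highest ANOVA F-scores
    X_new = SelectKBest(score_func=f_classif, k=2).fit_transform(X, y)
    print(X.shape, "->", X_new.shape)        # (150, 4) -> (150, 2)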
31. |
It's possible to specify if the scaling process must include both mean and standard deviation using the parameters ______. |
A. | with_mean=true/false |
B. | with_std=true/false |
C. | both a & b |
D. | none of the mentioned |
Answer» C. both a & b | |
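The parameters in question are StandardScaler's with_mean and with_std flags; a short sketch:

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
    # divide by the standard deviation only, leaving the mean untouched
    scaler = StandardScaler(with_mean=False, with_std=True)
    print(scaler.fit_transform(X))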
32. |
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed and scikit-learn offers at least ______ valid options. |
A. | 1 |
B. | 2 |
C. | 3 |
D. | 4 |
Answer» B. 2 | |
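Two commonly cited scikit-learn options are LabelEncoder (integer codes) and LabelBinarizer (one-hot rows); a quick sketch:

    from sklearn.preprocessing import LabelBinarizer, LabelEncoder

    labels = ["cat", "dog", "cat", "bird"]
    print(LabelEncoder().fit_transform(labels))    # e.g. [1 2 1 0]
    print(LabelBinarizer().fit_transform(labels))  # one-hot row per label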
33. |
______, which can accept a NumPy RandomState generator or an integer seed. |
A. | make_blobs |
B. | random_state |
C. | test_size |
D. | training_size |
Answer» B. random_state | |
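random_state is the parameter that accepts either an integer seed or a NumPy RandomState generator, for example in make_blobs or train_test_split; a sketch:

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.model_selection import train_test_split

    X, y = make_blobs(n_samples=100, centers=3, random_state=42)      # integer seed
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=np.random.RandomState(42)) # RandomState generator
    print(X_tr.shape, X_te.shape)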
34. |
If there is only a discrete number of possible outcomes (called categories), the process becomes a ______. |
A. | regression |
B. | classification. |
C. | modelfree |
D. | categories |
Answer» B. classification | |
35. |
During the last few years, many ______ algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state. |
A. | logical |
B. | classical |
C. | classification |
D. | none of above |
Answer» D. none of above | |
36. |
Reinforcement learning is particularly efficient when ______. |
A. | the environment is not completely deterministic |
B. | it's often very dynamic |
C. | it's impossible to have a precise error measure |
D. | all above |
Answer» D. all above | |
37. |
What is the function of ‘Supervised Learning’? |
A. | classifications, predict time series, annotate strings |
B. | speech recognition, regression |
C. | both a & b |
D. | none of above |
Answer» C. both a & b | |
38. |
______ showed better performance than other approaches, even without a context-based model. |
A. | machine learning |
B. | deep learning |
C. | reinforcement learning |
D. | supervised learning |
Answer» B. deep learning | |
39. |
There's a growing interest in pattern recognition and associative memories whose structure and functioning are similar to what happens in the neocortex. Such an approach also allows simpler algorithms called ______. |
A. | regression |
B. | accuracy |
C. | modelfree |
D. | scalable |
Answer» C. modelfree | |
40. |
Techniques that involve the usage of both labeled and unlabeled data are called ______. |
A. | supervised |
B. | semi- supervised |
C. | unsupervised |
D. | none of the above |
Answer» B. semi-supervised | |
41. |
It is necessary to allow the model to develop a generalization ability and avoid a common problem called ______. |
A. | overfitting |
B. | overlearning |
C. | classification |
D. | regression |
Answer» A. overfitting | |
42. |
In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ______. |
A. | deep learning |
B. | machine learning |
C. | reinforcement learning |
D. | unsupervised learning |
Answer» A. deep learning | |
43. |
In reinforcement learning, this feedback is usually called ______. |
A. | overfitting |
B. | overlearning |
C. | reward |
D. | none of above |
Answer» C. reward | |
44. |
______ can be adopted when it's necessary to categorize a large amount of data with a few complete examples or when there's the need to impose some constraints on a clustering algorithm. |
A. | supervised |
B. | semi- supervised |
C. | reinforcement |
D. | clusters |
Answer» B. semi-supervised | |
45. |
Linear SVMs have no hyperparameters that need to be set by cross-validation. |
A. | true |
B. | false |
Answer» B. false | |
46. |
100 people are at a party. Given data gives information about how many wear pink or not, and if a man or not. Imagine a pink-wearing guest leaves, what is the probability of being a man? |
A. | 0.4 |
B. | 0.2 |
C. | 0.6 |
D. | 0.45 |
Answer» C. 0.6 | |
47. |
For the given weather data, what is the probability that players will play if the weather is sunny? |
A. | 0.5 |
B. | 0.26 |
C. | 0.73 |
D. | 0.6 |
Answer» D. 0.6 | |
48. |
Linear SVMs have no hyperparameters that need to be set by cross-validation |
A. | true |
B. | false |
Answer» B. false | |
49. |
Suppose you are using a Linear SVM classifier for a 2-class classification problem. Now you have been given the following data in which some points are circled red, representing support vectors. If you remove any one of these red points from the data, will the decision boundary change? |
A. | yes |
B. | no |
Answer» A. yes | |
50. |
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for? |
A. | underfitting |
B. | nothing, the model is perfect |
C. | overfitting |
Answer» C. overfitting | |