

This section includes 607 MCQs, each offering curated multiple-choice questions to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation.
1. |
scikit-learn offers the class ______, which is responsible for filling the holes using a strategy based on the mean, median, or frequency.
A. | labelencoder |
B. | labelbinarizer |
C. | dictvectorizer |
D. | imputer |
Answer» D. imputer
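The class asked about here is scikit-learn's Imputer (renamed SimpleImputer in modern releases). A minimal pure-Python sketch of its strategy="mean" behaviour, with None standing in for the missing entries (no scikit-learn required):

```python
def impute_mean(column):
    """Replace missing values (None) with the mean of the observed values."""
    observed = [x for x in column if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in column]

print(impute_mean([1.0, None, 3.0]))  # the hole is filled with 2.0
```

The "median" and "most_frequent" strategies mentioned in the question work the same way, swapping the mean for the median or mode of the observed values.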
2. |
scikit-learn also provides a class for per-sample normalization, ______.
A. | normalizer |
B. | imputer |
C. | classifier |
D. | all above |
Answer» A. normalizer
3. |
______ dataset with many features contains information proportional to the independence of all features and their variance.
A. | normalized |
B. | unnormalized |
C. | both a & b |
D. | none of the mentioned |
Answer» B. unnormalized
4. |
In order to assess how much information is brought by each component, and the correlation among them, a useful tool is the         . |
A. | concurrent matrix |
B. | convergence matrix |
C. | supportive matrix |
D. | covariance matrix |
Answer» D. covariance matrix
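To make the covariance matrix concrete, a small pure-Python sketch that computes the sample covariance matrix of a dataset given as a list of rows; the diagonal holds each feature's variance and the off-diagonal entries capture how the features co-vary:

```python
def covariance_matrix(X):
    """Sample covariance matrix of a dataset given as a list of rows."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in X) / (n - 1)
             for j in range(d)]
            for i in range(d)]
```

For the perfectly correlated toy data [[1, 2], [2, 4], [3, 6]] this yields [[1, 2], [2, 4]]: the variances 1 and 4 on the diagonal, and a non-zero off-diagonal term showing the two features are not independent.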
5. |
In reinforcement learning, if the feedback is negative, it is defined as ______.
A. | Penalty |
B. | Overlearning |
C. | Reward |
D. | None of above |
Answer» A. Penalty
6. |
What is ‘Training set’? |
A. | Training set is used to test the accuracy of the hypotheses generated by the learner. |
B. | A set of data is used to discover the potentially predictive relationship. |
C. | Both A & B |
D. | None of above |
Answer» B. A set of data is used to discover the potentially predictive relationship.
7. |
Common deep learning applications include ______.
A. | Image classification, Real-time visual tracking |
B. | Autonomous car driving, Logistic optimization |
C. | Bioinformatics, Speech recognition |
D. | All above |
Answer» D. All above
8. |
According to ______, it’s a key success factor for the survival and evolution of all species.
A. | Claude Shannon's theory |
B. | Gini Index |
C. | Darwin’s theory |
D. | None of above |
Answer» C. Darwin’s theory
9. |
If there is only a discrete number of possible outcomes (called categories), the process becomes a ______.
A. | Regression |
B. | Classification. |
C. | Modelfree |
D. | Categories |
Answer» B. Classification
10. |
Reinforcement learning is particularly efficient when ______.
A. | the environment is not completely deterministic |
B. | it's often very dynamic |
C. | it's impossible to have a precise error measure |
D. | All above |
Answer» D. All above
11. |
During the last few years, many ______ algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state. |
A. | Logical |
B. | Classical |
C. | Classification |
D. | None of above |
Answer» D. None of above
12. |
______ is much more difficult because it's necessary to determine a supervised strategy to train a model for each feature and, finally, to predict their value.
A. | Removing the whole line |
B. | Creating sub-model to predict those features |
C. | Using an automatic strategy to input them according to the other known values |
D. | All above |
Answer» B. Creating sub-model to predict those features
13. |
There are also many univariate methods that can be used in order to select the best features according to specific criteria based on ______.
A. | F-tests and p-values |
B. | chi-square |
C. | ANOVA |
D. | All above |
Answer» A. F-tests and p-values
14. |
It's possible to use a different placeholder through the parameter ______.
A. | regression |
B. | classification |
C. | random_state |
D. | missing_values |
Answer» D. missing_values
15. |
If you need a more powerful scaling feature, with a superior control on outliers and the possibility to select a quantile range, there's also the class________. |
A. | RobustScaler |
B. | DictVectorizer |
C. | LabelBinarizer |
D. | FeatureHasher |
Answer» A. RobustScaler
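A sketch of what RobustScaler does, assuming its default quantile range of (25%, 75%): center on the median and scale by the interquartile range, so a handful of outliers barely move the statistics (unlike mean/standard-deviation scaling):

```python
def robust_scale(values, q_low=0.25, q_high=0.75):
    """Center on the median and scale by the interquartile range,
    so outliers have little influence on the scaling statistics."""
    s = sorted(values)

    def quantile(q):
        # Linear interpolation between the two nearest order statistics.
        pos = q * (len(s) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (pos - lo)

    median = quantile(0.5)
    iqr = quantile(q_high) - quantile(q_low)
    return [(v - median) / iqr for v in values]
```

The quantile range is the "superior control on outliers" the question refers to: widening or narrowing (q_low, q_high) changes how much of the tails is ignored when computing the scale.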
16. |
scikit-learn also provides a class for per-sample normalization, Normalizer. It can apply ______ to each element of a dataset.
A. | max, l0 and l1 norms |
B. | max, l1 and l2 norms |
C. | max, l2 and l3 norms |
D. | max, l3 and l4 norms |
Answer» B. max, l1 and l2 norms
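In practice, applying the max, l1, or l2 norm means dividing every sample by its chosen norm so the result has unit norm; a minimal per-sample sketch:

```python
def normalize(sample, norm="l2"):
    """Scale one sample by its max, l1 or l2 norm (the three options
    that Normalizer supports)."""
    if norm == "max":
        scale = max(abs(x) for x in sample)
    elif norm == "l1":
        scale = sum(abs(x) for x in sample)
    else:  # "l2"
        scale = sum(x * x for x in sample) ** 0.5
    return [x / scale for x in sample]

print(normalize([3.0, 4.0]))  # l2: [0.6, 0.8]
```

Note this is per-sample (each row is scaled independently), unlike the feature-wise scalers discussed elsewhere in this set.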
17. |
______ performs a PCA with non-linearly separable data sets.
A. | SparsePCA |
B. | KernelPCA |
C. | SVD |
D. | None of the Mentioned |
Answer» B. KernelPCA
18. |
The ______ parameter can assume different values which determine how the data matrix is initially processed.
A. | run |
B. | start |
C. | init |
D. | stop |
Answer» C. init
19. |
The parameter ______ allows specifying the percentage of elements to put into the test/training set.
A. | test_size |
B. | train_size |
C. | All above |
D. | None of these |
Answer» C. All above
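The idea behind test_size/train_size can be sketched in a few lines of plain Python. scikit-learn's own train_test_split also shuffles before splitting; the seed parameter here is a stand-in for its random_state:

```python
import random

def split_dataset(data, test_size=0.25, seed=0):
    """Shuffle, then put a test_size fraction of the samples in the
    test set and the remainder in the training set."""
    rng = random.Random(seed)
    shuffled = list(data)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]

train, test = split_dataset(list(range(100)), test_size=0.25)
```

With 100 samples and test_size=0.25, the test set gets 25 samples and the training set the remaining 75.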
20. |
In many classification problems, the target ______ is made up of categorical labels which cannot immediately be processed by any algorithm. |
A. | random_state |
B. | dataset |
C. | test_size |
D. | All above |
Answer» B. dataset
21. |
______ adopts a dictionary-oriented approach, associating to each category label a progressive integer number.
A. | LabelEncoder class |
B. | LabelBinarizer class |
C. | DictVectorizer |
D. | FeatureHasher |
Answer» A. LabelEncoder class
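A sketch of the dictionary-oriented approach: each distinct label gets a progressive integer (scikit-learn's LabelEncoder sorts the classes first, which this sketch mimics):

```python
def label_encode(labels):
    """Associate each distinct category label with a progressive
    integer, sorting the classes first as LabelEncoder does."""
    mapping = {label: index for index, label in enumerate(sorted(set(labels)))}
    return [mapping[label] for label in labels], mapping

encoded, mapping = label_encode(["red", "green", "red", "blue"])
```

The inverse mapping (integer back to label) is what LabelEncoder's inverse_transform provides, which is handy after a model predicts numeric classes.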
22. |
Features being classified are independent of each other in the Naïve Bayes Classifier.
A. | False |
B. | true |
Answer» B. true
23. |
Features being classified are ______ of each other in the Naïve Bayes Classifier.
A. | Independent |
B. | Dependent |
C. | Partial Dependent |
D. | None |
Answer» A. Independent
24. |
Naive Bayes classifiers are a collection of ______ algorithms.
A. | Classification |
B. | Clustering |
C. | Regression |
D. | All |
Answer» A. Classification
25. |
Naive Bayes classifiers are ______ learning algorithms.
A. | Supervised |
B. | Unsupervised |
C. | Both |
D. | None |
Answer» A. Supervised
26. |
Bernoulli Naïve Bayes Classifier assumes a ______ distribution.
A. | Continuous |
B. | Discrete |
C. | Binary |
Answer» C. Binary
27. |
Multinomial Naïve Bayes Classifier assumes a ______ distribution.
A. | Continuous |
B. | Discrete |
C. | Binary |
Answer» B. Discrete
28. |
Bayes’ theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. |
A. | True |
B. | false |
Answer» A. True
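A worked numeric check of Bayes' theorem, P(A|B) = P(B|A)·P(A)/P(B), with hypothetical numbers chosen only to exercise the formula: a 90%-sensitive test for a condition with 1% prevalence and a 5% false-positive rate:

```python
# All numbers below are hypothetical illustrations.
p_a = 0.01                 # prior P(A): prevalence of the condition
p_b_given_a = 0.90         # likelihood P(B|A): test sensitivity
p_b_given_not_a = 0.05     # false-positive rate P(B|not A)

# Total probability of a positive result, P(B)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: posterior P(A|B)
p_a_given_b = p_b_given_a * p_a / p_b
```

Despite the accurate test, the posterior comes out around 15%, because the prior is so low: exactly the "prior knowledge of conditions related to the event" that the question describes.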
29. |
Gaussian Naïve Bayes Classifier assumes a ______ distribution.
A. | Continuous |
B. | Discrete |
C. | Binary |
Answer» A. Continuous
30. |
A Gaussian distribution, when plotted, gives a bell-shaped curve which is symmetric about the ______ of the feature values.
A. | Mean |
B. | Variance |
C. | Discrete |
D. | Random |
Answer» A. Mean
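The symmetry about the mean is easy to verify numerically with the Gaussian density itself; a self-contained sketch using only the standard library:

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of the normal distribution: the bell-shaped curve."""
    return (math.exp(-((x - mean) ** 2) / (2 * std ** 2))
            / (std * math.sqrt(2 * math.pi)))
```

Points equidistant from the mean on either side have identical density (because only (x − mean)² enters the formula), and the peak sits at the mean itself.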
31. |
SVMs directly give us the posterior probabilities P(y = 1|x) and P(y = −1|x).
A. | True |
B. | false |
Answer» B. false
32. |
SVM is a ______ algorithm.
A. | Classification |
B. | Clustering |
C. | Regression |
D. | All |
Answer» A. Classification
33. |
The linear SVM classifier works by drawing a straight line between two classes |
A. | True |
B. | false |
Answer» A. True
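The "straight line" is the decision boundary w·x + b = 0; once a linear SVM is trained, classifying a point reduces to checking which side of that line it falls on. A sketch with hypothetical, hand-picked weights (not the output of any actual training run):

```python
def linear_classify(point, w, b):
    """Classify by the sign of the decision function w.x + b,
    i.e. by which side of the separating line the point lies on."""
    score = sum(wi * xi for wi, xi in zip(w, point)) + b
    return 1 if score >= 0 else -1

# Hypothetical boundary x1 - x2 = 0: points below the diagonal
# are class +1, points above it are class -1.
print(linear_classify((2.0, 1.0), (1.0, -1.0), 0.0))
```

What distinguishes the SVM from other linear classifiers is how w and b are chosen during training (maximizing the margin to the nearest samples), not how the final decision is made.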
34. |
SVM is a ______ learning algorithm.
A. | Supervised |
B. | Unsupervised |
C. | Both |
D. | None |
Answer» A. Supervised
35. |
Even if there are no actual supervisors, ______ learning is also based on feedback provided by the environment.
A. | Supervised |
B. | Reinforcement |
C. | Unsupervised |
D. | None of the above |
Answer» B. Reinforcement
36. |
It is necessary to allow the model to develop a generalization ability and avoid a common problem called ______.
A. | Overfitting |
B. | Overlearning |
C. | Classification |
D. | Regression |
Answer» A. Overfitting
37. |
Techniques that involve the usage of both labeled and unlabeled data are called ______.
A. | Supervised |
B. | Semi-supervised |
C. | Unsupervised |
D. | None of the above |
Answer» B. Semi-supervised
38. |
A supervised scenario is characterized by the concept of a _____. |
A. | Programmer |
B. | Teacher |
C. | Author |
D. | Farmer |
Answer» B. Teacher
39. |
Overlearning is caused by an excessive ______.
A. | Capacity |
B. | Regression |
C. | Reinforcement |
D. | Accuracy |
Answer» A. Capacity
40. |
_____ provides some built-in datasets that can be used for testing purposes. |
A. | scikit-learn |
B. | classification |
C. | regression |
D. | None of the above |
Answer» A. scikit-learn
41. |
While using ______, all labels are turned into sequential numbers.
A. | LabelEncoder class |
B. | LabelBinarizer class |
C. | DictVectorizer |
D. | FeatureHasher |
Answer» A. LabelEncoder class
42. |
scikit-learn offers the class ______, which is responsible for filling the holes using a strategy based on the mean, median, or frequency.
A. | LabelEncoder |
B. | LabelBinarizer |
C. | DictVectorizer |
D. | Imputer |
Answer» D. Imputer
43. |
______ produce sparse matrices of real numbers that can be fed into any machine learning model.
A. | DictVectorizer |
B. | FeatureHasher |
C. | Both A & B |
D. | None of the Mentioned |
Answer» C. Both A & B
44. |
Which of the following scale data by removing elements that don't belong to a given range or by considering a maximum absolute value?
A. | MinMaxScaler |
B. | MaxAbsScaler |
C. | Both A & B |
D. | None of the Mentioned |
Answer» C. Both A & B
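A rough pure-Python rendering of the two classes' behaviour: MinMaxScaler maps a feature's values linearly into a target range, while MaxAbsScaler just divides by the largest absolute value (so zeros in sparse data stay zero):

```python
def minmax_scale(values, low=0.0, high=1.0):
    """Rescale values linearly into [low, high], as MinMaxScaler does."""
    lo, hi = min(values), max(values)
    return [low + (v - lo) * (high - low) / (hi - lo) for v in values]

def maxabs_scale(values):
    """Divide by the maximum absolute value, as MaxAbsScaler does."""
    m = max(abs(v) for v in values)
    return [v / m for v in values]
```

Note that a single extreme outlier dominates both statistics, which is why the RobustScaler from the earlier question exists as the outlier-resistant alternative.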
45. |
scikit-learn also provides a class for per-sample normalization, ______.
A. | Normalizer |
B. | Imputer |
C. | Classifier |
D. | All above |
Answer» A. Normalizer
46. |
______ dataset with many features contains information proportional to the independence of all features and their variance.
A. | normalized |
B. | unnormalized |
C. | Both A & B |
D. | None of the Mentioned |
Answer» B. unnormalized
47. |
In order to assess how much information is brought by each component, and the correlation among them, a useful tool is the ______.
A. | Concurrent matrix |
B. | Convergence matrix |
C. | Supportive matrix |
D. | Covariance matrix |
Answer» D. Covariance matrix
48. |
The ______ parameter can assume different values which determine how the data matrix is initially processed.
A. | run |
B. | start |
C. | init |
D. | stop |
Answer» C. init
49. |
______ allows exploiting the natural sparsity of data while extracting principal components.
A. | SparsePCA |
B. | KernelPCA |
C. | SVD |
D. | init parameter |
Answer» A. SparsePCA
50. |
Which of the following steps/assumptions in regression modeling impacts the trade-off between under-fitting and over-fitting the most?
A. | The polynomial degree |
B. | Whether we learn the weights by matrix inversion or gradient descent |
C. | The use of a constant-term |
Answer» A. The polynomial degree