This section includes 607 MCQs, curated to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation. Choose a topic below to get started.
| 551. |
Which type of learning requires self-assessment to identify patterns within data? |
| A. | unsupervised learning |
| B. | supervised learning |
| C. | semisupervised learning |
| D. | reinforced learning |
| Answer» A. unsupervised learning | |
| 552. |
In the example of predicting the number of babies based on the stork population, the number of babies is |
| A. | outcome |
| B. | feature |
| C. | observation |
| D. | attribute |
| Answer» A. outcome | |
| 553. |
A telecommunication company wants to segment its customers into distinct groups; this is an example of |
| A. | supervised learning |
| B. | reinforcement learning |
| C. | unsupervised learning |
| D. | data extraction |
| Answer» C. unsupervised learning | |
| 554. |
A person trained to interact with a human expert in order to capture their knowledge. |
| A. | knowledge programmer |
| B. | knowledge developer |
| C. | knowledge engineer |
| D. | knowledge extractor |
| Answer» C. knowledge engineer | |
| 555. |
Database query is used to uncover this type of knowledge. |
| A. | deep |
| B. | hidden |
| C. | shallow |
| D. | multidimensional |
| Answer» C. shallow | |
| 556. |
Like the probabilistic view, the ________ view allows us to associate a probability of membership with each classification. |
| A. | exemplar |
| B. | deductive |
| C. | classical |
| D. | inductive |
| Answer» A. exemplar | |
| 557. |
What characterizes a hyperplane in the geometric model of machine learning? |
| A. | a plane with one dimension fewer than the number of input attributes |
| B. | a plane with two dimensions fewer than the number of input attributes |
| C. | a plane with one dimension more than the number of input attributes |
| D. | a plane with two dimensions more than the number of input attributes |
| Answer» A. a plane with one dimension fewer than the number of input attributes | |
| 558. |
Supervised learning and unsupervised clustering both require which of the following? |
| A. | output attribute. |
| B. | hidden attribute. |
| C. | input attribute. |
| D. | categorical attribute |
| Answer» C. input attribute. | |
| 559. |
Which of the following techniques would perform better for reducing dimensions of a data set? |
| A. | removing columns which have too many missing values |
| B. | removing columns which have high variance in data |
| C. | removing columns with dissimilar data trends |
| D. | none of these |
| Answer» A. removing columns which have too many missing values | |
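To make the answer concrete, here is a minimal pandas sketch of dropping columns with too many missing values (the toy frame and the 50% threshold are illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Toy frame: column "b" is mostly missing (hypothetical data).
df = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [np.nan, np.nan, np.nan, 4.0],
    "c": [1.0, np.nan, 3.0, 4.0],
})

# Keep only columns where fewer than 50% of the values are missing.
reduced = df.loc[:, df.isna().mean() < 0.5]
print(list(reduced.columns))  # "b" (75% missing) is dropped
```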
| 560. |
Dimensionality reduction algorithms are one of the possible ways to reduce the computation time required to build a model. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 561. |
PCA is |
| A. | forward feature selection |
| B. | backward feature selection |
| C. | feature extraction |
| D. | all of the above |
| Answer» C. feature extraction | |
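As a quick illustration that PCA *extracts* new features (linear combinations of all inputs) rather than selecting a subset of them, a minimal NumPy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # 100 samples, 5 original features

# PCA via SVD: center the data, then project onto the top-k
# right singular vectors. Each new feature mixes all 5 inputs.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                       # extracted features
print(Z.shape)                          # (100, 2)
```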
| 562. |
A feature F1 can take the values A, B, C, D, E, and F, and represents the grade of students from a college. Here the feature type is |
| A. | nominal |
| B. | ordinal |
| C. | categorical |
| D. | boolean |
| Answer» B. ordinal | |
| 563. |
The output of the training process in machine learning is |
| A. | machine learning model |
| B. | machine learning algorithm |
| C. | null |
| D. | accuracy |
| Answer» A. machine learning model | |
| 564. |
The following is a powerful distance metric used by the geometric model |
| A. | euclidean distance |
| B. | manhattan distance |
| C. | both a and b |
| D. | square distance |
| Answer» C. both a and b | |
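A minimal NumPy sketch of the two metrics named in the answer (the points are illustrative):

```python
import numpy as np

p = np.array([1.0, 2.0])
q = np.array([4.0, 6.0])

euclidean = np.sqrt(np.sum((p - q) ** 2))   # L2 norm: straight-line distance
manhattan = np.sum(np.abs(p - q))           # L1 norm: axis-aligned distance

print(euclidean)  # 5.0 (a 3-4-5 triangle)
print(manhattan)  # 7.0
```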
| 565. |
The type of a matrix decomposition model is |
| A. | descriptive model |
| B. | predictive model |
| C. | logical model |
| D. | none of the above |
| Answer» A. descriptive model | |
| 566. |
The following are types of supervised learning |
| A. | classification |
| B. | regression |
| C. | subgroup discovery |
| D. | all of the above |
| Answer» D. all of the above | |
| 567. |
Which of the following is a good test dataset characteristic? |
| A. | large enough to yield meaningful results |
| B. | is representative of the dataset as a whole |
| C. | both a and b |
| D. | none of the above |
| Answer» C. both a and b | |
| 568. |
You are given reviews of a few Netflix series marked as positive, negative, or neutral. Classifying reviews of a new Netflix series is an example of |
| A. | supervised learning |
| B. | unsupervised learning |
| C. | semisupervised learning |
| D. | reinforcement learning |
| Answer» A. supervised learning | |
| 570. |
Of the following examples, which would you address using a supervised learning algorithm? |
| A. | given email labeled as spam or not spam, learn a spam filter |
| B. | given a set of news articles found on the web, group them into set of articles about the same story. |
| C. | given a database of customer data, automatically discover market segments and group customers into different market segments. |
| D. | find the patterns in market basket analysis |
| Answer» A. given email labeled as spam or not spam, learn a spam filter | |
| 571. |
The problem of finding hidden structure in unlabeled data is called… |
| A. | supervised learning |
| B. | unsupervised learning |
| C. | reinforcement learning |
| D. | none of the above |
| Answer» B. unsupervised learning | |
| 572. |
Data used to build a data mining model. |
| A. | training data |
| B. | validation data |
| C. | test data |
| D. | hidden data |
| Answer» A. training data | |
| 573. |
What does dimensionality reduction reduce? |
| A. | stochastics |
| B. | collinearity |
| C. | performance |
| D. | entropy |
| Answer» B. collinearity | |
| 574. |
What characterizes unlabeled examples in machine learning? |
| A. | there is no prior knowledge |
| B. | there is no confusing knowledge |
| C. | there is prior knowledge |
| D. | there is plenty of confusing knowledge |
| Answer» A. there is no prior knowledge | |
| 575. |
Which of the following is the best machine learning method? |
| A. | scalable |
| B. | accuracy |
| C. | fast |
| D. | all of the above |
| Answer» D. all of the above | |
| 576. |
PCA can be used for projecting and visualizing data in lower dimensions. |
| A. | true |
| B. | false |
| Answer» A. true | |
| 577. |
In PCA, the number of input dimensions is equal to the number of principal components |
| A. | true |
| B. | false |
| Answer» A. true | |
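A small NumPy check of this fact: d-dimensional data has a d × d covariance matrix, so its eigendecomposition yields exactly d principal components (synthetic data):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))            # 4 input dimensions

# The covariance matrix of 4-dimensional data is 4 x 4, so the
# eigendecomposition produces 4 eigenvectors (principal components).
cov = np.cov(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
print(eigenvectors.shape[1])            # 4 components for 4 inputs
```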
| 578. |
In the following type of feature selection method, we start with an empty feature set |
| A. | forward feature selection |
| B. | backward feature selection |
| C. | both a and b |
| D. | none of the above |
| Answer» A. forward feature selection | |
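A minimal sketch of forward feature selection: start from an empty set and greedily add whichever feature most reduces least-squares error (synthetic data; the scoring rule is an illustrative assumption, not the only choice):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
# Only features 1 and 3 actually drive the target.
y = 3 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)

selected = []                           # forward selection starts empty
candidates = list(range(X.shape[1]))
for _ in range(2):                      # greedily add the 2 best features
    def score(j):
        cols = selected + [j]
        A = np.column_stack([X[:, cols], np.ones(len(y))])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        return -np.sum(resid ** 2)      # higher score = lower residual error
    best = max(candidates, key=score)
    selected.append(best)
    candidates.remove(best)
print(sorted(selected))                 # recovers the informative features
```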
| 579. |
In what type of learning is labelled training data used? |
| A. | unsupervised learning |
| B. | supervised learning |
| C. | reinforcement learning |
| D. | active learning |
| Answer» B. supervised learning | |
| 580. |
If a machine learning model's output involves a target variable, then that model is called a |
| A. | descriptive model |
| B. | predictive model |
| C. | reinforcement learning |
| D. | all of the above |
| Answer» B. predictive model | |
| 581. |
Application of machine learning methods to large databases is called |
| A. | data mining. |
| B. | artificial intelligence |
| C. | big data computing |
| D. | internet of things |
| Answer» A. data mining. | |
| 583. |
The cost parameter in the SVM means: |
| A. | the number of cross- validations to be made |
| B. | the kernel to be used |
| C. | the tradeoff between misclassification and the simplicity of the model |
| D. | none of the above |
| Answer» C. the tradeoff between misclassification and the simplicity of the model | |
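A small scikit-learn sketch of that tradeoff: a small C tolerates misclassification and keeps the model simple (wide margin, many support vectors), while a large C penalizes errors harder (synthetic data, fixed seed):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two overlapping clusters (illustrative synthetic data).
X, y = make_blobs(n_samples=200, centers=2, cluster_std=3.0, random_state=0)

loose = SVC(kernel="linear", C=0.01).fit(X, y)    # simpler, wider margin
strict = SVC(kernel="linear", C=100.0).fit(X, y)  # punishes misclassification

# A small C allows more margin violations, so more points end up
# as support vectors than with a large C.
print(loose.n_support_.sum(), strict.n_support_.sum())
```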
| 584. |
Which of the following is true about Naive Bayes ? |
| A. | assumes that all the features in a dataset are equally important |
| B. | assumes that all the features in a dataset are independent |
| C. | both a and b |
| D. | none of the above options |
| Answer» C. both a and b | |
| 585. |
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for? |
| A. | underfitting |
| B. | nothing, the model is perfect |
| C. | overfitting |
| Answer» C. overfitting | |
| 586. |
The Gaussian Naïve Bayes classifier assumes a ________ distribution |
| A. | continuous |
| B. | discrete |
| C. | binary |
| Answer» A. continuous | |
| 587. |
Which of the following is not supervised learning? |
| A. | pca |
| B. | decision tree |
| C. | naive bayesian |
| D. | linear regression |
| Answer» A. pca | |
| 588. |
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that the new feature will dominate others 2. Sometimes, feature normalization is not feasible in case of categorical variables 3. Feature normalization always helps when we use the Gaussian kernel in SVM |
| A. | 1 |
| B. | 1 and 2 |
| C. | 1 and 3 |
| D. | 2 and 3 |
| Answer» C. 1 and 3 | |
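A hedged sketch of the usual practice behind this question: standardize features before an RBF-kernel SVM so that no single feature dominates the kernel's distance computation (synthetic data; the 1000× rescaling of one column is an illustrative assumption):

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X[:, 0] *= 1000.0                       # one feature on a much larger scale

# Scale first, then fit the Gaussian (RBF) kernel SVM.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print(round(model.score(X, y), 2))
```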
| 589. |
The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs? |
| A. | large datasets |
| B. | small datasets |
| C. | medium sized datasets |
| D. | size does not matter |
| Answer» A. large datasets | |
| 590. |
Problem: players will play if the weather is sunny. Is this statement correct? |
| A. | true |
| B. | false |
| Answer» A. true | |
| 591. |
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. What do you expect will happen with bias and variance as you increase the size of training data? |
| A. | bias increases and variance increases |
| B. | bias decreases and variance increases |
| C. | bias decreases and variance decreases |
| D. | bias increases and variance decreases |
| Answer» D. bias increases and variance decreases | |
| 592. |
The Multinomial Naïve Bayes classifier assumes a ________ distribution |
| A. | continuous |
| B. | discrete |
| C. | binary |
| Answer» B. discrete | |
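A minimal scikit-learn sketch contrasting the two classifiers: Gaussian NB for continuous, real-valued features, Multinomial NB for discrete counts such as word frequencies (all data synthetic):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB

rng = np.random.default_rng(0)
y = np.array([0] * 50 + [1] * 50)

# Gaussian NB: continuous features drawn from two separated normals.
X_cont = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
gnb = GaussianNB().fit(X_cont, y)

# Multinomial NB: discrete non-negative counts (Poisson-distributed here).
X_counts = rng.poisson(lam=[[2, 5]] * 50 + [[5, 2]] * 50)
mnb = MultinomialNB().fit(X_counts, y)

print(gnb.score(X_cont, y), mnb.score(X_counts, y))
```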
| 593. |
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen with the mean training error? |
| A. | increase |
| B. | decrease |
| C. | remain constant |
| D. | can’t say |
| Answer» E. | |
| 594. |
What is/are true about ridge regression? 1. When lambda is 0, the model works like a linear regression model 2. When lambda is 0, the model doesn’t work like a linear regression model 3. When lambda goes to infinity, we get very, very small coefficients approaching 0 4. When lambda goes to infinity, we get very, very large coefficients approaching infinity |
| A. | 1 and 3 |
| B. | 1 and 4 |
| C. | 2 and 3 |
| D. | 2 and 4 |
| Answer» A. 1 and 3 | |
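A minimal NumPy sketch of the closed-form ridge solution, checking both true statements: lambda = 0 recovers plain least squares, and a huge lambda shrinks the coefficients toward 0 (synthetic data):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def ridge(X, y, lam):
    # Closed form: w = (X^T X + lambda I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(ridge(X, y, 0.0), w_ols))      # lambda = 0 -> linear regression
print(np.linalg.norm(ridge(X, y, 1e6)) < 1e-3)   # huge lambda -> coefficients near 0
```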
| 595. |
Which of the following selects the best K high-score features? |
| A. | selectpercentile |
| B. | featurehasher |
| C. | selectkbest |
| D. | all above |
| Answer» C. selectkbest | |
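A minimal scikit-learn example of SelectKBest keeping the K highest-scoring features (synthetic data; k=3 is an illustrative choice):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)

# Keep the 3 features with the highest univariate F-scores.
selector = SelectKBest(score_func=f_classif, k=3)
X_new = selector.fit_transform(X, y)
print(X_new.shape)        # (200, 3)
```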
| 596. |
It's possible to specify if the scaling process must include both mean and standard deviation using the parameters . |
| A. | with_mean=True/False |
| B. | with_std=True/False |
| C. | both a & b |
| D. | none of the mentioned |
| Answer» C. both a & b | |
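A small example of both parameters on scikit-learn's StandardScaler (the toy matrix is illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# with_mean / with_std control which part of standardization is applied.
full = StandardScaler(with_mean=True, with_std=True).fit_transform(X)
center_only = StandardScaler(with_mean=True, with_std=False).fit_transform(X)

print(np.allclose(full.mean(axis=0), 0), np.allclose(full.std(axis=0), 1))
print(np.allclose(center_only.std(axis=0), X.std(axis=0)))  # spread unchanged
```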
| 597. |
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ________ valid options |
| A. | 1 |
| B. | 2 |
| C. | 3 |
| D. | 4 |
| Answer» B. 2 | |
| 598. |
________ which can accept a NumPy RandomState generator or an integer seed. |
| A. | make_blobs |
| B. | random_state |
| C. | test_size |
| D. | training_size |
| Answer» B. random_state | |
| 599. |
scikit-learn also provides functions for creating dummy datasets from scratch: |
| A. | make_classification() |
| B. | make_regression() |
| C. | make_blobs() |
| D. | all above |
| Answer» D. all above | |
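A minimal example of the three generators from the options, each seeded via random_state for reproducibility:

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

# Each generator accepts random_state (an int seed or a NumPy RandomState),
# so the synthetic dataset is reproducible across runs.
Xc, yc = make_classification(n_samples=100, n_features=4, random_state=42)
Xr, yr = make_regression(n_samples=100, n_features=4, random_state=42)
Xb, yb = make_blobs(n_samples=100, centers=3, random_state=42)

print(Xc.shape, Xr.shape, Xb.shape)   # (100, 4) (100, 4) (100, 2)
```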
| 600. |
Let’s say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) on the categorical feature(s). What challenges may you face if you apply OHE on a categorical variable of the train dataset? |
| A. | all categories of categorical variable are not present in the test dataset. |
| B. | frequency distribution of categories is different in train as compared to the test dataset. |
| C. | train and test always have same distribution. |
| D. | both a and b |
| Answer» D. both a and b | |
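A hedged sketch of a common mitigation for challenge (a): scikit-learn's OneHotEncoder with handle_unknown="ignore" encodes test categories never seen in training as all-zero rows instead of raising an error (toy data):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

train = np.array([["red"], ["green"], ["blue"]])
test = np.array([["red"], ["purple"]])      # "purple" never seen in training

# handle_unknown="ignore" maps unseen categories to an all-zero row.
enc = OneHotEncoder(handle_unknown="ignore").fit(train)
encoded = enc.transform(test).toarray()     # columns: blue, green, red
print(encoded)
```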