This section includes 607 curated multiple-choice questions to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation. Choose a topic below to get started.
| 451. |
Neural networks |
| A. | optimize a convex cost function |
| B. | always output values between 0 and 1 |
| C. | can be used for regression as well as classification |
| D. | all of the above |
| Answer» C. can be used for regression as well as classification | |
| 452. |
The difference between the actual Y value and the predicted Y value found using a regression equation is called the |
| A. | slope |
| B. | residual |
| C. | outlier |
| D. | scatter plot |
| Answer» B. residual | |
| 453. |
MLE estimates are often undesirable because |
| A. | they are biased |
| B. | they have high variance |
| C. | they are not consistent estimators |
| D. | none of the above |
| Answer» B. they have high variance | |
| 454. |
A machine learning problem involves four attributes plus a class. The attributes have 3, 2, 2, and 2 possible values each. The class has 3 possible values. How many maximum possible different examples are there? |
| A. | 12 |
| B. | 24 |
| C. | 48 |
| D. | 72 |
| Answer» D. 72 | |
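Worked check: the number of distinct examples is the product of all attribute cardinalities times the number of class values, 3 × 2 × 2 × 2 × 3 = 72. A minimal sketch in Python:

```python
# Distinct examples = product of attribute cardinalities times class values.
from math import prod

attribute_values = [3, 2, 2, 2]  # possible values for each of the four attributes
num_classes = 3

print(prod(attribute_values) * num_classes)  # 24 attribute combinations * 3 classes = 72
```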
| 455. |
Based on a survey, it was found that the probability that a person likes to watch serials is 0.25 and the probability that a person likes to watch Netflix series is 0.43. The probability that a person likes to watch both serials and Netflix series is 0.12. What is the probability that a person doesn't like to watch either? |
| A. | 0.32 |
| B. | 0.2 |
| C. | 0.44 |
| D. | 0.56 |
| Answer» C. 0.44 | |
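By inclusion-exclusion, P(serials or Netflix) = 0.25 + 0.43 − 0.12 = 0.56, so P(neither) = 1 − 0.56 = 0.44. A one-line check:

```python
# P(neither) = 1 - P(A or B) = 1 - (P(A) + P(B) - P(A and B))
p_serials, p_netflix, p_both = 0.25, 0.43, 0.12
print(1 - (p_serials + p_netflix - p_both))  # ~0.44 (up to float rounding)
```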
| 456. |
Which of the following methods is used for multiclass classification? |
| A. | one vs rest |
| B. | loocv |
| C. | all vs one |
| D. | one vs another |
| Answer» A. one vs rest | |
| 457. |
What is the accuracy based on the following confusion matrix for three-class classification? Confusion matrix C = [14 0 0] [1 15 0] [0 0 6] |
| A. | 0.75 |
| B. | 0.97 |
| C. | 0.95 |
| D. | 0.85 |
| Answer» B. 0.97 | |
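Multiclass accuracy is the trace of the confusion matrix divided by the total count: 35/36 ≈ 0.97. A quick check with NumPy:

```python
import numpy as np

# Rows/columns follow the matrix given in the question.
C = np.array([[14, 0, 0],
              [1, 15, 0],
              [0, 0, 6]])
print(np.trace(C) / C.sum())  # 35/36 ~= 0.972
```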
| 458. |
Which of the following is true about SVM? 1. Kernel functions map low-dimensional data to a high-dimensional space. 2. A kernel is a similarity function |
| A. | 1 is true, 2 is false |
| B. | 1 is false, 2 is true |
| C. | 1 is true, 2 is true |
| D. | 1 is false, 2 is false |
| Answer» C. 1 is true, 2 is true | |
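Both statements hold: a kernel returns the inner product of its inputs in an implicit higher-dimensional space, which acts as a similarity score. A minimal sketch for the quadratic kernel K(x, y) = (x·y)², whose explicit 2-D feature map is φ(x) = (x₁², √2·x₁x₂, x₂²):

```python
import numpy as np

def phi(x):
    # Explicit feature map for the quadratic kernel in two dimensions.
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x, y = np.array([1.0, 2.0]), np.array([3.0, 4.0])
print(np.dot(x, y) ** 2)       # kernel value: (1*3 + 2*4)^2 = 121
print(np.dot(phi(x), phi(y)))  # identical inner product in the mapped space
```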
| 459. |
An SVM uses an RBF kernel with appropriate parameters to perform binary classification on data that is non-linearly separable. In this scenario |
| A. | the decision boundary in the transformed feature space is non-linear |
| B. | the decision boundary in the transformed feature space is linear |
| C. | the decision boundary in the original feature space is not considered |
| D. | the decision boundary in the original feature space is linear |
| Answer» B. the decision boundary in the transformed feature space is linear | |
| 460. |
An SVM has a quadratic (degree-2 polynomial) kernel with the slack penalty C as one hyperparameter. What would happen if we use a very large value for C? |
| A. | we can still classify the data correctly for given setting of hyper parameter c |
| B. | we can not classify the data correctly for given setting of hyper parameter c |
| C. | we can not classify the data at all |
| D. | data can be classified correctly without any impact of c |
| Answer» A. we can still classify the data correctly for given setting of hyper parameter c | |
| 461. |
The soft-margin SVM is preferred over the hard-margin SVM when |
| A. | the data is linearly separable |
| B. | the data is noisy and contains overlapping points |
| C. | the data is not noisy and linearly separable |
| D. | the data is noisy and linearly separable |
| Answer» B. the data is noisy and contains overlapping points | |
| 462. |
Which of the following is a categorical data? |
| A. | branch of bank |
| B. | expenditure in rupees |
| C. | price of house |
| D. | weight of a person |
| Answer» A. branch of bank | |
| 463. |
Which one of the following is suitable? 1. When the hypothesis space is richer, overfitting is more likely. 2. When the feature space is larger, overfitting is more likely. |
| A. | true, false |
| B. | false, true |
| C. | true,true |
| D. | false,false |
| Answer» C. true,true | |
| 464. |
During the treatment of cancer patients, the doctor needs to be very careful about which patients should be given chemotherapy. Which metric should we use to decide which patients should be given chemotherapy? |
| A. | precision |
| B. | recall |
| C. | call |
| D. | score |
| Answer» B. recall | |
| 465. |
Which of the following is not a kernel method in SVM? |
| A. | linear kernel |
| B. | polynomial kernel |
| C. | rbf kernel |
| D. | nonlinear kernel |
| Answer» D. nonlinear kernel | |
| 466. |
Which of the following are components of generalization error? |
| A. | bias |
| B. | variance |
| C. | both of them |
| D. | none of them |
| Answer» C. both of them | |
| 467. |
What is the precision value for the following confusion matrix of binary classification? |
| A. | 0.91 |
| B. | 0.09 |
| C. | 0.9 |
| D. | 0.95 |
| Answer» C. 0.9 | |
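The confusion matrix referenced by this question is not reproduced here, but precision is TP / (TP + FP). A sketch with hypothetical counts chosen only to match the stated answer of 0.9:

```python
# Hypothetical counts for illustration; the question's actual matrix is not shown.
tp, fp = 90, 10
print(tp / (tp + fp))  # precision = 0.9
```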
| 468. |
SVMs are less effective when |
| A. | the data is linearly separable |
| B. | the data is clean and ready to use |
| C. | the data is noisy and contains overlapping points |
| D. | option 1 and option 2 |
| Answer» C. the data is noisy and contains overlapping points | |
| 469. |
Which among the following is the most appropriate kernel that can be used with an SVM to separate the classes? |
| A. | linear kernel |
| B. | gaussian rbf kernel |
| C. | polynomial kernel |
| D. | option 1 and option 3 |
| Answer» C. polynomial kernel | |
| 470. |
The type of dataset available in supervised learning is |
| A. | unlabeled dataset |
| B. | labeled dataset |
| C. | csv file |
| D. | excel file |
| Answer» B. labeled dataset | |
| 471. |
Perceptron Classifier is |
| A. | unsupervised learning algorithm |
| B. | semi-supervised learning algorithm |
| C. | supervised learning algorithm |
| D. | soft margin classifier |
| Answer» C. supervised learning algorithm | |
| 472. |
Imagine you are solving a classification problem with a highly imbalanced class. The majority class is observed 99% of the time in the training data. Your model has 99% accuracy after taking the predictions on test data. Which of the following is true in such a case? 1. Accuracy metric is not a good idea for imbalanced class problems. 2. Accuracy metric is a good idea for imbalanced class problems. 3. Precision and recall metrics are good for imbalanced class problems. 4. Precision and recall metrics aren't good for imbalanced class problems. |
| A. | 1 and 3 |
| B. | 1 and 4 |
| C. | 2 and 3 |
| D. | 2 and 4 |
| Answer» A. 1 and 3 | |
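A majority-class predictor shows why accuracy misleads on imbalanced data while recall on the rare class exposes the failure. A minimal sketch:

```python
# 99% of labels are class 0; predicting 0 everywhere scores 99% accuracy
# yet has zero recall on the rare class 1.
y_true = [0] * 99 + [1]
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall_rare = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / y_true.count(1)
print(accuracy, recall_rare)  # 0.99 0.0
```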
| 473. |
If TP = 9, FP = 6, FN = 26, TN = 70, then the error rate will be |
| A. | 45 percentage |
| B. | 99 percentage |
| C. | 28 percentage |
| D. | 20 percentage |
| Answer» C. 28 percentage | |
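Error rate is (FP + FN) / (TP + FP + FN + TN) = 32/111 ≈ 28.8%, so 28% is the closest option:

```python
tp, fp, fn, tn = 9, 6, 26, 70
error_rate = (fp + fn) / (tp + fp + fn + tn)
print(f"{error_rate:.1%}")  # 28.8% -> closest option is 28 percent
```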
| 474. |
Which statement about outliers is true? |
| A. | outliers should be part of the training dataset but should not be present in the test data |
| B. | outliers should be identified and removed from a dataset |
| C. | the nature of the problem determines how outliers are used |
| D. | outliers should be part of the test dataset but should not be present in the training data |
| Answer» C. the nature of the problem determines how outliers are used | |
| 475. |
Let S1 and S2 be the set of support vectors and w1 and w2 be the learnt weight vectors for a linearly separable problem using hard and soft margin linear SVMs respectively. Which of the following are correct? |
| A. | s1 ⊂ s2 |
| B. | s1 may not be a subset of s2 |
| C. | w1 = w2 |
| D. | all of the above |
| Answer» A. s1 ⊂ s2 | |
| 476. |
Suppose we train a hard-margin linear SVM on n > 100 data points in R2, yielding a hyperplane with exactly 2 support vectors. If we add one more data point and retrain the classifier, what is the maximum possible number of support vectors for the new hyperplane (assuming the n + 1 points are linearly separable)? |
| A. | 2 |
| B. | 3 |
| C. | n |
| D. | n+1 |
| Answer» D. n+1 | |
| 477. |
Which of the following methods can not achieve zero training error on any linearly separable dataset? |
| A. | decision tree |
| B. | 15-nearest neighbors |
| C. | hard-margin svm |
| D. | perceptron |
| Answer» B. 15-nearest neighbors | |
| 478. |
Wrapper methods are hyper-parameter selection methods that |
| A. | should be used whenever possible because they are computationally efficient |
| B. | should be avoided unless there are no other options because they are always prone to overfitting. |
| C. | are useful mainly when the learning machines are “black boxes” |
| D. | should be avoided altogether. |
| Answer» C. are useful mainly when the learning machines are “black boxes” | |
| 479. |
Suppose you are using RBF kernel in SVM with high Gamma value. What does this signify? |
| A. | the model would consider even far away points from hyperplane for modeling |
| B. | the model would consider only the points close to the hyperplane for modeling |
| C. | the model would not be affected by distance of points from hyperplane for modeling |
| D. | none of the above |
| Answer» B. the model would consider only the points close to the hyperplane for modeling | |
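With K(x, x') = exp(−γ‖x − x'‖²), a large γ drives the kernel value toward zero for all but the nearest points, so only nearby points influence the fit. A quick numeric check:

```python
import numpy as np

def rbf(x, y, gamma):
    # RBF kernel: exp(-gamma * ||x - y||^2)
    return np.exp(-gamma * np.sum((x - y) ** 2))

x, near, far = np.array([0.0]), np.array([0.5]), np.array([3.0])
for gamma in (0.1, 10.0):
    print(gamma, rbf(x, near, gamma), rbf(x, far, gamma))
# At gamma = 10 the far point's similarity is ~1e-39: effectively ignored.
```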
| 480. |
Suppose your model is demonstrating high variance across different training sets. Which of the following is NOT a valid way to try to reduce the variance? |
| A. | increase the amount of training data in each training set |
| B. | improve the optimization algorithm being used for error minimization. |
| C. | decrease the model complexity |
| D. | reduce the noise in the training data |
| Answer» B. improve the optimization algorithm being used for error minimization. | |
| 481. |
You trained a binary classifier model which gives very high accuracy on the training data, but much lower accuracy on validation data. Which of the following is false? |
| A. | this is an instance of overfitting |
| B. | this is an instance of underfitting |
| C. | the training was not well regularized |
| D. | the training and testing examples are sampled from different distributions |
| Answer» B. this is an instance of underfitting | |
| 482. |
What is/are true about kernels in SVM? 1. Kernel functions map low-dimensional data to a high-dimensional space 2. A kernel is a similarity function |
| A. | 1 |
| B. | 2 |
| C. | 1 and 2 |
| D. | none of these |
| Answer» C. 1 and 2 | |
| 483. |
Suppose you have trained an SVM with a linear decision boundary, and after training you correctly infer that your SVM model is underfitting. Which of the following options would you be most likely to consider for the next iteration of the SVM? |
| A. | you want to increase your data points |
| B. | you want to decrease your data points |
| C. | you will try to calculate more variables |
| D. | you will try to reduce the features |
| Answer» C. you will try to calculate more variables | |
| 484. |
Which of the following can help to reduce overfitting in an SVM classifier? |
| A. | use of slack variables |
| B. | high-degree polynomial features |
| C. | normalizing the data |
| D. | setting a very low learning rate |
| Answer» A. use of slack variables | |
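In scikit-learn's SVC, the slack penalty is exposed as the C parameter: a smaller C tolerates more margin violations and regularizes the model. A hedged sketch (assuming scikit-learn is available; the dataset is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
for c in (100.0, 0.1):
    model = SVC(kernel="rbf", C=c).fit(X, y)
    # Training accuracy typically drops as C shrinks (softer margin, more slack).
    print(c, model.score(X, y))
```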
| 485. |
How can SVM be classified? |
| A. | it is a model trained using unsupervised learning. it can be used for classification and regression. |
| B. | it is a model trained using unsupervised learning. it can be used for classification but not for regression. |
| C. | it is a model trained using supervised learning. it can be used for classification and regression. |
| D. | it is a model trained using unsupervised learning. it can be used for classification but not for regression. |
| Answer» C. it is a model trained using supervised learning. it can be used for classification and regression. | |
| 486. |
Which of the following are real world applications of the SVM? |
| A. | text and hypertext categorization |
| B. | image classification |
| C. | clustering of news articles |
| D. | all of the above |
| Answer» D. all of the above | |
| 487. |
How does the bias-variance decomposition of a ridge regression estimator compare with that of ordinary least squares regression? |
| A. | ridge has larger bias, larger variance |
| B. | ridge has smaller bias, larger variance |
| C. | ridge has larger bias, smaller variance |
| D. | ridge has smaller bias, smaller variance |
| Answer» C. ridge has larger bias, smaller variance | |
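Ridge's penalty shrinks coefficients toward zero, trading a little bias for lower variance relative to OLS. A small illustrative sketch (synthetic data, assuming scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)
# Ridge's coefficient norm is smaller: more bias, less variance.
print(np.linalg.norm(ols.coef_), np.linalg.norm(ridge.coef_))
```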
| 488. |
The kernel trick |
| A. | can be applied to every classification algorithm |
| B. | is commonly used for dimensionality reduction |
| C. | changes ridge regression so we solve a d × d linear system instead of an n × n system, given n sample points with d features |
| D. | exploits the fact that in many learning algorithms, the weights can be written as a linear combination of input points |
| Answer» D. exploits the fact that in many learning algorithms, the weights can be written as a linear combination of input points | |
| 489. |
Which of the following evaluation metrics can not be applied in case of logistic regression output to compare with target? |
| A. | auc-roc |
| B. | accuracy |
| C. | logloss |
| D. | mean-squared-error |
| Answer» D. mean-squared-error | |
| 490. |
The firing rate of a neuron |
| A. | determines how strongly the dendrites of the neuron stimulate axons of neighboring neurons |
| B. | is more analogous to the output of a unit in a neural net than the output voltage of the neuron |
| C. | only changes very slowly, taking a period of several seconds to make large adjustments |
| D. | can sometimes exceed 30,000 action potentials per second |
| Answer» B. is more analogous to the output of a unit in a neural net than the output voltage of the neuron | |
| 491. |
What is the purpose of the Kernel Trick? |
| A. | to transform the data from nonlinearly separable to linearly separable |
| B. | to transform the problem from regression to classification |
| C. | to transform the problem from supervised to unsupervised learning. |
| D. | all of the above |
| Answer» A. to transform the data from nonlinearly separable to linearly separable | |
| 492. |
A perceptron adds up all the weighted inputs it receives, and if it exceeds a certain value, it outputs a 1, otherwise it just outputs a 0. |
| A. | true |
| B. | false |
| C. | sometimes – it can also output intermediate values as well |
| D. | can’t say |
| Answer» A. true | |
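The statement describes exactly the classic threshold unit, so it is true. A minimal sketch:

```python
def perceptron(inputs, weights, threshold):
    # Weighted sum followed by a hard step: 1 above the threshold, else 0.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

print(perceptron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.5))  # 0.4 + 0.3 = 0.7 > 0.5 -> 1
```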
| 493. |
What are support vectors? |
| A. | all the examples that have a non-zero weight αk in an svm |
| B. | the only examples necessary to compute f(x) in an svm. |
| C. | all of the above |
| D. | none of the above |
| Answer» C. all of the above | |
| 494. |
What do you mean by a hard margin? |
| A. | the svm allows very low error in classification |
| B. | the svm allows high amount of error in classification |
| C. | both 1 & 2 |
| D. | none of the above |
| Answer» A. the svm allows very low error in classification | |
| 495. |
What is the impact of high variance on the training set? |
| A. | overfitting |
| B. | underfitting |
| C. | both underfitting & overfitting |
| D. | depends upon the dataset |
| Answer» A. overfitting | |
| 496. |
Which of the following can only be used when training data are linearly separable? |
| A. | linear hard-margin svm |
| B. | linear logistic regression |
| C. | linear soft margin svm |
| D. | the centroid method |
| Answer» A. linear hard-margin svm | |
| 497. |
In multiclass classification, the number of classes must be |
| A. | less than two |
| B. | equals to two |
| C. | greater than two |
| D. | option 1 and option 2 |
| Answer» C. greater than two | |
| 498. |
Support Vector Machine is |
| A. | logical model |
| B. | probabilistic model |
| C. | geometric model |
| D. | none of the above |
| Answer» C. geometric model | |
| 499. |
Machine learning techniques differ from statistical techniques in that machine learning methods |
| A. | typically assume an underlying distribution for the data. |
| B. | are better able to deal with missing and noisy data. |
| C. | are not able to explain their behavior. |
| D. | have trouble with large-sized datasets. |
| Answer» B. are better able to deal with missing and noisy data. | |
| 500. |
This unsupervised clustering algorithm terminates when mean values computed for the current iteration of the algorithm are identical to the computed mean values for the previous iteration. |
| A. | agglomerative clustering |
| B. | conceptual clustering |
| C. | k-means clustering |
| D. | expectation maximization |
| Answer» C. k-means clustering | |
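The termination rule described is k-means': iterate until the recomputed centroids are identical to the previous iteration's. A minimal 1-D sketch with two clusters:

```python
def kmeans_1d(points, centroids):
    # Lloyd's algorithm: assign points to the nearest centroid, recompute means,
    # and stop when the means no longer change.
    while True:
        clusters = [[], []]
        for p in points:
            clusters[min((0, 1), key=lambda i: abs(p - centroids[i]))].append(p)
        new = [sum(c) / len(c) for c in clusters]  # assumes no cluster goes empty
        if new == centroids:  # identical means -> terminate
            return centroids
        centroids = new

print(kmeans_1d([1.0, 2.0, 8.0, 9.0], [0.0, 5.0]))  # [1.5, 8.5]
```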