Explore topic-wise MCQs in Computer Science Engineering (CSE).

This section includes 607 curated multiple-choice questions (MCQs) to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation.

201.

For the given weather data, what is the probability that players will play if the weather is sunny?

A. 0.5
B. 0.26
C. 0.73
D. 0.6
Answer» D. 0.6
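
The stem above is truncated in the source; it matches the classic weather/"play" exercise, which asks for P(Yes | Sunny). A minimal worked check in Python, assuming the standard 14-row frequency table (9 "yes" days overall, 5 sunny days, 3 of them "yes"):

# Bayes' rule on the assumed classic weather/play table
p_sunny_given_yes = 3 / 9   # sunny days among the 9 "play = yes" days
p_yes = 9 / 14              # "play = yes" days among all 14 days
p_sunny = 5 / 14            # sunny days among all 14 days
print(round(p_sunny_given_yes * p_yes / p_sunny, 2))  # 0.6
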
202.

100 people are at a party. The given data gives information about how many wear pink or not, and whether each guest is a man or not. Imagine a pink-wearing guest leaves. Was it a man?

A. true
B. false
Answer» A. true
203.

What do you mean by generalization error in terms of the SVM?

A. how far the hyperplane is from the support vectors
B. how accurately the SVM can predict outcomes for unseen data
C. the threshold amount of error in an SVM
Answer» B. how accurately the SVM can predict outcomes for unseen data
204.

Support vectors are the data points that lie closest to the decision surface (hyperplane).

A. true
B. false
Answer» A. true
205.

SVMs are less effective when:

A. the data is linearly separable
B. the data is clean and ready to use
C. the data is noisy and contains overlapping points
Answer» C. the data is noisy and contains overlapping points
206.

Suppose you are using an RBF kernel in SVM with a high Gamma value. What does this signify?

A. the model would consider even far away points from the hyperplane for modeling
B. the model would consider only the points close to the hyperplane for modeling
C. the model would not be affected by the distance of points from the hyperplane
D. none of the above
Answer» B. the model would consider only the points close to the hyperplane for modeling
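
As a quick illustration of the high-gamma behaviour, here is a minimal sketch (toy data, not from the source) using scikit-learn's SVC with an RBF kernel: raising gamma makes each training point's influence very local, so the training fit tightens and the model tends to overfit.

from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Toy two-class dataset, chosen only for illustration
X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

for gamma in (0.1, 100.0):
    clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)
    # Training accuracy typically climbs toward 1.0 as gamma grows
    print(gamma, clf.score(X, y))
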
207.

If I am using all the features of my dataset and I achieve 100% accuracy on my training set, but only about 70% on the validation set, what should I look out for?

A. underfitting
B. nothing, the model is perfect
C. overfitting
Answer» C. overfitting
208.

Suppose you have trained an SVM with a linear decision boundary. After training, you correctly infer that your SVM model is underfitting. Which of the following options would you be more likely to consider for the next iteration of the SVM?

A. you want to increase your data points
B. you want to decrease your data points
C. you will try to calculate more variables
D. you will try to reduce the features
Answer» C. you will try to calculate more variables
209.

Linear SVMs have no hyperparameters that need to be set by cross-validation.

A. true
B. false
Answer» B. false
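
The hyperparameter in question is the regularization strength C. A minimal sketch (illustrative values only) of tuning it by cross-validation with scikit-learn:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)  # built-in demo data

# 5-fold cross-validation over an arbitrary grid of C values
search = GridSearchCV(LinearSVC(max_iter=10_000), {"C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X, y)
print(search.best_params_)
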
210.

In a real problem, you should check to see if the SVM is separable and then include slack variables if it is not separable.

A. true
B. false
Answer» B. false
211.

______ produce sparse matrices of real numbers that can be fed into any machine learning model.

A. dictvectorizer
B. featurehasher
C. both a & b
D. none of the mentioned
Answer» C. both a & b
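
A minimal sketch of both classes at work (the toy feature dictionaries are made up for illustration); each produces a sparse numeric matrix:

from sklearn.feature_extraction import DictVectorizer, FeatureHasher

data = [{"f1": 2.0, "f2": 1.5}, {"f1": 0.5, "f3": 1.0}]

dv = DictVectorizer()
print(dv.fit_transform(data))      # sparse matrix with a learned vocabulary

fh = FeatureHasher(n_features=8)
print(fh.transform(data))          # sparse matrix via the hashing trick
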
212.

While using ______, all labels are turned into sequential numbers.

A. labelencoder class
B. labelbinarizer class
C. dictvectorizer
D. featurehasher
Answer» A. labelencoder class
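
A minimal sketch of the behaviour being tested (toy labels made up for illustration): LabelEncoder yields sequential integers, whereas LabelBinarizer would one-hot encode instead.

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
print(le.fit_transform(["cat", "dog", "cat", "bird"]))  # [1 2 1 0]
print(le.classes_)                                      # ['bird' 'cat' 'dog']
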
213.

______ provides some built-in datasets that can be used for testing purposes.

A. scikit-learn
B. classification
C. regression
D. none of the above
Answer» A. scikit-learn
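
For example, one of scikit-learn's built-in test datasets can be loaded in two lines:

from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
print(X.shape, y.shape)  # (150, 4) (150,)
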
214.

Which of the following are several models …

A. regression
B. classification
C. none of the above
Answer» C. none of the above
215.

While using feature selection on the data, does the number of features decrease?

A. no
B. yes
Answer» B. yes
216.

Can we extract knowledge without applying feature selection?

A. yes
B. no
Answer» A. yes
217.

Which of the following models include a backwards elimination feature selection routine?

A. mcv
B. mars
C. mcrs
D. all above
Answer» B. mars
218.

Which of the following is an example of a deterministic algorithm?

A. pca
B. k-means
C. none of the above
Answer» A. pca
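
A minimal sketch of why PCA is the deterministic choice here (toy random data for illustration): PCA gives the same projection on every run, while k-means can change with its random initialization.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X = np.random.RandomState(0).rand(100, 5)

# PCA: identical output on repeated runs over the same data
print(np.allclose(PCA(2).fit_transform(X), PCA(2).fit_transform(X)))  # True

# k-means: different seeds can give different clusterings
a = KMeans(n_clusters=3, n_init=1, random_state=1).fit_predict(X)
b = KMeans(n_clusters=3, n_init=1, random_state=2).fit_predict(X)
print((a == b).all())  # often False
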
219.

Overlearning is caused by an excessive ______.

A. capacity
B. regression
C. reinforcement
D. accuracy
Answer» A. capacity
220.

A supervised scenario is characterized by the concept of a ______.

A. programmer
B. teacher
C. author
D. farmer
Answer» B. teacher
221.

In reinforcement learning, if the feedback is negative, it is defined as ______.

A. penalty
B. overlearning
C. reward
D. none of above
Answer» A. penalty
222.

Techniques that involve the usage of both labeled and unlabeled data are called ______.

A. supervised
B. semi-supervised
C. unsupervised
D. none of the above
Answer» B. semi-supervised
223.

It is necessary to allow the model to develop a generalization ability and avoid a common problem called ______.

A. overfitting
B. overlearning
C. classification
D. regression
Answer» A. overfitting
224.

What does learning exactly mean?

A. robots are programmed so that they can perform the task based on data they gather from sensors.
B. a set of data is used to discover the potentially predictive relationship.
C. learning is the ability to change according to external stimuli and remembering most of all previous experiences.
D. it is a set of data used to discover the potentially predictive relationship.
Answer» C. learning is the ability to change according to external stimuli and remembering most of all previous experiences.
225.

Even if there are no actual supervisors, ______ learning is also based on feedback provided by the environment.

A. supervised
B. reinforcement
C. unsupervised
D. none of the above
Answer» B. reinforcement
226.

Which of the following are two techniques of Machine Learning?

A. genetic programming and inductive learning
B. speech recognition and regression
C. both a & b
D. none of the mentioned
Answer» A. genetic programming and inductive learning
227.

What is Model Selection in Machine Learning?

A. the process of selecting models among different mathematical models, which are used to describe the same data set
B. when a statistical model describes random error or noise instead of underlying relationship
C. find interesting directions in data and find novel observations/ database cleaning
D. all above
Answer» A. the process of selecting models among different mathematical models, which are used to describe the same data set
228.

Which of the following is not Machine Learning?

A. artificial intelligence
B. rule based inference
C. both a & b
D. none of the mentioned
Answer» B. rule based inference
229.

What is the standard approach to supervised learning?

A. split the set of examples into the training set and the test set
B. group the set of examples into the training set and the test set
C. a set of observed instances tries to induce a general rule
D. learns programs from data
Answer» A. split the set of examples into the training set and the test set
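
A minimal sketch of that standard split with scikit-learn (the 75/25 ratio is an arbitrary illustrative choice):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
print(X_train.shape, X_test.shape)  # (112, 4) (38, 4)
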
230.

What are the different Algorithm techniques in Machine Learning?

A. supervised learning and semi-supervised learning
B. unsupervised learning and transduction
C. both a & b
D. none of the mentioned
Answer» C. both a & b
231.

Which of the following is a characteristic of the best machine learning method?

A. fast
B. accuracy
C. scalable
D. all above
Answer» D. all above
232.

Which of the following functions provides unsupervised prediction?

A. cl_forecast
B. cl_nowcast
C. cl_precast
D. none of the mentioned
Answer» D. none of the mentioned
233.

The linear SVM classifier works by drawing a straight line between two classes.

A. true
B. false
Answer» A. true
234.

SVM is a ______ learning algorithm.

A. supervised
B. unsupervised
C. both
D. none
Answer» A. supervised
235.

SVM is a ______ algorithm.

A. classification
B. clustering
C. regression
D. all
Answer» A. classification
236.

Solving a non-linear separation problem with a hard-margin kernelized SVM (Gaussian RBF kernel) might lead to overfitting.

A. true
B. false
Answer» A. true
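
A near-hard-margin SVM can be imitated in scikit-learn by making C very large; on noisy toy data the train/test gap illustrates the overfitting risk (all values here are arbitrary illustrative choices):

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.35, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", gamma=5.0, C=1e9).fit(X_tr, y_tr)  # huge C ~ hard margin
print(clf.score(X_tr, y_tr), clf.score(X_te, y_te))  # train typically well above test
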
237.

Any linear combination of the components of a multivariate Gaussian is a univariate Gaussian.

A. true
B. false
Answer» A. true
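
This is the standard affine property of the multivariate normal: if X ~ N(mu, Sigma), then for any fixed weight vector a, the linear combination a^T X is univariate Gaussian with mean a^T mu and variance a^T Sigma a.
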
238.

SVMs directly give us the posterior probabilities P(y = 1 | x) and P(y = −1 | x).

A. true
B. false
Answer» B. false
239.

A Gaussian distribution, when plotted, gives a bell-shaped curve which is symmetric about the ______ of the feature values.

A. mean
B. variance
C. discrete
D. random
Answer» A. mean
240.

The binarize parameter in scikit-learn's BernoulliNB sets the threshold for binarizing sample features.

A. true
B. false
Answer» A. true
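
A minimal sketch of the binarize parameter (toy data made up for illustration): feature values above the threshold are treated as 1, the rest as 0, before the Bernoulli model is fit.

import numpy as np
from sklearn.naive_bayes import BernoulliNB

X = np.array([[0.2, 0.9], [0.8, 0.1], [0.7, 0.6], [0.1, 0.4]])
y = np.array([0, 1, 1, 0])

clf = BernoulliNB(binarize=0.5).fit(X, y)  # values > 0.5 count as 1
print(clf.predict([[0.9, 0.2]]))           # [1]
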
241.

The Gaussian Naïve Bayes classifier assumes a ______ distribution.

A. continuous
B. discrete
C. binary
Answer» A. continuous
242.

The Multinomial Naïve Bayes classifier assumes a ______ distribution.

A. continuous
B. discrete
C. binary
Answer» B. discrete
243.

The Bernoulli Naïve Bayes classifier assumes a ______ distribution.

A. continuous
B. discrete
C. binary
Answer» C. binary
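
The three questions above pair each Naïve Bayes variant with its feature distribution. A minimal sketch (toy arrays made up for illustration):

import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

y = np.array([0, 1, 0, 1])
X_cont = np.random.RandomState(0).randn(4, 3)                      # continuous
X_counts = np.array([[1, 2, 0], [0, 3, 1], [2, 0, 0], [0, 1, 4]])  # discrete counts
X_bin = (X_counts > 0).astype(int)                                 # binary

print(GaussianNB().fit(X_cont, y).predict(X_cont[:1]))
print(MultinomialNB().fit(X_counts, y).predict(X_counts[:1]))
print(BernoulliNB().fit(X_bin, y).predict(X_bin[:1]))
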
244.

Bayes' theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event.

A. true
B. false
Answer» A. true
245.

Conditional probability is a measure of the probability of an event given that another event has occurred.

A. true
B. false
Answer» A. true
246.

In Bayes' theorem, P(H) is the ______ probability.

A. posterior
B. prior
Answer» B. prior
247.

In Bayes' theorem, P(H|E) is the ______ probability.

A. posterior
B. prior
Answer» A. posterior
248.

Bayes' theorem is given by P(H|E) = P(E|H) · P(H) / P(E), where:
1. P(H) is the probability of hypothesis H being true.
2. P(E) is the probability of the evidence (regardless of the hypothesis).
3. P(E|H) is the probability of the evidence given that the hypothesis is true.
4. P(H|E) is the probability of the hypothesis given that the evidence is there.

A. true
B. false
Answer» A. true
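
A quick numeric check of the theorem (all probabilities made up for illustration):

p_h = 0.01         # P(H): prior probability of the hypothesis
p_e_given_h = 0.9  # P(E|H): probability of the evidence if H is true
p_e = 0.05         # P(E): total probability of the evidence

print(round(p_e_given_h * p_h / p_e, 2))  # P(H|E) = 0.18, the posterior
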
249.

Features being classified are ______ of each other in the Naïve Bayes classifier.

A. independent
B. dependent
C. partial dependent
D. none
Answer» A. independent
250.

Features being classified are independent of each other in the Naïve Bayes classifier.

A. false
B. true
Answer» B. true