This section includes 375 curated multiple-choice questions (MCQs) to sharpen your Computer Science Engineering (CSE) knowledge and support exam preparation.
| 151. |
From the following given tree, what is the computed codeword for ‘c’? |
| A. | 111 |
| B. | 101 |
| C. | 110 |
| D. | 011 |
| Answer» D. 011 | |
| 152. |
From the following given tree, what is the code word for the character ‘a’? |
| A. | 011 |
| B. | 010 |
| C. | 100 |
| D. | 101 |
| Answer» B. 010 | |
| 153. |
The code length does not depend on the frequency of occurrence of characters. |
| A. | true |
| B. | false |
| Answer» B. false | |
| 154. |
Which bit is reserved as a parity bit in an ASCII set? |
| A. | first |
| B. | seventh |
| C. | eighth |
| D. | tenth |
| Answer» C. eighth | |
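For context on the answer above: standard ASCII uses seven data bits, and the eighth (most significant) bit of a byte is commonly reserved for parity. The following is a minimal Python sketch (illustrative, not part of the original question set) that sets an even-parity bit in that eighth position.

```python
def add_even_parity(ch: str) -> int:
    """Return an 8-bit value: the 7-bit ASCII code plus an even-parity bit
    placed in the eighth (most significant) bit position."""
    code = ord(ch)
    assert code < 128, "input must be 7-bit ASCII"
    ones = bin(code).count("1")   # count of 1s in the 7 data bits
    parity = ones % 2             # 1 if that count is odd, keeping the total even
    return (parity << 7) | code   # the eighth bit carries the parity

# 'C' = 0b1000011 has three 1s, so the parity bit is set: 11000011
print(format(add_even_parity("C"), "08b"))
```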
| 155. |
How many bits are needed for standard encoding if the size of the character set is X? |
| A. | log x |
| B. | x+1 |
| C. | 2x |
| D. | x^2 |
| Answer» A. log x | |
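As a worked check of the answer above, a fixed-length (standard) encoding of X distinct characters needs ⌈log2 X⌉ bits. A minimal Python sketch (illustrative, not from the source):

```python
import math

def fixed_length_bits(x: int) -> int:
    """Bits needed to give each of x characters its own fixed-length
    code word: the ceiling of log2(x)."""
    return math.ceil(math.log2(x))

print(fixed_length_bits(128))  # 7 bits cover the full ASCII set
print(fixed_length_bits(26))   # 5 bits cover the English alphabet
```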
| 156. |
How many printable characters does the ASCII character set consist of? |
| A. | 120 |
| B. | 128 |
| C. | 100 |
| D. | 98 |
| Answer» C. 100 | |
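A quick check in Python (illustrative, not from the source): codes 32 through 126 (space through '~') are the printable part of ASCII, which is strictly 95 characters; question banks usually round this to "about 100", as in the answer above.

```python
# Codes 32..126 are the printable ASCII characters (space through '~').
printable = [chr(c) for c in range(32, 127)]
print(len(printable))   # 95, commonly quoted as "about 100"
```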
| 157. |
Which of the following algorithms is the best approach for solving Huffman codes? |
| A. | exhaustive search |
| B. | greedy algorithm |
| C. | brute force algorithm |
| D. | divide and conquer algorithm |
| Answer» B. greedy algorithm | |
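The greedy construction behind the answer above can be sketched in a few lines of Python with a min-heap keyed on frequency (an illustrative sketch, not taken from the source; the example frequencies are made up). Note how the more frequent symbols end up with shorter code words, which is also why the statement in question 153 is false.

```python
import heapq

def huffman_code(freq: dict) -> dict:
    """Greedy Huffman construction: repeatedly merge the two
    least-frequent subtrees until a single tree remains."""
    # heap entries: (total frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}          # left branch gets 0
        merged.update({s: "1" + c for s, c in right.items()})   # right branch gets 1
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```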
| 158. |
Which are uniquely decodable codes? |
| A. | fixed length codes |
| B. | variable length codes |
| C. | fixed & variable length codes |
| D. | none of the mentioned |
| Answer» B. variable length codes | |
| 159. |
A rate distortion function is a |
| A. | concave function |
| B. | convex function |
| C. | increasing function |
| D. | none of the mentioned |
| Answer» B. convex function | |
| 160. |
Which achieves greater compression? |
| A. | lossless coding |
| B. | lossy coding |
| C. | lossless & lossy coding |
| D. | none of the mentioned |
| Answer» B. lossy coding | |
| 161. |
Which coding method uses entropy coding? |
| A. | lossless coding |
| B. | lossy coding |
| C. | lossless & lossy coding |
| D. | none of the mentioned |
| Answer» A. lossless coding | |
| 162. |
Which reduces the size of the data? |
| A. | source coding |
| B. | channel coding |
| C. | source & channel coding |
| D. | none of the mentioned |
| Answer» A. source coding | |
| 163. |
ASCII code is a |
| A. | fixed length code |
| B. | variable length code |
| C. | fixed & variable length code |
| D. | none of the mentioned |
| Answer» A. fixed length code | |
| 164. |
Mutual information should be |
| A. | positive |
| B. | negative |
| C. | positive & negative |
| D. | none of the mentioned |
| Answer» A. positive | |
| 165. |
In digital image coding which image must be smaller in size? |
| A. | input image |
| B. | output image |
| C. | input & output image |
| D. | none of the mentioned |
| Answer» B. output image | |
| 166. |
While recovering the signal, which component gets attenuated more? |
| A. | low frequency component |
| B. | high frequency component |
| C. | low & high frequency component |
| D. | none of the mentioned |
| Answer» B. high frequency component | |
| 167. |
Lempel-Ziv algorithm is |
| A. | variable to fixed length algorithm |
| B. | fixed to variable length algorithm |
| C. | fixed to fixed length algorithm |
| D. | variable to variable length algorithm |
| Answer» A. variable to fixed length algorithm | |
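A minimal LZ78-style sketch in Python (illustrative, not from the source) shows the variable-to-fixed mapping: phrases of varying length from the input are each emitted as a fixed-size (dictionary index, next symbol) pair.

```python
def lz78_parse(text: str):
    """LZ78-style parsing: variable-length input phrases map to
    fixed-size (dictionary index, next symbol) output pairs."""
    dictionary = {"": 0}              # phrase -> index
    phrase, output = "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch              # keep extending the current phrase
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                        # leftover phrase is already in the dictionary
        output.append((dictionary[phrase], ""))
    return output

print(lz78_parse("abababab"))   # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (2, '')]
```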
| 168. |
Which is the more efficient method? |
| A. | encoding each symbol of a block |
| B. | encoding block of symbols |
| C. | encoding each symbol of a block & encoding block of symbols |
| D. | none of the mentioned |
| Answer» B. encoding block of symbols | |
| 169. |
Entropy of a random variable is |
| A. | 0 |
| B. | 1 |
| C. | infinite |
| D. | cannot be determined |
| Answer» D. cannot be determined | |
| 170. |
The self-information of a random variable is |
| A. | 0 |
| B. | 1 |
| C. | infinite |
| D. | cannot be determined |
| Answer» D. cannot be determined | |
| 171. |
When X and Y are statistically independent, the mutual information I(X;Y) is |
| A. | 1 |
| B. | 0 |
| C. | ln 2 |
| D. | cannot be determined |
| Answer» B. 0 | |
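A small numeric check of the answer above (an illustrative Python sketch with a made-up joint distribution): when the joint distribution factorises, I(X;Y) comes out as 0. Computing with base-2 logarithms gives the result in bits; switching to natural logarithms would give nats (compare question 172).

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) p(y))),
    in bits because the logarithm base is 2."""
    px = {x: sum(row.values()) for x, row in joint.items()}
    py = {}
    for row in joint.values():
        for y, p in row.items():
            py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for x, row in joint.items() for y, p in row.items() if p > 0)

# Independent X and Y: the joint factorises, so I(X;Y) = 0
print(mutual_information({0: {0: 0.25, 1: 0.25}, 1: {0: 0.25, 1: 0.25}}))  # 0.0
# Fully dependent (Y = X): I(X;Y) equals H(X) = 1 bit
print(mutual_information({0: {0: 0.5, 1: 0.0}, 1: {0: 0.0, 1: 0.5}}))      # 1.0
```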
| 172. |
When the base of the logarithm is 2, then the unit of measure of information is |
| A. | bits |
| B. | bytes |
| C. | nats |
| D. | none of the mentioned |
| Answer» A. bits | |
| 173. |
The method of converting a word to a stream of bits is called |
| A. | binary coding |
| B. | source coding |
| C. | bit coding |
| D. | cipher coding |
| Answer» B. source coding | |
| 174. |
The event with minimum probability has least number of bits. |
| A. | true |
| B. | false |
| Answer» B. false | |
| 175. |
When probability of error during transmission is 0.5, it indicates that |
| A. | channel is very noisy |
| B. | no information is received |
| C. | channel is very noisy & no information is received |
| D. | none of the mentioned |
| Answer» C. channel is very noisy & no information is received | |
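A worked check of the answer above (illustrative Python sketch): for a binary symmetric channel with crossover probability p, the capacity is C = 1 − H(p); at p = 0.5 the capacity is zero, so the channel is both maximally noisy and conveys no information.

```python
import math

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits per use."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy H(p)
    return 1.0 - h

for p in (0.0, 0.1, 0.5):
    print(p, round(bsc_capacity(p), 4))
# p = 0.5 gives capacity 0: no information gets through.
```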
| 176. |
Which is the main system consideration? |
| A. | probability of error |
| B. | system complexity |
| C. | random fading channel |
| D. | all of the mentioned |
| Answer» D. all of the mentioned | |
| 177. |
The unit of average mutual information is |
| A. | bits |
| B. | bytes |
| C. | bits per symbol |
| D. | bytes per symbol |
| Answer» A. bits | |
| 178. |
Self information should be |
| A. | positive |
| B. | negative |
| C. | positive & negative |
| D. | none of the mentioned |
| Answer» A. positive | |
| 179. |
Which is easier to implement and is preferred? |
| A. | coherent system |
| B. | non coherent system |
| C. | coherent & non coherent system |
| D. | none of the mentioned |
| Answer» B. non coherent system | |
| 180. |
Coherent PSK and non-coherent orthogonal FSK have a difference of ______ in PB. |
| A. | 1dB |
| B. | 3dB |
| C. | 4dB |
| D. | 6dB |
| Answer» C. 4dB | |
| 181. |
DPSK needs ______ Eb/N0 than BPSK. |
| A. | 1dB more |
| B. | 1dB less |
| C. | 3dB more |
| D. | 3dB less |
| Answer» A. 1dB more | |
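The two gaps quoted above can be checked numerically from the standard bit-error expressions (a sketch assuming scipy is available; the target error rate of 1e-5 is an arbitrary illustration): coherent BPSK has PB = Q(sqrt(2 Eb/N0)), DPSK has PB = ½ exp(−Eb/N0), and non-coherent orthogonal FSK has PB = ½ exp(−Eb/2N0).

```python
import math
from scipy.stats import norm

PB = 1e-5   # target bit error probability (arbitrary illustration)

# Coherent BPSK: PB = Q(sqrt(2 Eb/N0))  =>  Eb/N0 = Q^-1(PB)^2 / 2
bpsk = norm.isf(PB) ** 2 / 2
# DPSK: PB = 0.5 exp(-Eb/N0)            =>  Eb/N0 = ln(1 / (2 PB))
dpsk = math.log(1 / (2 * PB))
# Non-coherent orthogonal FSK: PB = 0.5 exp(-Eb / (2 N0))
ncfsk = 2 * math.log(1 / (2 * PB))

to_db = lambda x: 10 * math.log10(x)
print("coherent BPSK     %.1f dB" % to_db(bpsk))    # ~9.6 dB
print("DPSK              %.1f dB" % to_db(dpsk))    # ~1 dB above BPSK
print("non-coherent FSK  %.1f dB" % to_db(ncfsk))   # ~4 dB above coherent PSK
```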
| 182. |
A Gaussian-distributed input to a nonlinear envelope detector yields a |
| A. | rayleigh distribution |
| B. | normal distribution |
| C. | poisson distribution |
| D. | binary distribution |
| Answer» A. rayleigh distribution | |
| 183. |
For AWGN, the noise variance is |
| A. | n0 |
| B. | n0/2 |
| C. | 2n0 |
| D. | n0/4 |
| Answer» B. n0/2 | |
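The two answers above can be illustrated together (a numpy sketch with a made-up N0 value): generate in-phase and quadrature Gaussian noise, each with the AWGN per-dimension variance N0/2, and the envelope that a nonlinear envelope detector would see is Rayleigh distributed.

```python
import numpy as np

rng = np.random.default_rng(0)
N0 = 2.0                               # made-up noise spectral density
sigma = np.sqrt(N0 / 2)                # AWGN variance per dimension is N0/2

n_i = rng.normal(0, sigma, 100_000)    # in-phase Gaussian noise
n_q = rng.normal(0, sigma, 100_000)    # quadrature Gaussian noise
envelope = np.hypot(n_i, n_q)          # what an envelope detector outputs

# A Rayleigh distribution with parameter sigma has mean sigma * sqrt(pi/2)
print(envelope.mean(), sigma * np.sqrt(np.pi / 2))
```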
| 184. |
The disadvantage of preset equalizer is that |
| A. | it requires an initial training pulse |
| B. | time varying channel degrades the performance of the system |
| C. | all of the mentioned |
| D. | none of the mentioned |
| Answer» C. all of the mentioned | |
| 185. |
The preamble is used |
| A. | to detect the start of transmission |
| B. | to set automatic gain control |
| C. | to align internal clocks |
| D. | all of the mentioned |
| Answer» D. all of the mentioned | |
| 186. |
The equalization method that tracks a slowly time-varying channel response is |
| A. | preset equalization |
| B. | adaptive equalization |
| C. | variable equalization |
| D. | none of the mentioned |
| Answer» B. adaptive equalization | |
| 187. |
If the filter’s tap weights remain fixed during transmission of data, the equalization is called |
| A. | preset equalization |
| B. | adaptive equalization |
| C. | fixed equalization |
| D. | none of the mentioned |
| Answer» A. preset equalization | |
| 188. |
The over-determined set of equations can be solved using |
| A. | zero forcing |
| B. | minimum mean square error |
| C. | zero forcing & minimum mean square error |
| D. | none of the mentioned |
| Answer» B. minimum mean square error | |
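A zero-forcing solution needs exactly as many equations as unknowns; with more equations than equalizer taps the system is over-determined and a minimum mean square error (least-squares) fit is used instead. A small numpy sketch with made-up channel samples:

```python
import numpy as np

# Over-determined system: 6 equations, 3 equalizer taps (made-up values)
A = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0],
              [0.0, 0.1, 0.3],
              [0.0, 0.0, 0.1],
              [0.2, 0.1, 0.0]])
d = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # desired response

# No exact solution exists, so minimise the mean square error ||A w - d||^2
w, residual, rank, _ = np.linalg.lstsq(A, d, rcond=None)
print("tap weights:", np.round(w, 3))
print("residual squared error:", residual)
```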
| 189. |
As the eye opens, ISI |
| A. | increases |
| B. | decreases |
| C. | remains the same |
| D. | none of the mentioned |
| Answer» B. decreases | |
| 190. |
The range of amplitude difference gives the value of |
| A. | width |
| B. | distortion |
| C. | timing jitter |
| D. | noise margin |
| Answer» C. timing jitter | |
| 191. |
The source encoding procedure performs |
| A. | sampling |
| B. | quantization |
| C. | compression |
| D. | all of the mentioned |
| Answer» D. all of the mentioned | |
| 192. |
The primary advantage of this method is |
| A. | redistribution of spectral density |
| B. | to favor low frequencies |
| C. | redistribution of spectral density & to favor low frequencies |
| D. | none of the mentioned |
| Answer» C. redistribution of spectral density & to favor low frequencies | |
| 193. |
The index value n in a transversal filter can be used as |
| A. | time offset |
| B. | filter coefficient identifier |
| C. | time offset & filter coefficient identifier |
| D. | none of the mentioned |
| Answer» C. time offset & filter coefficient identifier | |
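A minimal sketch of a transversal (tapped-delay-line) filter in Python: the output is y[k] = Σ_n c[n] x[k − n], so the same index n names the tap coefficient and the time offset of the delayed input sample it multiplies (the tap values below are made up).

```python
def transversal_filter(x, c):
    """y[k] = sum over n of c[n] * x[k - n]: n identifies the filter
    coefficient and the time offset of the input sample it weights."""
    y = []
    for k in range(len(x)):
        acc = 0.0
        for n, coeff in enumerate(c):   # n as coefficient identifier
            if k - n >= 0:              # ... and as time offset into x
                acc += coeff * x[k - n]
        y.append(acc)
    return y

# Feeding an impulse reproduces the tap weights as the output
print(transversal_filter([1, 0, 0, 0, 0], [0.5, 0.3, 0.2]))
```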
| 194. |
In the polybinary signalling method, the present bit of the binary sequence is algebraically added to ______ previous bits. |
| A. | j |
| B. | 2j |
| C. | j+2 |
| D. | j-2 |
| Answer» D. j-2 | |
| 195. |
The method which has greater bandwidth efficiency is called as |
| A. | duobinary signalling |
| B. | polybinary signalling |
| C. | correlative coding |
| D. | all of the mentioned |
| Answer» B. polybinary signalling | |
| 196. |
The duobinary filter He(f) is called a |
| A. | sine filter |
| B. | cosine filter |
| C. | raised cosine filter |
| D. | none of the mentioned |
| Answer» B. cosine filter | |
| 197. |
The method by which error propagation in duobinary signalling can be avoided is |
| A. | filtering |
| B. | precoding |
| C. | postcoding |
| D. | none of the mentioned |
| Answer» B. precoding | |
| 198. |
In the precoding technique, the binary sequence is ______ with the previous precoded bit. |
| A. | and-ed |
| B. | or-ed |
| C. | exor-ed |
| D. | added |
| Answer» C. exor-ed | |
| 199. |
In the duobinary signalling method, for M-ary transmission, the number of output levels obtained is |
| A. | 2m |
| B. | 2m+1 |
| C. | 2m-1 |
| D. | m^2 |
| Answer» C. 2m-1 | |
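The last few answers can be tied together with a short sketch of binary (M = 2) duobinary signalling in the unipolar {0, 1} formulation (an illustration, not the source's derivation): each bit is first XOR-ed with the previous precoded bit, the duobinary rule y_k = b_k + b_(k−1) then produces 2M − 1 = 3 output levels, and because each received level decides its own bit, errors do not propagate.

```python
def duobinary_transmit(bits):
    """Precode with XOR, then apply the duobinary rule y_k = b_k + b_(k-1)."""
    precoded, prev = [], 0             # initial reference bit assumed to be 0
    for bit in bits:
        prev = bit ^ prev              # precoding: XOR with previous precoded bit
        precoded.append(prev)
    return [precoded[k] + (precoded[k - 1] if k else 0)
            for k in range(len(precoded))]

def duobinary_receive(levels):
    """Each bit is decided from its own level (1 -> 1, 0 or 2 -> 0),
    so a single level error cannot propagate."""
    return [1 if y == 1 else 0 for y in levels]

data = [0, 0, 1, 0, 1, 1, 0]
levels = duobinary_transmit(data)
print(sorted(set(levels)))                  # [0, 1, 2]: 2M - 1 = 3 levels for M = 2
print(duobinary_receive(levels) == data)    # True
```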
| 200. |
Which of the following is true for a Gaussian filter? |
| A. | large bandwidth |
| B. | minimum isi |
| C. | high overshoot |
| D. | sharp cut off |
| Answer» D. sharp cut off | |