Digital Image Processing Question Answer Bank




1. Define Image
An image may be defined as a two-dimensional light-intensity function f(x, y), where x and y denote spatial coordinates and the amplitude (value) of f at any point (x, y) is called the intensity, gray level, or brightness of the image at that point.

2. What is Dynamic Range?
The range of values spanned by the gray scale is called the dynamic range of an image. An image has high contrast if its dynamic range is high, and a dull, washed-out gray look if its dynamic range is low.

3. Define Brightness
Brightness is the perceived luminance of an object, which depends on its surround. Two objects with different surroundings can have identical luminance but different brightness.

5. What is meant by Gray level?
Gray level refers to a scalar measure of intensity that ranges from black, through grays, to white.

6. What is meant by a Color model?
A color model is a specification of a 3D coordinate system and a subspace within that system in which each color is represented by a single point.

7. List the hardware-oriented color models
1. RGB model
2. CMY model
3. YIQ model
4. HSI model

8. What are Hue and Saturation?
Hue is a color attribute that describes a pure color, while saturation gives a measure of the degree to which a pure color is diluted by white light.

9. List the applications of color models
1. RGB model: used for color monitors and color video cameras
2. CMY model: used for color printing
3. HSI model: used for color image processing
4. YIQ model: used for color picture transmission

10. What is Chromatic Adaptation?
The hue of a perceived color depends on the adaptation of the viewer. For example, the American flag will not immediately appear red, white, and blue if the viewer has been exposed to high-intensity red light before viewing it.


e = E{(f − f̂)²}

where E{·} is the expected value of the argument.
• It is assumed that the noise and the image are uncorrelated, that one or the other has zero mean, and that the gray levels in the estimate are a linear function of the levels in the degraded image.

• Based on these conditions, the minimum of the error function is given in the frequency domain by the expression

F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + Sη(u,v)/Sf(u,v) ) ] G(u,v)

where H(u,v) is the degradation function, G(u,v) is the transform of the degraded image, and Sη(u,v) and Sf(u,v) are the power spectra of the noise and of the undegraded image.

• This result is known as the Wiener filter, after N. Wiener, who first proposed the concept in 1942. The filter, which consists of the term inside the brackets, is also commonly referred to as the minimum mean square error filter or the least square error filter.

• We include references at the end of the chapter to sources containing detailed derivations of the Wiener filter.

• The restored image in the spatial domain is given by the inverse Fourier transform of the frequency-domain estimate F̂(u,v).

• If the noise is zero, the noise power spectrum vanishes and the Wiener filter reduces to the inverse filter.

• However, the power spectrum of the undegraded image seldom is known. An approach used frequently in this case is to approximate the ratio of the noise power spectrum to the image power spectrum by a constant K, giving

F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / ( |H(u,v)|² + K ) ] G(u,v)

where K is a specified constant.

• The example illustrates the advantage of Wiener filtering over direct inverse filtering; the value of K was chosen interactively to yield the best visual results.

o It shows the full inverse-filtered result and, similarly, the radially limited inverse-filtered result.

• These images are duplicated here for convenience in making comparisons.
• As expected, the inverse filter produced an unusable image; the noise in the inverse-filtered result dominates.
• The Wiener filter result is by no means perfect, but it does give us a hint as to the image content.
• The noise is still quite visible, but the text can be seen through a “curtain” of noise.
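The constant-K Wiener filter described above can be sketched in NumPy. The degradation function H, the constant K, and the test image below are illustrative assumptions, not values from the text:

```python
import numpy as np

def wiener_filter(g, H, K):
    """Parametric Wiener restoration of a degraded image g.

    Computes F_hat(u,v) = [conj(H) / (|H|^2 + K)] * G(u,v), which is
    algebraically the same as (1/H) * |H|^2 / (|H|^2 + K) * G(u,v),
    then returns the inverse Fourier transform (the spatial estimate).
    """
    G = np.fft.fft2(g)                                # transform of degraded image
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + K)) * G   # frequency-domain estimate
    return np.real(np.fft.ifft2(F_hat))               # restored spatial image

# Sanity check: with H = 1 and K = 0 the Wiener filter reduces to the
# inverse filter, so the "degraded" image is returned unchanged.
img = np.arange(16.0).reshape(4, 4)   # illustrative 4x4 test image
H = np.ones((4, 4))                   # identity degradation (assumption)
restored = wiener_filter(img, H, 0.0)
```

Increasing K suppresses frequencies where the degradation H is weak, trading sharpness for noise attenuation, which is what the interactive choice of K in the text controls.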

1. Explain Histogram processing

• The histogram of a digital image with gray levels in the range [0, L−1] is the discrete function p(rk) = nk/n, where rk is the kth gray level, nk is the number of pixels with that gray level, n is the total number of pixels in the image, and k = 0, 1, 2, …, L−1.

• p(rk) gives an estimate of the probability of occurrence of gray level rk. The figure shows the histograms of four basic types of images.

Figure: Histogram corresponding to four basic image types
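The definition p(rk) = nk/n can be computed directly; the tiny image below is a made-up example:

```python
import numpy as np

def normalized_histogram(image, levels):
    """Return p(r_k) = n_k / n for gray levels k = 0 .. levels-1."""
    n_k = np.bincount(image.ravel(), minlength=levels)  # n_k: pixels at level k
    return n_k / image.size                             # divide by n, total pixels

# 2x3 image with gray levels in [0, 3]; n = 6 pixels.
img = np.array([[0, 1, 1],
                [2, 2, 2]], dtype=np.uint8)
p = normalized_histogram(img, levels=4)   # [1/6, 2/6, 3/6, 0]
```

Because the counts are divided by n, the values of p sum to 1, matching their interpretation as probability estimates.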


Histogram Equalization
• Let the variable r represent the gray levels in the image to be enhanced. The pixel values are continuous quantities normalized to lie in the interval [0, 1], with r = 0 representing black and r = 1 representing white.

• Consider a transformation of the form

s = T(r) …………………………………(1)

which produces a level s for every pixel value r in the original image and satisfies the following conditions:

o T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1, and
o 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.

o Condition 1 preserves the order from black to white in the gray scale.
o Condition 2 guarantees a mapping that is consistent with the allowed range of pixel values.

• The inverse transformation is

r = T⁻¹(s), 0 ≤ s ≤ 1 ………………………..(2)

• The probability density function of the transformed gray levels is

ps(s) = [pr(r) dr/ds] evaluated at r = T⁻¹(s) …………………….(3)

• Consider the transformation function

s = T(r) = ∫0^r pr(w) dw, 0 ≤ r ≤ 1 …………………….(4)

where w is a dummy variable of integration.

From Eqn (4), the derivative of s with respect to r is

ds/dr = pr(r)

Substituting dr/ds = 1/pr(r) into Eqn (3) yields

ps(s) = 1, 0 ≤ s ≤ 1

that is, the transformed gray levels have a uniform density, which is the basis of histogram equalization.
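In the discrete case the integral in Eqn (4) becomes a cumulative sum, sk = (L−1) Σj≤k p(rj). A minimal sketch, using an assumed 2-bit test image:

```python
import numpy as np

def histogram_equalize(image, levels):
    """Discrete histogram equalization: s_k = (levels-1) * sum_{j<=k} p(r_j)."""
    p = np.bincount(image.ravel(), minlength=levels) / image.size  # p(r_k)
    T = np.round((levels - 1) * np.cumsum(p)).astype(image.dtype)  # transformation T(r_k)
    return T[image]                                                # map every pixel through T

img = np.array([[0, 0, 1, 3]], dtype=np.uint8)  # assumed 2-bit image (L = 4)
out = histogram_equalize(img, levels=4)
```

The cumulative sum is monotonically increasing and bounded by 1, so the discrete T(r) satisfies both conditions stated above; rounding makes the output only approximately uniform.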


• For facsimile images, p(w/w) and p(w/b) are generally significantly higher than
p(b/w) and p(b/b)

• The Markov model is represented by the state diagram.
• The entropy computed using a probability model under the iid assumption was significantly higher than the entropy computed using the Markov model.
• Let us try to interpret what the model says about the structure of the data.
• The highly skewed nature of the probabilities p(b/w) and p(w/w), and to a lesser extent p(w/b) and p(b/b), says that once a pixel takes on a particular color, it is highly likely that the following pixels will also be of the same color.
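The entropy gap between the iid assumption and the Markov model can be checked numerically. The transition probabilities below are illustrative assumptions for a facsimile-like source, not values from the text:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Assumed skewed transition probabilities (hypothetical values).
p_w_w = 0.99          # P(white | previous white)
p_b_b = 0.70          # P(black | previous black)
p_b_w = 1 - p_w_w     # P(black | previous white)
p_w_b = 1 - p_b_b     # P(white | previous black)

# Stationary distribution of the two-state chain: pi_w * P(b|w) = pi_b * P(w|b).
pi_w = p_w_b / (p_w_b + p_b_w)
pi_b = 1 - pi_w

h_iid = h2(pi_w)                                  # entropy ignoring pixel dependence
h_markov = pi_w * h2(p_w_w) + pi_b * h2(p_b_b)    # conditional (Markov) entropy
```

With these assumed values the Markov entropy comes out well below the iid entropy, which is exactly the redundancy that run-length style coding exploits.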

• So, rather than code the color of each pixel separately, we can simply code the length of the runs of each color.

• For example, if we had 190 white pixels followed by 30 black pixels, followed by another 210 white pixels, instead of coding the 430 pixels individually, we would code the sequence 190, 30, 210, along with an indication of the color of the first string of pixels.

• Coding the lengths of runs instead of coding individual values is called run-length coding.


• The one-dimensional coding scheme is a run-length coding scheme in which each line is represented as a series of alternating white runs and black runs. The first run is always a white run; if the first pixel is black, then we assume that we have a white run of length zero.

• Runs of different lengths occur with different probabilities; therefore, they are coded using a variable-length code.

• The number of possible run lengths is extremely large, and it is simply not feasible to build a codebook that large.

• Therefore, instead of generating a Huffman code for each run length rl, the run length is expressed in the form

rl = 64 × m + t for t = 0, 1, …, 63 and m = 1, 2, …, 27

• When we have to represent a run length rl, instead of finding a code for rl, we use the corresponding codes for m and t.
• The codes for t are called the terminating codes, and the codes for m are called the makeup codes.
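The scheme above can be sketched in Python: runs are extracted with the white-run-first convention, and each run length is split into makeup and terminating parts via rl = 64·m + t. The line of pixels is a made-up example:

```python
def run_lengths(line):
    """Encode a line of 'w'/'b' pixels as alternating run lengths.

    The first run is always white; if the line starts with a black
    pixel, a white run of length zero is emitted first.
    """
    runs, color, count = [], 'w', 0
    for pixel in line:
        if pixel == color:
            count += 1
        else:
            runs.append(count)       # close the current run
            color, count = pixel, 1  # start a run of the other color
    runs.append(count)
    return runs

def makeup_and_terminating(rl):
    """Split a run length as rl = 64*m + t with 0 <= t <= 63.

    t selects the terminating code; m (when nonzero) the makeup code.
    """
    return divmod(rl, 64)

# Illustrative line: 3 black pixels, 190 white, 30 black.
line = 'b' * 3 + 'w' * 190 + 'b' * 30
runs = run_lengths(line)              # [0, 3, 190, 30] -- leading zero-length white run
m, t = makeup_and_terminating(190)    # (2, 62): 190 = 64*2 + 62
```

Because the colors strictly alternate and the first run is white by convention, the decoder can recover every pixel from the run lengths alone.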
