
1. What is meant by Digital Image Processing? Explain how digital images can be represented.



An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial
(plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity
or gray level of the image at that point. When x, y, and the amplitude values of f are all finite,
discrete quantities, we call the image a digital image. The field of digital image processing refers
to processing digital images by means of a digital computer. Note that a digital image is
composed of a finite number of elements, each of which has a particular location and value.
These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the
term most widely used to denote the elements of a digital image.
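To make this concrete, here is a minimal sketch (my own illustration, not from the original notes, assuming NumPy is available) of how sampling a continuous intensity function at a finite grid of coordinates, and quantizing the amplitudes, yields a digital image whose elements are pixels:

```python
import numpy as np

# A continuous intensity function f(x, y) -- a smooth radial
# falloff, chosen purely for illustration.
def f(x, y):
    return np.exp(-(x**2 + y**2) / 2.0)

# Sample f on a finite 256 x 256 grid and quantize the amplitude
# to 256 discrete gray levels: the result is a digital image.
xs, ys = np.meshgrid(np.linspace(-3, 3, 256), np.linspace(-3, 3, 256))
image = np.uint8(255 * f(xs, ys))

print(image.shape)      # (256, 256): a finite number of picture elements
print(image[128, 128])  # the intensity (gray level) of one pixel
```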

Vision is the most advanced of our senses, so it is not surprising that images play
the single most important role in human perception. However, unlike humans, who are limited to
the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire
EM spectrum, ranging from gamma to radio waves. They can operate on images generated by
sources that humans are not accustomed to associating with images. These include ultrasound,
electron microscopy, and computer-generated images. Thus, digital image processing
encompasses a wide and varied field of applications. There is no general agreement among
authors regarding where image processing stops and other related areas, such as image analysis
and computer vision, start. Sometimes a distinction is made by defining image processing as a
discipline in which both the input and output of a process are images. We believe this to be a
limiting and somewhat artificial boundary. For example, under this definition, even the trivial
task of computing the average intensity of an image (which yields a single number) would not be
considered an image processing operation. On the other hand, there are fields such as computer
vision whose ultimate goal is to use computers to emulate human vision, including learning and
being able to make inferences and take actions based on visual inputs. This area itself is a branch
of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is
still in its infancy, and progress has been much slower than originally anticipated. The area of
image analysis (also called image understanding) is in
between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing
at one end to computer vision at the other. However, one useful paradigm is to consider three
types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level
processes involve primitive operations such as image preprocessing to reduce noise,
contrast enhancement, and image sharpening. A low-level process is characterized by the fact
that both its inputs and outputs are images. Mid-level processing on images involves tasks such
as segmentation (partitioning an image into regions or objects), description of those objects to
reduce them to a form suitable for computer processing, and classification (recognition) of
individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the
identity of individual objects). Finally, higher-level processing involves “making sense” of an
ensemble of recognized objects, as in image analysis, and, at the far end of the continuum,
performing the cognitive functions normally associated with vision. In this view, digital image
processing encompasses processes whose inputs and outputs are images, as well as processes that
extract attributes from images, up to and including the recognition of
individual objects. As a simple illustration to clarify these concepts, consider the area of
automated analysis of text. The processes of acquiring an image of the area containing the text,
preprocessing that image, extracting (segmenting) the individual characters, describing the
characters in a form suitable for computer processing, and recognizing those individual
characters are in the scope of what we call digital image processing.
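The low-/mid-level distinction can be illustrated with a short sketch (again my own illustration, assuming NumPy and SciPy are available; the synthetic image and the 0.5 threshold are arbitrary choices, not values from the notes). Gaussian smoothing is a low-level process (image in, image out), while thresholding and labeling are mid-level (image in, object attributes out):

```python
import numpy as np
from scipy import ndimage

# Synthetic noisy image containing two bright rectangular "objects".
rng = np.random.default_rng(0)
image = rng.normal(0.1, 0.05, size=(100, 100))
image[20:40, 20:40] += 0.8   # object 1
image[60:80, 55:85] += 0.8   # object 2

# Low-level process: both input and output are images
# (noise reduction by Gaussian smoothing).
smoothed = ndimage.gaussian_filter(image, sigma=2)

# Mid-level process: input is an image, output is attributes
# (segmentation into regions, then a descriptor per object).
mask = smoothed > 0.5                      # simple global threshold
labels, num_objects = ndimage.label(mask)  # partition into objects
centroids = ndimage.center_of_mass(mask, labels, range(1, num_objects + 1))

print(num_objects)   # number of segmented objects (2 here)
print(centroids)     # attributes extracted from the image
```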

Representing Digital Images:

We will use two principal ways to represent digital images. Assume that an image f(x, y) is
sampled so that the resulting digital image has M rows and N columns. The values of the
coordinates (x, y) now become discrete quantities. For notational clarity and convenience, we
shall use integer values for these discrete coordinates. Thus, the values of the coordinates at the
origin are (x, y) = (0, 0). The next coordinate values along the first row of the image are
represented as (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to
signify the second sample along the first row. It does not mean that these are the actual values of
physical coordinates when the image was sampled. Figure 1 shows the coordinate convention
used.

The notation introduced in the preceding paragraph allows us to write the complete M × N digital image in the following compact matrix form:

$$
f(x, y) =
\begin{bmatrix}
f(0,0) & f(0,1) & \cdots & f(0,N-1) \\
f(1,0) & f(1,1) & \cdots & f(1,N-1) \\
\vdots & \vdots & & \vdots \\
f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1)
\end{bmatrix}
$$
The right side of this equation is by definition a digital image. Each element of this matrix array
is called an image element, picture element, pixel, or pel.
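As a small sketch of this matrix view (my own illustration, assuming NumPy), an M × N digital image is simply an M × N array, with the origin (0, 0) at the top-left sample, x indexing rows, and y indexing columns:

```python
import numpy as np

# An M x N digital image is an M x N matrix of gray levels.
M, N = 4, 5
f = np.arange(M * N, dtype=np.uint8).reshape(M, N)

# Coordinate convention: the origin (x, y) = (0, 0) is the top-left
# sample; x indexes rows and y indexes columns, so (0, 1) is the
# second sample along the first row.
print(f[0, 0])   # intensity at the origin
print(f[0, 1])   # second sample along the first row
print(f.shape)   # (M, N) = (4, 5)
```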

Digital Image Processing Notes study material - Table of contents


Hi everyone, I am sharing my lecture notes on Digital Image Processing. Bookmark this page and check back frequently; more questions will be added.

Table of Contents:











11. Define Fourier Transform and its inverse.

12. Define discrete Fourier transform and its inverse.

13. State and prove the separability property of the 2D-DFT.

14. State and prove the translation property of the 2D-DFT.

15. State the distributivity and scaling properties of the 2D-DFT.

16. Explain the basic principle of the Hotelling transform.

17. Write about the Slant transform.

18. What are the properties of the Slant transform?

19. Define the discrete cosine transform.

20. Explain the Haar transform.

21. What are the properties of the Haar transform?

22. Write short notes on the Hadamard transform.

23. Write about the Walsh transform.

24. What is meant by image enhancement by point processing? Define the spatial domain. Discuss some of the techniques.

25. Define the histogram of a digital image. Explain how the histogram is useful in image enhancement.

26. Write about histogram equalization.

27. Write about histogram specification / matching.

28. Write about local enhancement.

29. What is meant by image subtraction? Discuss various areas of application of image subtraction.

30. What is the image averaging process? Explain.

31. Discuss the mechanics of filtering in the spatial domain. Mention the points to be considered in implementing neighbourhood operations for spatial filtering.

32. What are smoothing spatial filters?

33. What is meant by the gradient and the Laplacian? Discuss their role in image enhancement.

34. Distinguish between spatial domain and frequency domain enhancement techniques. 

35. Explain about Ideal Low Pass Filter (ILPF) in frequency domain.