
M-ADJACENT PIXEL REPRESENTATION IN RETINAL IMAGES
T. Ravi, B.M.S. Rani, SK. Ibrahim
1. Associate Professor, Department of ECE, K L University, Guntur, A.P., India; 2. Assistant Professor, Department of ECE, Vignan Nirula, Guntur, A.P., India; 3. B.Tech Student, Department of ECE, K L University, Guntur, A.P., India.
ABSTRACT:
This paper presents a morphological process to identify the blood vessels in retinal images. The output of most sensors is a continuous voltage waveform whose amplitude and spatial behaviour (x, y) are continuous. To convert the continuous image into digital form, we have to sample the function in both coordinates and in amplitude. Digitizing the coordinate values is called sampling, and digitizing the amplitude values is called quantization.
Key words: pixel (picture element); array (set of rows and columns); sampling (digitization of coordinates); quantization (digitization of amplitude); region (set of connected elements).
INTRODUCTION: The blood vessels of a retinal image have a special, unique pattern from eye to eye. We use this special trait of the blood vessels in identifying retinal disease. The image obtained from the source is continuous with respect to both coordinates and amplitude, so before processing an image we have to digitize it in coordinates and in amplitude.
ACQUISITION OF ORIGINAL IMAGE
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a "digital image". The field of digital image processing refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels; pixel is the term most widely used to denote the elements of a digital image.
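As a minimal illustration (not from the paper), a digital image can be held as a finite two-dimensional array in Python/NumPy, where each entry is one pixel; the values below are hypothetical:

import numpy as np

# A digital image is a finite 2-D array of discrete intensity values.
# Here, a hypothetical 4x4 8-bit grayscale image; each entry is one pixel.
f = np.array([[ 12,  50,  90, 130],
              [ 40,  80, 120, 160],
              [ 70, 110, 150, 200],
              [100, 140, 190, 255]], dtype=np.uint8)

x, y = 2, 3                      # spatial coordinates (row, column)
print(f.shape)                   # (4, 4): a finite number of picture elements
print(f[x, y])                   # 200: the gray level of the pixel at (x, y)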
Image acquisition begins with a sensor. The most familiar sensor is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to the incident light.
Fig-1: Original image
The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor favours light in the green band of the colour spectrum: the sensor output will be stronger for green light than for the other components of the visible spectrum.
2-D image generation using a single sensor: In order to generate a 2-D image using a single sensor, there must be relative displacement in both the x- and y-directions between the sensor and the area to be imaged. One arrangement used in high-precision scanning mounts a film negative on a drum whose mechanical rotation provides displacement in one direction, while the single sensor is mounted on a lead screw that provides motion in the perpendicular direction. This method is inexpensive, because the mechanical motion can be controlled with high precision, and it is used to obtain high-resolution images; its drawback is that it is a slow process.
Image acquisition using sensor strips: In this method the single sensors are arranged in the form of a strip. The strip provides imaging elements in one direction, and motion perpendicular to the strip provides imaging in the other direction. This type of arrangement is used in airborne imaging applications, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged. One-dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are mounted perpendicular to the direction of flight. The imaging strip gives one line of the image at a time, and the motion of the strip completes the other dimension of the 2-D image; lenses or other focusing schemes are used to project the area to be scanned onto the sensors.
Image acquisition using sensor arrays: Here numerous electromagnetic and some ultrasonic sensing devices are arranged in an array format. This is also the predominant arrangement in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000x4000 elements or more. CCD sensors are widely used in digital cameras and other light-sensing instruments. The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor.
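The statement that each sensor responds to the integral of the projected light energy can be sketched numerically. The following Python fragment is illustrative only; the irradiance waveform, exposure time and responsivity constant are all assumptions:

import numpy as np

# Hypothetical irradiance arriving at one CCD element over an exposure
# of T seconds, sampled at dt intervals (all values are illustrative).
T, dt = 0.02, 1e-4                                    # 20 ms exposure, 0.1 ms steps
t = np.arange(0.0, T, dt)
irradiance = 5.0 + 0.5 * np.sin(2 * np.pi * 50 * t)   # W/m^2, time-varying

# Sensor response ~ integral of the light energy over the exposure window.
energy = np.trapz(irradiance, dx=dt)                  # J/m^2 collected during exposure
k = 1000.0                                            # hypothetical responsivity constant
response = k * energy
print(round(response, 3))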
A simple image formation model:
Let f(x, y) be the two-dimensional function of an image. When an image is generated by a physical process, its values are proportional to the energy radiated by a physical source (e.g., electromagnetic waves), so f(x, y) must be nonzero and finite:
0 < f(x, y) < ∞.
Here Lmin is positive and Lmax is finite, with Lmin = imin·rmin and Lmax = imax·rmax.
The gray scale can be represented as the interval [Lmin, Lmax], or equivalently [0, L-1], where l = 0 is considered black and l = L-1 is considered white on the gray scale.
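As a worked sketch (assuming an 8-bit gray scale, L = 256), the physical range [Lmin, Lmax] can be mapped linearly onto [0, L-1] so that Lmin becomes black and Lmax becomes white; the input values below are hypothetical:

import numpy as np

def to_gray_scale(f, L=256):
    """Map values from [Lmin, Lmax] onto the discrete interval [0, L-1],
    so that Lmin becomes black (0) and Lmax becomes white (L-1)."""
    L_min, L_max = f.min(), f.max()
    g = (L - 1) * (f - L_min) / (L_max - L_min)
    return g.astype(np.uint8)

f = np.array([[0.2, 0.5], [0.8, 1.1]])    # hypothetical radiated energies
print(to_gray_scale(f))                   # 0.2 -> 0 (black), 1.1 -> 255 (white)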
IMAGE SAMPLING AND QUANTIZATION:
The output of most sensors is a continuous voltage waveform whose amplitude and spatial behaviour (x, y) are continuous. To convert the continuous image into digital form, we have to sample the function in both coordinates and in amplitude. Digitizing the coordinate values is called sampling, and digitizing the amplitude values is called quantization.
Sampling: The process of sampling an image is the process of applying a two-dimensional grid to a spatially continuous image to divide it into a two-dimensional array.
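Both operations can be sketched in Python/NumPy. The dense grid below merely stands in for a spatially continuous image, and the grid step and number of gray levels are assumptions:

import numpy as np

# A "continuous" scene, stood in for by a dense grid (an assumption:
# real sampling acts on an analog signal, not on another array).
xx, yy = np.meshgrid(np.linspace(0, 1, 1000), np.linspace(0, 1, 1000))
scene = 0.5 + 0.5 * np.sin(8 * np.pi * xx) * np.cos(8 * np.pi * yy)

# Sampling: digitize the (x, y) coordinates by taking every k-th point,
# i.e. apply a 2-D grid that yields a finite array.
k = 10
sampled = scene[::k, ::k]                 # 100 x 100 samples

# Quantization: digitize the amplitudes into L discrete gray levels.
L = 8
quantized = np.floor(sampled * L).clip(0, L - 1).astype(np.uint8)

print(sampled.shape, quantized.min(), quantized.max())   # (100, 100) 0 7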
ENHANCEMENT OF GREEN IMAGE FROM
COLOURED IMAGE:
The green image is brighter than the red and blue images: the blue image is blurred, and the red image is the high-noise image, so the green component is the one extracted for enhancement (a minimal extraction sketch is given below).
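A minimal sketch of extracting the green component from an RGB retinal image (the file name is hypothetical; only the channel split is shown, not the paper's full morphological process):

import numpy as np
from PIL import Image

# Load an RGB retinal image (file name is hypothetical).
rgb = np.asarray(Image.open("retina.png").convert("RGB"))

red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# The green channel gives the best vessel contrast; red tends to be
# noisy and blue tends to be blurred, so further processing (e.g.,
# morphological vessel detection) usually starts from `green`.
Image.fromarray(green).save("retina_green.png")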
Image formed via reflection: The function f(x, y) may be characterized by two components:
1. the amount of source illumination incident on the scene being viewed, called the illumination component and denoted by i(x, y); and
2. the amount of illumination reflected by the objects in the scene, called the reflectance component and denoted by r(x, y).
The two functions combine as a product to give f(x, y):
f(x, y) = i(x, y) r(x, y) ---(1)
0 < i(x, y) < ∞ ---(2)
0 < r(x, y) < 1 ---(3)
Eq. (3) indicates that reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
The nature of i(x, y) is determined by the illumination source, and r(x, y) is determined by the characteristics of the imaged objects.
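A small sketch of this formation model, forming f(x, y) = i(x, y) r(x, y) from a hypothetical illumination field and reflectance pattern:

import numpy as np

# Hypothetical illumination: brighter at the top of the scene, 0 < i < inf.
i = np.linspace(1000.0, 200.0, 100)[:, None] * np.ones((1, 100))

# Hypothetical reflectance: a dark disc on a light background, 0 < r < 1.
yy, xx = np.mgrid[0:100, 0:100]
r = np.where((xx - 50) ** 2 + (yy - 50) ** 2 < 400, 0.05, 0.8)

f = i * r        # the image is the product of illumination and reflectance
print(f.min() > 0, f.max())   # f is nonzero and finite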
Image formed via transmission: When the image is formed by light passing through the object rather than reflecting off it, the reflectance r(x, y) is replaced by a transmissivity t(x, y), likewise bounded between 0 and 1, so that f(x, y) = i(x, y) t(x, y).
Intensity of light: In image processing the intensity of light can be represented by a gray-level value denoted by l. The intensity of a monochrome image at any coordinates (x0, y0) is
l = f(x0, y0),
where Lmin ≤ l ≤ Lmax.
Position of the pixel: [749.354838709677 498.370967741935 6.29032258064514 4.25806451612902]
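Given such a pixel location, the intensity values at that point can be read off directly; a sketch with a hypothetical file name and coordinates rounded to integers:

import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("retina.png").convert("RGB"))  # hypothetical file

# Integer pixel location (column x, row y); the values are illustrative,
# rounded from a position read-out such as the one above.
x, y = 749, 498
R, G, B = rgb[y, x]          # NumPy indexes as [row, column]
print(R, G, B)               # intensity levels of that particular pixel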
CONCLUSION:
This paper concludes that by knowing the pixel location together with m-adjacency, it is easy to detect the edges of the objects in an image. The values of R, G and B give the intensity levels, or amplitude values, of that particular pixel.
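For reference, a minimal sketch of the standard m-adjacency test between two pixels p and q whose values lie in a set V (q must be a 4-neighbour of p, or a diagonal neighbour of p such that p and q share no 4-neighbour with a value in V); the test image is illustrative:

import numpy as np

def n4(p):
    """4-neighbours of pixel p = (row, col)."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def nd(p):
    """Diagonal neighbours of pixel p = (row, col)."""
    r, c = p
    return {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}

def m_adjacent(img, p, q, V={1}):
    """True if p and q (with values in V) are m-adjacent: q is a
    4-neighbour of p, or q is a diagonal neighbour of p and the set
    N4(p) & N4(q) contains no pixel whose value is in V."""
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        common = n4(p) & n4(q)
        return all(img[s] not in V for s in common
                   if 0 <= s[0] < img.shape[0] and 0 <= s[1] < img.shape[1])
    return False

img = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, 1]])
print(m_adjacent(img, (0, 1), (1, 1)))   # True: they are 4-neighbours
print(m_adjacent(img, (1, 1), (0, 2)))   # False: shared 4-neighbour (0,1) is in V
print(m_adjacent(img, (1, 1), (2, 2)))   # True: no shared 4-neighbour is in V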