Full Paper

ISSN 2394-3777 (Print)
ISSN 2394-3785 (Online)
Available online at www.ijartet.com
International Journal of Advanced Research Trends in Engineering and Technology (IJARTET)
Vol. II, Special Issue XXIII, March 2015 in association with
FRANCIS XAVIER ENGINEERING COLLEGE, TIRUNELVELI
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
INTERNATIONAL CONFERENCE ON RECENT ADVANCES IN COMMUNICATION SYSTEMS AND
TECHNOLOGIES
(ICRACST’15)
25TH MARCH 2015
SEGMENTATION OF OPTIC DISC AND OPTIC CUP USING SPATIALLY WEIGHTED FUZZY
C MEAN CLUSTERING AND SUPERPIXEL ALGORITHM
K. Gowri
ME - Applied Electronics
Shri Andal Alagar College of Engineering
Mamandur
Dr. T. R. Ganesh Babu
Professor/ECE
Shri Andal Alagar College of Engineering
Mamandur
Abstract- Glaucoma is one of the leading causes of blindness if it is not detected and treated properly. When the intraocular pressure is elevated above the normal condition, the subject is affected by glaucoma; in this condition the retinal nerve fiber layer and the optic disc are affected, which leads to progressive loss of vision if not diagnosed and treated. Detection of glaucoma is a difficult task, and current tests using intraocular pressure (IOP) measurement are not sensitive enough for population-based glaucoma screening. In this paper, glaucoma screening is addressed by optic disc and optic cup segmentation. The Spatially Weighted Fuzzy C-Means (SWFCM) clustering method is used to segment the optic disc, and a superpixel algorithm is used to segment the optic cup. The segmented optic disc and cup are then used to compute the cup-to-disc ratio (CDR) for glaucoma screening. The optic nerve head is also called the optic disc; optic disc and cup segmentation of the fundus image is considered for the detection of glaucoma.
Keywords: Fundus image, glaucoma, cup-to-disc ratio, superpixel.
I. INTRODUCTION
Glaucoma is one of the common causes of blindness. It causes progressive degeneration of optic nerve fibers and leads to structural changes of the optic nerve and a simultaneous functional failure of the visual field. Since glaucoma is asymptomatic in the early stages and the associated vision loss cannot be restored, its early detection and subsequent medical treatment are essential to prevent further visual damage.
Figure 1: Glaucoma illustration. (a) Normal fluid flow; (b) Blocked fluid flow.
A. Glaucoma
Glaucoma is a general term for a family of eye diseases which, in most cases, lead to increased pressure within the eye and, as a result, damage the optic nerve. It affects people of all ages, and initially peripheral vision is lost. If proper treatment for glaucoma is not taken, the vision loss continues, leading to total blindness. A watery material called the aqueous humor is present in the eye. The aqueous humor is produced by the ciliary body and is drained through the canal of Schlemm. If the aqueous humor does not drain out correctly, pressure builds up in the eye. Figure 1(a) shows the normal fluid flow, which corresponds to normal pressure in the eye, and Figure 1(b) shows blocked fluid flow, which corresponds to elevated pressure. This high pressure damages the optic nerve, leading to glaucoma. Visual loss can be prevented by timely diagnosis and referral for management of the disease; hence, the shortage of trained personnel leads to the need for automatic retinal image analysis systems. Glaucoma is the most common cause of irreversible blindness in the world. The World Health Organization estimated the number of people who became blind from glaucoma at 4.4 million in 2002, and globally 60.5 million people were affected by glaucoma in 2010.
B. Diagnosis
Screening for glaucoma is usually performed as part of a standard eye examination by ophthalmologists. The measurements used to assess the presence of glaucoma are the eye pressure, the size and shape of the eye, the anterior chamber angle, the cup-to-disc ratio, the rim appearance and vascular changes.
The fundus image reveals morphological changes in the optic disc and optic cup; hence, fundus imaging techniques are indispensable in this work on glaucoma detection and help to strengthen the glaucoma examination.
C. System Overview
The input color fundus image is captured by a fundus camera. The color fundus image shows the inside back portion of the eye and provides important information about the retina. Blood vessels and the optic disc are captured in a fundus image. The optic disc (OD) is the visible portion of the optic nerve and is the entry point for the major blood vessels that supply the retina; the optic nerve head is also called the optic disc. The OD is placed 3 to 4 mm to the nasal side of the fovea. Normally a small depression is seen at the front of the optic disc, known as the optic cup, and its diameter is smaller than the diameter of the optic disc. Figure 2 shows the input fundus image.
Figure 2: Input fundus image showing the optic disc and optic cup.

D. REGION OF INTEREST (ROI) IMAGE
The input fundus image is taken by the fundus camera in RGB mode. The green (G) plane is considered for the extraction of the optic disc, as it provides better contrast than the other two planes. The brightest point in the G plane is identified; this brightest pixel always falls in the cup region. The approximate region of the optic disc is selected around this identified brightest point. After analyzing 100 fundus images of size 1504 x 1000 pixels, a square of size 360 x 360 pixels with the brightest pixel as the center point was chosen, and this region is found to cover the entire optic disc along with a small portion of the surrounding image. Figure 3(a) shows the green channel and Figure 3(b) shows the ROI image.
Fig 3 a) Green channel. Fig 3 b) ROI image.
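As a rough illustration of this ROI step, the following Python sketch (an assumption using OpenCV and NumPy, not the implementation used in this work; the file name is hypothetical) picks the brightest green-channel pixel and crops a 360 x 360 window around it:

```python
import cv2
import numpy as np

def extract_roi(fundus_path, roi_size=360):
    """Crop a square ROI centered on the brightest pixel of the green plane."""
    bgr = cv2.imread(fundus_path)                   # e.g. a 1504 x 1000 fundus image
    green = bgr[:, :, 1]                            # green plane gives the best disc contrast
    blurred = cv2.GaussianBlur(green, (15, 15), 0)  # assumption: suppress isolated bright noise
    _, _, _, (cx, cy) = cv2.minMaxLoc(blurred)      # location of the brightest pixel
    half = roi_size // 2
    h, w = green.shape
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1, y1 = min(x0 + roi_size, w), min(y0 + roi_size, h)
    return bgr[y0:y1, x0:x1]                        # ROI covering the optic disc region

# roi = extract_roi("fundus_01.png")   # hypothetical file name
```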
II. SEGMENTATION OF OPTIC DISC USING
SWFCM ALGORITHM
The optic disc is segmented using the SWFCM method (Keh-Shih Chuang et al. 2006). The Spatially Weighted Fuzzy C-Means clustering algorithm is applied to the ROI image. The main drawback of Fuzzy C-Means (FCM) clustering is that it is very sensitive to noise and does not consider the spatial information of pixels, and in turn the segmentation result is affected. To overcome these drawbacks, SWFCM is used.
One of the important characteristics of an image is that neighboring pixels are highly correlated. This spatial relationship is important in clustering, but it is not utilized in the standard FCM algorithm. To exploit the spatial information, a spatial function is defined as
h_ij = Σ_{k ∈ NB(x_j)} u_ik                    (1)
where NB(x_j) represents a square window centered on pixel x_j in the spatial domain. A larger window size may blur the image, while a smaller window size does not remove high-density noise; therefore, a 5 x 5 window is used in this work. Just like the membership function, the spatial function h_ij represents the probability that pixel x_j belongs to the ith cluster. The spatial function of a pixel is large if the majority of its neighborhood belongs to the same cluster. The spatial function is incorporated into the membership function as follows:
u'_ij = (u_ij^p h_ij^q) / Σ_{k=1..N} (u_kj^p h_kj^q)                    (2)
where p and q are parameters controlling the relative importance of both functions. The spatial function simply strengthens the original membership in a homogeneous region and does not change the clustering result there. However, this formula reduces the weighting
of a noisy cluster for a noisy pixel according to the labels of its neighboring pixels. As a result, misclassified pixels from noisy regions or spurious blobs can easily be corrected. The spatial FCM with parameters p and q is denoted SFCMp,q; if p = 1 and q = 0, SFCM1,0 is identical to the conventional FCM.
The clustering is a two-pass process at each iteration. The first pass is the same as in standard FCM and calculates the membership function in the spectral domain. In the second pass, the membership information of each pixel is mapped to the spatial domain and the spatial function is computed from it. The FCM iteration then proceeds with the new membership incorporating the spatial function. The iteration stops when the maximum difference between cluster centers at two successive iterations is less than 0.00001. After convergence, defuzzification is applied to assign each pixel to the specific cluster for which its membership is maximal.
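As a concrete reading of equations (1) and (2), the sketch below (a NumPy/SciPy assumption, not the authors' code) sums the memberships of each cluster over a square window to obtain the spatial function and then forms the spatially weighted membership:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_weighted_membership(u, p=1, q=1, window=5):
    """u: membership maps of shape (n_clusters, H, W) from a standard FCM pass."""
    # eq (1): h_ij = sum of the memberships of cluster i over a window centered on pixel j
    h = np.stack([uniform_filter(u[i], size=window) * window ** 2
                  for i in range(u.shape[0])])
    # eq (2): u'_ij = u_ij^p * h_ij^q / sum_k (u_kj^p * h_kj^q)
    num = (u ** p) * (h ** q)
    return num / (num.sum(axis=0, keepdims=True) + 1e-12)
```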
Algorithm
Step 1: Generate random numbers in the range 0 to 1 as the initial memberships u_ij. Taking the number of clusters as N, calculate the cluster centers V_i using equation (3):
V_i = Σ_j (u_ij^m x_j) / Σ_j (u_ij^m)                    (3)
Step 2: Compute the membership function using equation (4):
u_ij = 1 / Σ_{k=1..N} ( ||x_j - V_i|| / ||x_j - V_k|| )^(2/(m-1))                    (4)
Step 3: Map the memberships to the pixel positions and calculate the modified membership u'_ij using equation (2). Compute the objective function J using equation (5):
J = Σ_i Σ_j (u'_ij)^m ||x_j - V_i||^2                    (5)
Step 4: Update the centers V_i with the modified memberships using equation (3).
Step 5: Repeat Step 2 to Step 4 until the following termination criterion is satisfied:
max_i ||V_i(new) - V_i(old)|| < ε                    (6)
where ε = 0.00001. The segmented image has three clusters; the first and second clusters represent the background, and the third cluster represents the optic disc. The clustered image is shown in Figure 4.

2.1. Detection of optic disc
The clustered image has three clusters, namely the outer region, the background and the optic disc. To obtain the optic disc, the background cluster is first eliminated by searching the corner cluster index. From the remaining two clusters, the optic disc is identified by selecting the cluster index at the location of the brightest point in the ROI image, because the optic disc belongs to the higher-intensity regions. Figure 4 shows the SWFCM optic disc.
Fig 4: SWFCM optic disc (clusters: outer, background, optic disc)

2.2. Optic Disc Boundary Smoothening
After extracting the optic disc, elliptical fitting is applied to smooth the optic disc boundary (Fitzgibbon, Pilu & Fisher 1999). The labeling (connected components) technique is applied to form the rectangle containing the whole disc region, as shown in Figure 5(a). The centroid of the rectangle is taken as the center of an ellipse E, drawn using equations (7) and (8) and inscribed in the rectangle as shown in Figure 5(b). The area of the ellipse is calculated using equation (9) (Ganeshbabu & Shenbaga Devi 2011).
X = a*(cos α cos β) - b*(sin α sin β)                    (7)
Y = a*(cos α sin β) + b*(sin α cos β)                    (8)
where 'a' is the major axis length (half of the rectangle width), 'b' is the minor axis length (half of the rectangle height), β is the orientation of the ellipse, and α varies from 0 to 2π. The area of the ellipse is calculated using equation (9):
Area = π a b                    (9)
Thus the area of the optic disc is calculated.
Figure 5(b) shows the contour of this ellipse imposed on the original ROI image.
Fig 5 a) Elliptical optic disc
Fig 5 b) Imposed optic disc
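The boundary-smoothing step can be sketched as follows (a scikit-image based assumption, not the authors' implementation; the ellipse is drawn axis-aligned, i.e. with β = 0): the largest connected component of the SWFCM disc mask gives the bounding rectangle, an ellipse inscribed in that rectangle gives the smoothed boundary, and its area follows equation (9).

```python
import numpy as np
from skimage.measure import label, regionprops

def smooth_disc_with_ellipse(disc_mask):
    """disc_mask: binary optic-disc mask obtained from SWFCM clustering."""
    regions = regionprops(label(disc_mask))
    disc = max(regions, key=lambda r: r.area)            # keep the largest blob as the disc
    minr, minc, maxr, maxc = disc.bbox                    # rectangle containing the disc
    a = (maxc - minc) / 2.0                               # major axis: half the rectangle width
    b = (maxr - minr) / 2.0                               # minor axis: half the rectangle height
    cy, cx = (minr + maxr) / 2.0, (minc + maxc) / 2.0     # centroid of the rectangle
    alpha = np.linspace(0, 2 * np.pi, 360)
    x = cx + a * np.cos(alpha)                            # eq (7)-(8) with rotation beta = 0
    y = cy + b * np.sin(alpha)
    return (x, y), np.pi * a * b                          # boundary points and eq (9) area
```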
III. SEGMENTATION OF OPTIC CUP
The optic cup is the inner portion of the optic disc. The optic cup is segmented from the ROI image. In this paper, the simple linear iterative clustering (SLIC) algorithm is used to generate superpixels by aggregating nearby pixels in the image.
3.1. SLIC (Simple Linear Iterative Clustering)
The SLIC algorithm is fast, memory efficient and has good boundary adherence; it is also simple to use. Simple linear iterative clustering is an adaptation of k-means for superpixel generation, with two important distinctions:
1. The number of distance calculations in the optimization is dramatically reduced by limiting the search space to a region proportional to the superpixel size. This reduces the complexity to be linear in the number of pixels N and independent of the number of superpixels k.
2. A weighted distance measure combines color and spatial proximity while simultaneously providing control over the size and compactness of the superpixels.
Algorithm:
SLIC is simple to use and understand. By default, the only parameter of the algorithm is k, the desired number of approximately equally sized superpixels. For color images in the CIELAB color space, the clustering procedure begins with an initialization step where k initial cluster centers Ci = [li ai bi xi yi]T are sampled on a regular grid spaced S pixels apart. To produce roughly equally sized superpixels, the grid interval is S = √(N/k). The centers are then moved to the lowest-gradient position in a 3 x 3 neighborhood.
Clustering is then applied. For each cluster center, SLIC iteratively searches for its best matching pixels in the 2S x 2S neighborhood around the center, based on color and spatial proximity, and then computes the new cluster center from the pixels found. The iteration continues until the distance between the new centers and the previous ones is small enough. Finally, a post-processing step is applied to enforce connectivity.
Each pixel is associated with the nearest cluster center whose search region overlaps its location. This is the key to speeding up the algorithm, because limiting the size of the search region significantly reduces the number of distance calculations and results in a significant speed advantage over conventional k-means clustering, where each pixel must be compared with all cluster centers. This is only possible through the introduction of a distance measure D, which determines the nearest cluster center for each pixel. Since the expected spatial extent of a superpixel is a region of approximate size S x S, the search for similar pixels is done in a region of 2S x 2S around the superpixel center.
Once each pixel has been associated with the nearest cluster center, an update step adjusts the cluster centers to be the mean [l a b x y]T vector of all the pixels belonging to the cluster. The L2 norm is used to compute a residual error E between the new cluster center locations and the previous ones. Finally, a post-processing step enforces connectivity by reassigning disjoint pixels to nearby superpixels. Figure 6 shows the generation of superpixels from a fundus image.
Fig 6: Superpixel Generation
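For reference, superpixels of this kind can be generated with the SLIC implementation in scikit-image; the parameter values below (number of segments k and compactness m) are illustrative assumptions, not the settings used in the paper.

```python
from skimage import io, segmentation, color

roi = io.imread("roi.png")                                # hypothetical ROI image of the disc
# n_segments corresponds to k, compactness to the weighting factor m
labels = segmentation.slic(roi, n_segments=200, compactness=10, start_label=1)
overlay = segmentation.mark_boundaries(roi, labels)       # superpixel borders for inspection
mean_color = color.label2rgb(labels, roi, kind="avg")     # each superpixel shown as its mean color
```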
3.2. Distance Measure
SLIC superpixels correspond to clusters in the labxy color-image plane space. This presents a problem in defining the distance measure D, which may not be immediately obvious. D computes the distance between a pixel i and a cluster center ck. A pixel's color is represented in the CIELAB color space [l a b]T, whose range of possible values is known. The pixel's position [x y]T, on the other hand, may take a range of values that varies according to the size of the image. Simply defining D to be the 5-D Euclidean distance in labxy space would cause inconsistencies in clustering behavior for different superpixel sizes. For large superpixels, spatial distances outweigh color proximity, giving more relative importance to spatial proximity than to color; this produces compact superpixels that do not adhere well to image boundaries. For smaller superpixels, the converse is true.
To combine the two distances dc and ds into a single measure, it is necessary to normalize color proximity and spatial proximity by their respective maximum distances within a cluster, Nc and Ns. The normalized distance D' is calculated using equation (10):
dc = √((lj - li)^2 + (aj - ai)^2 + (bj - bi)^2)
ds = √((xj - xi)^2 + (yj - yi)^2)
D' = √((dc/Nc)^2 + (ds/Ns)^2)                    (10)
The maximum spatial distance expected within a given cluster should correspond to the sampling interval, Ns = S = √(N/K). Determining the maximum color distance Nc is not so straightforward, as color distances can vary significantly from cluster to cluster and from image to image. This problem is avoided by fixing Nc to a constant m. Substituting in equation (10) gives equation (11):
D = √((dc/m)^2 + (ds/S)^2)                    (11)
which simplifies to the distance measure of equation (12) used in practice:
D = √(dc^2 + (ds/S)^2 m^2)                    (12)
where dc = √(dl^2 + da^2 + db^2) is the color distance and ds = √(dx^2 + dy^2) is the spatial distance.
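A minimal sketch of the practical distance of equation (12), assuming a pixel and a cluster center are each given as an (l, a, b, x, y) tuple:

```python
import numpy as np

def slic_distance(pixel, center, S, m):
    """Equation (12): D = sqrt(dc^2 + (ds/S)^2 * m^2)."""
    l1, a1, b1, x1, y1 = pixel
    l2, a2, b2, x2, y2 = center
    dc = np.sqrt((l2 - l1) ** 2 + (a2 - a1) ** 2 + (b2 - b1) ** 2)   # color distance
    ds = np.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)                    # spatial distance
    return np.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2)
```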
When m is small, the resulting superpixels adhere more tightly to image boundaries but have less regular size and shape. m is a weighting factor representing the nominal maximum color distance expected, so that color similarity can be ranked relative to spatial similarity; a suitable value of m is chosen for the L*a*b* space. After superpixel generation, features are extracted: a superpixel consists of a group of pixels with similar colors.
In this paper, center surround statistics (CSS) computed from the superpixels are used as a texture feature. To compute CSS, a dyadic Gaussian pyramid with eight spatial scales is generated, with ratios from 1:1 (level 0) to 1:256. Multiple scales are used because the scale of the blob-like structures varies largely. The dyadic Gaussian pyramid is a hierarchy of low-pass filtered versions of an image channel, obtained by convolution with a linearly separable Gaussian filter followed by decimation by a factor of two. A center-surround operation between center (finer) levels c = 2, 3, 4 and surround (coarser) levels s = c + d, with d = 3, 4, is then applied to obtain six maps, computed at levels 2-5, 2-6, 3-6, 3-7, 4-7 and 4-8 of an image channel.
The Gaussian gradient gives the change in color intensity of the targeted region with respect to the neighboring area. The mean and variance are then determined from the obtained features.
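The center-surround maps can be sketched with a dyadic Gaussian pyramid from scikit-image as below; the level pairs follow the text, while the rest (function name, interpolation of the surround level) is an illustrative assumption. The per-superpixel mean and variance of these maps then give the CSS feature vector.

```python
import numpy as np
from skimage.transform import pyramid_gaussian, resize

def center_surround_maps(channel):
    """channel: a single image channel as a float array; returns six center-surround maps."""
    # dyadic Gaussian pyramid, ratio 1:1 (level 0) down to 1:256 (level 8)
    pyr = list(pyramid_gaussian(channel, max_layer=8, downscale=2))
    maps = []
    for c in (2, 3, 4):                                   # center (finer) levels
        for d in (3, 4):                                  # surround level s = c + d (coarser)
            surround = resize(pyr[c + d], pyr[c].shape, anti_aliasing=True)
            maps.append(np.abs(pyr[c] - surround))        # maps at 2-5, 2-6, 3-6, 3-7, 4-7, 4-8
    return maps
```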
3.3.3. SVM Classifier
In this paper, a Support Vector Machine (SVM) is used as the classifier (Smola et al. 1998). First, an active training set is created as a subset of the available training data (the pool), and each training iteration is performed on the active set. This returns a preliminary classifier. The classifier is then used as a pool evaluator: the training set is enlarged in every round of training with the samples misclassified in the previous iteration, and the process continues until there is no further change in classification accuracy or the maximum number of iterations has been reached. The binary classification results from the SVM are not used directly; instead, the decision values from the SVM output are used. The output for each superpixel is used as the decision value for all of its pixels. A smoothed decision value is then obtained with a mean filter, and a binary decision for all pixels is obtained by thresholding the smoothed decision values. Positive (cup) and negative (non-cup) samples are assigned +1 and -1, and the threshold is the average of the two.
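A minimal scikit-learn sketch of this stage is given below; the feature matrices and the superpixel label map are assumed to come from the previous steps, and the kernel and filter size are illustrative assumptions rather than the settings used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.ndimage import uniform_filter

def cup_decision_map(train_X, train_y, sp_features, sp_labels, smooth=25):
    """train_y: +1 for cup, -1 for non-cup superpixels; sp_labels: superpixel label image.
    Rows of sp_features are assumed to be ordered by sorted superpixel label."""
    clf = SVC(kernel="rbf").fit(train_X, train_y)
    scores = clf.decision_function(sp_features)            # one decision value per superpixel
    decision = np.zeros(sp_labels.shape, dtype=float)
    for sp, score in zip(np.unique(sp_labels), scores):
        decision[sp_labels == sp] = score                  # spread the value to all its pixels
    smoothed = uniform_filter(decision, size=smooth)       # mean-filter smoothing
    return smoothed > 0.0                                  # threshold: average of +1 and -1
```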
Figure 7 (a), (b) and (c) show the SLIC output, the segmented optic cup with actual fitting and the segmented optic cup with elliptical fitting, respectively.
Fig 7 a) SLIC optic cup
Fig 7 b) Actual fitting
Fig 7 c) Elliptical fitting
IV. EXPERIMENTAL RESULTS
4.1. Optic Disc and Optic Cup Segmentation
The developed algorithm is tested on 10 normal fundus images and 10 fundus images obtained from glaucoma patients. Figures 8(a) and 8(b) show the actual segmented contours of the optic cup and optic disc for normal and glaucoma conditions, and Figures 9(a) and 9(b) show the elliptically fitted optic cup and disc for normal and glaucoma conditions, respectively. The CDR values for normal and abnormal images have been calculated by the developed algorithm and are listed in Tables 1 and 2. In Tables 1 and 2, the first column shows the subject number, the second column indicates the CDR calculated by the present algorithm from the actual output, and the third column indicates the CDR calculated from the elliptical output. The area of the resultant image is computed in terms of the number of pixels, and the cup-to-disc ratio is computed as the ratio of the area of the optic cup to the area of the optic disc. A CDR value greater than 0.3 indicates glaucoma (Hossam El-Din MA Khalil et al. 2013).
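The cup-to-disc ratio itself is a simple area ratio over the binary cup and disc masks (assumed inputs from the previous segmentation steps); a minimal sketch:

```python
import numpy as np

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Areas are counted in pixels; CDR = cup area / disc area."""
    return np.count_nonzero(cup_mask) / float(np.count_nonzero(disc_mask))
```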
Fig 8 a) Normal image, CDR = 0.314
Fig 8 b) Abnormal image, CDR = 0.404
Figure 8. CDR computations for the actual segmented contours of the optic cup and optic disc.
Fig 9 a) Normal image, CDR = 0.321
Fig 9 b) Abnormal image, CDR = 0.416
Figure 9. CDR computations with elliptical fitting for the optic cup and optic disc.
TABLE 1. NORMAL

Subject number    CDR computed from actual image    CDR computed from elliptical image
1                 0.3146                            0.3210
2                 0.3141                            0.3221
3                 0.3147                            0.3327
4                 0.1815                            0.2121
5                 0.2736                            0.2964
6                 0.3141                            0.3214
7                 0.3251                            0.3315
8                 0.321                             0.3354
9                 0.2680                            0.2816
10                0.261                             0.2898

TABLE 2. ABNORMAL

Subject number    CDR computed from actual image    CDR computed from elliptical image
1                 0.4049                            0.416
2                 0.4040                            0.4112
3                 0.3792                            0.3921
4                 0.4639                            0.4731
5                 0.4901                            0.5201
6                 0.3741                            0.3810
7                 0.4925                            0.5322
8                 0.4214                            0.4314
9                 0.450                             0.4642
10                0.4210                            0.4321
V. CONCLUSION
An automated method for the detection of glaucoma in retinal fundus images has been investigated. Such a computerized system is very useful for diagnosing glaucoma in mass screening. In this paper, extraction of features such as the CDR from the fundus image has been proposed and used for classification. To enhance the accuracy of glaucoma detection further, analysis of the ISNT ratio in the fundus image can be added as future work to strengthen the glaucoma examination. Higher-resolution fundus images may also be used in future work, as they provide clearer visibility of the optic disc, optic
cup and blood vessels, which increases the possibility of exact segmentation of the optic disc and optic cup and of accurate ISNT feature calculation.
REFERENCES
[1]. Chih-Yin Ho, Tun-Wen Pai, Hao-Teng Chang & Hsin-Yi
Chen, 2011, ‘An automatic fundus image analysis system for
clinical diagnosis of glaucoma’, International Conference on
Complex, Intelligent, and Software Intensive Systems, pp.
559-564.
[2]. Chisako Muramatsu, Toshiaki Nakagawa, Akira Sawada,
Yuji Hatanaka, Tetsuya Yamamoto, Hiroshi Fujita, 2011,
‘Automated determination of cup-to-disc ratio for
classification of glaucomatous and normal eyes on stereo
retinal fundus images’, Journal of Biomedical optics, vol. 16,
no. 9, pp. 1-7.
[3]. Fitzgibbon, A, Pilu, M & Fisher, RB 1999, 'Direct least square fitting of ellipses', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, pp. 476-480.
[4]. Ganeshbabu, TR & Shenbaga Devi, S 2011, 'Automatic detection of glaucoma using fundus image', European Journal of Scientific Research, vol. 59, no. 1, pp. 22-32.
[5]. Hossam El-Din MA Khalil, Mohamed Yasser Sayed Saif, Mohamed Osman Abd El-Khalek & Arsany Maker 2013, 'Variations of cup-to-disc ratio in age group (18-40) years old', Research in Ophthalmology, vol. 2, no. 1, pp. 4-9.
[6]. Jagadish Nayak, Rajendra Acharya U, P. Subbanna Bhat, Nakul Shetty & Teik-Cheng Lim 2009, 'Automated Diagnosis of Glaucoma Using Digital Fundus Images', Journal of Medical Systems, vol. 33, no. 5, pp. 337-346.
[7]. Jun Cheng, Jiang Liu, Yanwu Xu, Fengshou Yin, Damon Wing Kee Wong, Ngan-Meng Tan, Dacheng Tao, Ching-Yu Cheng, Tin Aung & Tien Yin Wong 2013, 'Superpixel Classification Based Optic Disc and Optic Cup Segmentation for Glaucoma Screening', IEEE Transactions on Medical Imaging, vol. 32, no. 6, pp. 1019-1032.
[8]. Keh-Shih Chuang, Hong-Long Tzeng, Sharon Chen, Jay Wu & Tzong-Jer Chen 2006, 'Fuzzy C-Means Clustering with Spatial Information for Image Segmentation', Computerized Medical Imaging and Graphics, vol. 30, pp. 9-15.
[9]. Kevin Noronha, Nayak, J & Bhat, SN 2006, 'Enhancement of retinal fundus image to highlight the features for detection of abnormal eyes', IEEE Region 10 Conference (TENCON), pp. 1-4.
[10]. Liu, J, Wong, DWK, Lim, JH, Jia, X, Yin, F, Li, H,
Xiong, W & Wong, TY 2008, ‘Optic Cup and Disk Extraction
from Retinal Fundus Images for Determination of Cup-to-Disc
Ratio’, 3rd IEEE Conference on Industrial Electronics and
Applications, ICIEA, pp. 1828-18.
[11]. Madhusudan Mishra, Malaya Kumar Nath & Samarendra Dandapat, 2011, 'Glaucoma Detection from Color Fundus Images', International Journal of Computer & Communication Technology (IJCCT), vol. 2, no. 6, pp. 7-10.
[12]. Smola, AJ, Schölkopf, B & Müller, K-R 1998, 'The connection between regularization operators and support vector kernels', Neural Networks, vol. 11, no. 4, pp. 637-649.
[13]. Wong, DWK, Liu, JH & Tan, NM 2009, ‘Intelligent
fusion of cup to disc ratio determination methods for
glaucoma detection in ARGALI’ , IEEE Annual International
Conference on Engineering in Medicine and Biology Society,
pp. 5777-5780.
[14]. Yuji Hatanaka, Atsushi Noudo, Chisako Muramatsu,
Akira Sawada, Takeshi Hara, Tetsuya Yamamoto & Hiroshi
Fujita 2012, ‘Vertical cup-to-disc ratio measurement for
diagnosis of glaucoma on fundus images’, Medical Imaging,
Proc. of SPIE, vol. 7624, pp. 76243C-(1-8).
[15]. Zhuo Zhang, Jiang Liu, Wing Kee Wong, Ngan Meng Tan, Joo Hwee Lim, Shijian Lu & Huiqi Li, 2009, 'Convex Hull Based Neuro-Retinal Optic Cup Ellipse Optimization in Glaucoma Diagnosis', 31st Annual International Conference of the IEEE EMBS, Minneapolis, Minnesota, USA, pp. 1441-1444.