Review of Kohonen-SOM and K-Means data mining Clustering algorithms based on Academic Data Set
Er. Gurpreet Singh
Associate Professor & Head, Department of Computer Sc.& Engg.
SSIET, Jalandhar
Abstract : With the development of information technology and computer science, high-capacity data appear in our lives. In order to help people analyze and dig out useful information, the generation and application of data mining technology are significant. Clustering is the most widely used method of data mining; it can be used for describing and analyzing data. In this paper, the approaches of Kohonen SOM, K-Means, and HAC are discussed, and these three methods are compared to show how they reflect data characteristics and potential rules. This work will present new and improved results from large-scale datasets.
Keywords : Kohonen-SOM, K-Means, HAC, PCA

1. Introduction

A self-organizing map (SOM), or self-organizing feature map (SOFM), is a kind of artificial neural network that [1] is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they use a neighbourhood function to preserve the topological properties of the input space.

SOM is a clustering method. Indeed, it organizes the data in clusters (cells of the map) such that instances in the same cell are similar and instances in different cells are different. From this point of view, SOM gives results comparable to state-of-the-art clustering algorithms such as K-Means.

SOM is also used as a visualization technique. It allows us to visualize the original dataset in a low-dimensional [2] representation space (2D). Indeed, individuals located in adjacent cells are more similar than individuals located in distant cells. From this point of view, it is comparable to visualization techniques such as multidimensional scaling or PCA (Principal Component Analysis).

Through this, it is shown how to implement Kohonen's SOM algorithm with a particular tool. After implementation, the properties of this approach are assessed by comparing its results with those of the PCA algorithm. Then the results are compared to those of K-Means, which is a clustering algorithm. Finally, a two-step clustering process is implemented [3] by combining the SOM algorithm with the HAC process (Hierarchical Agglomerative Clustering); it is a variant of the two-step clustering that combines K-Means and HAC, as sketched at the end of this section.

The Kohonen algorithm is a very powerful tool for data analysis. It was originally designed to model organized connections between some biological neural networks. It was also immediately recognized as a very good algorithm for vector quantization and, at the same time, for pertinent classification, with nice properties for visualization. If the individuals are described by quantitative variables (ratios, frequencies, measurements, amounts, etc.), the straightforward application of the original algorithm builds code vectors and associates to each of them the class of all the individuals that are more similar to this code vector than to the others. But in the case of individuals described by categorical (qualitative) variables with a finite number of modalities (as in a survey), it is necessary to define a specific algorithm. In this paper, we present a new algorithm inspired by the SOM algorithm, which provides a simultaneous classification of the individuals and of their modalities.
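As a rough illustration of the two-step clustering described above, the following minimal sketch uses scikit-learn's KMeans and AgglomerativeClustering on placeholder data; it is an assumption-laden illustration of the idea, not the Tanagra components used later in this paper.

    # Minimal sketch of two-step clustering (K-Means + HAC), assuming
    # scikit-learn; the data matrix is a synthetic placeholder, not the
    # "faculty activities" dataset used in this paper.
    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))        # placeholder data matrix

    # Step 1: coarse vector quantization with K-Means (many small cells).
    km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)

    # Step 2: HAC merges the 50 code vectors into the final 6 clusters.
    hac = AgglomerativeClustering(n_clusters=6, linkage="ward")
    centroid_labels = hac.fit_predict(km.cluster_centers_)

    # Each sample inherits the cluster of its nearest code vector.
    final_labels = centroid_labels[km.labels_]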
2. Related Work
According to Ji Dan and Qiu Jianlin [14], with the development of information technology and computer science, high-capacity data appear in our lives. In order to help people analyze and dig out useful information, the generation and application of data mining technology are significant. Clustering and decision trees are the most widely used methods of data mining: clustering can be used for describing, and decision trees can be applied to analyzing. Combining these two methods effectively can reflect data characteristics and potential rules clearly. Their paper presents a new synthesized data mining algorithm named CA, which improves the original CURE and C4.5 methods. CA introduces principal component analysis (PCA), grid partition, and parallel processing, which can achieve feature reduction and scale reduction for large-scale datasets. The paper applies the CA algorithm to maize seed breeding, and the experimental results show that their approach is better than the original methods.
According to Timothy C. Havens and James C. Bezdek [15], very large (VL) data or "Big Data" are any data that you cannot load into your computer's working memory. This is not an objective definition, but a definition that is easy to understand and practical, because there is a data set too big for any computer you might use; hence, this is VL data for you. Clustering is one of the primary tasks used in the pattern recognition and data mining communities to search VL databases (including VL images) in various applications, and so clustering algorithms that scale well to VL data are important and useful. Their article compares the efficacy of three different implementations of techniques aimed at extending fuzzy c-means (FCM) clustering to VL data. Specifically, they compare methods based on (i) sampling followed by non-iterative extension; (ii) incremental techniques that make one sequential pass through subsets of the data; and (iii) kernelized versions of FCM that provide approximations based on sampling, including three proposed algorithms. They use both loadable and VL data sets to conduct numerical experiments that facilitate comparisons based on time and space complexity, speed, quality of approximations to batch FCM (for loadable data), and assessment of matches between partitions and ground truth. Empirical results show that random sampling plus extension FCM, bit-reduced FCM, and approximate kernel FCM are good choices for approximating FCM for VL data. They conclude by demonstrating the VL algorithms on a data set with 5 billion objects and presenting a set of recommendations regarding the use of the different VL FCM clustering schemes.
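For context, the baseline FCM update that these VL-data variants approximate can be sketched as follows; this is our own NumPy illustration of one standard FCM iteration, not any of the scalable algorithms compared in [15].

    # One standard fuzzy c-means (FCM) iteration in NumPy; baseline
    # algorithm only, not the scalable VL-data variants from [15].
    import numpy as np

    def fcm_step(X, V, m=2.0, eps=1e-9):
        """Update memberships U from centers V, then recompute centers."""
        # Squared distances between samples and centers: shape (n, c).
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + eps
        # Membership: u_ik proportional to d2_ik^(-1/(m-1)); rows sum to 1.
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Centers: fuzzy-weighted means of the samples.
        Um = U ** m
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
        return U, V_new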
3. Methodology
3.1 Kohonen-SOM's approach
Kohonen's SOM is called a topology-preserving map
because there is a topological structure imposed on the
nodes in the network. A topological map is simply a
mapping that preserves neighbourhood relations.
Algorithm for Kohonen's Self Organizing Map

- Assume output nodes are connected in an array (usually 1- or 2-dimensional).
- Assume that the network is fully connected: all nodes in the input layer are connected to all nodes in the output layer.
- Use the competitive learning algorithm as follows:

1. Randomly choose an input vector x.
2. Determine the "winning" output node i, where wi is the weight vector connecting the inputs to output node i:

   |wi - x| <= |wk - x|  for all k

   Note: the above equation is equivalent to wi·x >= wk·x only if the weights are normalized.
3. Given the winning node i, the weight update is

   ωk(new) = ωk(old) + Δωk

   where Δωk represents the change in weight, applied to the winner and its neighbours.
3.2 K-Means's approach
The K-Means algorithm clusters data by trying to separate the samples into K groups of equal variance, minimizing a criterion known as the 'inertia' of the groups. This algorithm requires the number of clusters to be specified. It scales well to a large number of samples and has been used across a large range of application areas in many different fields. It is also equivalent to the expectation-maximization algorithm when the covariance matrices are set to be diagonal, equal, and small. The K-Means algorithm aims to choose centroids C that minimize the within-cluster sum-of-squares objective function over a dataset X with n samples:

J(X, C) = Σ_{j=1}^{n} min_{µi ∈ C} ||xj − µi||²
K-Means is often referred to as Lloyd's algorithm. In basic terms, the algorithm has three steps. The first step chooses the initial centroids, the most basic method being to choose K samples from the dataset X. After initialization, K-Means loops between the other two major steps. The first of these assigns each sample to its nearest centroid. The second creates new centroids by taking the mean value of all the samples assigned to each previous centroid. The difference between the old and the new centroids is computed, and the algorithm repeats these last two steps until this value is less than a threshold.
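These three steps translate almost directly into code; the following is a toy NumPy sketch under the assumption that no cluster empties out during the iterations (the experiments in this paper used Tanagra's K-Means component, not this code).

    # Toy NumPy sketch of Lloyd's algorithm; assumes no cluster becomes
    # empty during the iterations.
    import numpy as np

    def kmeans(X, K=6, tol=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), size=K, replace=False)]   # 1. initial centroids
        while True:
            # 2. assign each sample to its nearest centroid
            d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            # 3. recompute each centroid as the mean of its samples
            C_new = np.array([X[labels == k].mean(axis=0) for k in range(K)])
            if np.linalg.norm(C_new - C) < tol:            # stop on small shift
                return C_new, labels
            C = C_new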
Table 1: Attributes of Data Set

QOS      Quality of Syllabus
QOIP     Quality of Instruction Plan
HOCC     Handling of committees/clubs
QOIP     Quality of Instruction Plans
RM       Record Maintenance
EVA      Evaluation
QP       Question Paper
CACP     CA of capstone project
RPP      Research Paper published
CA       Conference attended/paper published
WA       Workshop attended
BP       Book Computers published
DM       Discipline Maintenance
IWS      Interactivity with students
EMTE     Evaluation of MTE and ETE
AWA      Award
LA       Leadership ability
FDP      Faculty Development program
QUAL     Qualification
Grading  Grading
The easiest way to import an XLS file into Tanagra is to create a new file in Tanagra and then import the XLS file.
4. Result & Discussion

4.1 Data Set

We use the "faculty activities" dataset; it has real values, with approximately 20 descriptors and 3303 instances. Some of them are used as input attributes and the others as output attributes.

Figure 1. Implemented the Kohonen SOM algorithm and generated a map topology with 6 clusters; map quality 0.8189.

Figure 2. Implemented PCA on the Kohonen SOM and evaluated the eigenvalues.
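The eigenvalue screen summarized in Figure 2 can be reproduced on any numeric table with a standard PCA routine; the following scikit-learn sketch runs on placeholder data of the same shape as the dataset, and is not the Tanagra output shown in the figure.

    # Sketch: PCA eigenvalues (explained variance) on standardized data,
    # as in Figure 2; the data matrix here is a random stand-in.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X = np.random.default_rng(0).normal(size=(3303, 20))   # stand-in dataset
    pca = PCA().fit(StandardScaler().fit_transform(X))
    print(pca.explained_variance_)         # eigenvalues per component
    print(pca.explained_variance_ratio_)   # share of total variance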
4.2 Result

After implementation of these algorithms on the Academic Activities data set, the following results were obtained:

Figure 3. Showing 6 different clusters with the same scale.

Figure 4. The Kohonen-SOM component shows 6 different clusters with a different scale.

Figure 5. The K-Means algorithm shows 6 different clusters.

Table 2: Comparative results of both algorithms

Parameters          Kohonen SOM    K-Means
No. of clusters     6              6
Map topology        6              6
Error rate          0.8189         0.8456
Computation time    297 ms         1281 ms
Accessing time      Fast           Slow
5. Conclusion

In this comparative study, it was found that Kohonen SOM gives better performance than K-Means, with a lower error rate (i.e., higher accuracy) and lower computation time on the same data set and parameters.
6. References

[1] Rakesh Agrawal, Ramakrishnan Srikant, "Fast Algorithms for Mining Association Rules in Large Databases," Proceedings of the 20th International Conference on Very Large Data Bases, pp. 487-499, September 12-15, 1994.

[2] A. Baskurt, F. Blum, M. Daoudi, J.L. Dugelay, F. Dupont, A. Dutartre, T. Filali Ansary, F. Fratani, E. Garcia, G. Lavoué, D. Lichau, F. Preteux, J. Ricard, B. Savage, J.P. Vandeborre, T. Zaharia, "SEMANTIC-3D: Compression, Indexing and Watermarking of 3D Data," Réseau National de Recherche en Télécommunications (RNRT), 2002.

[3] T. Zaharia, F. Prêteux, "3D versus 2D/3D Shape Descriptors: A Comparative Study."

[4] T.F. Ansary, J.P. Vandeborre, M. Daoudi, "Retrieval of 3D Models of Mechanical Parts Based on Zernike Moments."

[5] A. Khotanzad, Y.H. Hong, "Invariant Image Recognition by Zernike Moments," IEEE Trans. Pattern Anal. Mach. Intell., 12(5), pp. 489-497, 1990.

[6] R. Agrawal, T. Imielinski, A. Swami, "Mining Association Rules Between Sets of Items in Large Databases," Proceedings of the ACM SIGMOD Conference on Management of Data, Washington DC, USA, 1993.

[7] R. Agrawal, R. Srikant, "Fast Algorithms for Mining Association Rules in Large Databases," Proceedings of the 20th International Conference on Very Large Data Bases (VLDB'94), pp. 478-499, Morgan Kaufmann, September 1994.

[8] U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, "From Data Mining to Knowledge Discovery in Databases," American Association for Artificial Intelligence, ISSN 0738-4602, 1996.

[9] S. Lallich, O. Teytaud, "Evaluation and Validation of the Interestingness of Association Rules" (Évaluation et validation de l'intérêt des règles d'association).

[10] R. Osada, T. Funkhouser, B. Chazelle, D. Dobkin, "Matching 3D Models with Shape Distributions," Proceedings of the International Conference on Shape Modeling & Applications (SMI '01), pp. 154-168, IEEE Computer Society, Washington, DC, USA, 2001.

[11] W.Y. Kim, Y.S. Kim, "A Region-Based Shape Descriptor Using Zernike Moments," Signal Processing: Image Communication, 16:95-100, 2000.

[12] A. Khotanzad, Y.H. Hong, "Invariant Image Recognition by Zernike Moments," IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(5), May 1990.

[13] N. Pasquier, Y. Bastide, R. Taouil, L. Lakhal, in Database Theory - ICDT'99, Springer, 1999.

[14] Ji Dan, Qiu Jianlin, Gu Xiang, Chen Li, He Peng, "A Synthesized Data Mining Algorithm Based on Clustering and Decision Tree," 10th IEEE International Conference on Computer and Information Technology (CIT 2010), 2010.

[15] Timothy C. Havens, James C. Bezdek, "Fuzzy c-Means Algorithms for Very Large Data."