Fingerprint Compression Based on Sparse Representation

IJSART - Volume 1
Issue 4 –APRIL 2015
ISSN [ONLINE]: 2395-1052
Gavit Jayesh1, Jadhav Umesh2, Sawant Tushar3, Latkar Mahesh4
1, 2, 3, 4 Department of Computer Engineering
K. K. Wagh Institute of Engineering, Education & Research
Abstract- A new fingerprint compression algorithm based on sparse representation is presented. Obtaining an over-complete dictionary from a set of fingerprint patches allows us to represent them as a sparse linear combination of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a new given fingerprint image, we represent its patches according to the dictionary by computing l0-minimization and then quantize and encode the representation. In this paper, we consider the effect of various factors on the compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust to extracted minutiae.
Keywords- Fingerprint, compression, sparse representation, JPEG 2000, JPEG, WSQ, PSNR.
I. INTRODUCTION
Recognition of persons by means of biometric characteristics is an important technology in society, since biometric identifiers cannot be shared and they intrinsically represent the individual's bodily identity. Among the many biometric recognition technologies, fingerprint recognition is very popular for personal identification due to its uniqueness, universality, collectability and invariance. Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. In 1995, the FBI fingerprint card archive contained over 200 million items, and the archive size was increasing at the rate of 30,000 to 50,000 new cards per day. Such large volumes of data consume a large amount of memory. Fingerprint image compression is a key technique to solve this problem. In general, compression technologies can be classed into lossless and lossy.
Lossless compression allows the exact original images to be reconstructed from the compressed data. Lossless compression technologies are used in cases where it is essential that the original and the decompressed data be identical. Avoiding distortion limits their compression efficiency. In image compression, where slight distortion is acceptable, lossless compression technologies are often applied to the output coefficients of lossy compression. Lossy compression technologies usually transform an image into another domain, then quantize and encode its coefficients. During the last three decades, transform-based image compression technologies have been extensively researched and several standards have appeared. The two most common choices of transform are the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The DCT-based encoder can be thought of as compression of a stream of 8 × 8 small blocks of images. This transform has been adopted in JPEG. The JPEG compression scheme has many advantages, such as simplicity, universality and availability. However, it performs badly at low bit-rates, mainly because of the underlying block-based DCT scheme. For this reason, as early as 1995, the JPEG committee began to develop a new wavelet-based compression standard for still images, namely JPEG 2000. DWT-based algorithms include three steps: a DWT computation of the normalized image, quantization of the DWT coefficients, and lossless coding of the quantized coefficients. The details can be found in the references. Compared with JPEG, JPEG 2000 provides many features that support scalable and interactive access to large-sized images. It also allows extraction of different resolutions, pixel fidelities, regions of interest, components, and so on. There are several other DWT-based algorithms, such as the Set Partitioning in Hierarchical Trees (SPIHT) algorithm.
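As an illustration of the block transform underlying JPEG, the following sketch applies an orthonormal 8 × 8 DCT to a single image block and inverts it. This is a simplified stand-in for the full JPEG pipeline, which also quantizes and entropy-codes the coefficients; the random block is purely illustrative.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C @ block @ C.T gives the 2-D DCT of an n x n block.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

C = dct_matrix(8)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
coeffs = C @ block @ C.T          # forward 2-D DCT of one 8x8 block
restored = C.T @ coeffs @ C       # inverse transform (C is orthonormal)
assert np.allclose(restored, block)
```

Because the transform is orthonormal it is perfectly invertible; lossy behavior in JPEG comes only from the subsequent quantization step.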
The above algorithms are for general image compression. Targeted at fingerprint images, there are special compression algorithms. The most common is Wavelet Scalar Quantization (WSQ), which became the FBI standard for the compression of 500 dpi fingerprint images. Inspired by the WSQ algorithm, a few wavelet-packet-based fingerprint compression schemes have been developed. In addition to WSQ, there are other algorithms for fingerprint compression, such as the Contourlet Transform (CT). These algorithms have a common shortcoming, namely, the lack of the ability to learn: fingerprint images that cannot be compressed well now will not be compressed well later either. In this paper, a novel approach based on sparse representation is given. The proposed method gains this ability by updating the dictionary. The specific procedure is as follows: construct a base matrix whose columns represent features of the fingerprint images, referring to the matrix as the dictionary, whose columns are called atoms; for a given whole fingerprint, divide it into small blocks called patches whose number of pixels equals the dimension of the atoms; use the method of sparse representation to obtain the coefficients; then quantize the coefficients; finally, encode the coefficients and other related information using lossless coding methods. In many instances, the evaluation of the compression performance of algorithms is restricted to Peak Signal-to-Noise Ratio (PSNR) computation; the effects on actual fingerprint matching or recognition are not investigated. In this paper, we take this into consideration. In most Automatic Fingerprint Identification Systems (AFIS), the main features used to match two fingerprint images are minutiae (ridge endings and bifurcations). Consequently, the difference of the minutiae between pre- and post-compression is considered in this paper.
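The PSNR measure mentioned above can be computed as follows. This is a minimal sketch; the peak value of 255 assumes 8-bit grayscale images.

```python
import numpy as np

def psnr(original, decompressed, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB between an image and its
    # decompressed version; higher means less distortion.
    mse = np.mean((original.astype(float) - decompressed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((64, 64), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] += 5                  # introduce a tiny error in one pixel
print(psnr(a, b))             # a large value: the images are nearly identical
```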
II. RELATED WORK
The field of sparse representation is relatively young. Early traces of its core ideas appeared in a pioneering work in which the authors introduced the concept of dictionaries and put forward some of the core ideas that later became fundamental in the field, such as a greedy pursuit technique. Subsequently, S. S. Chen, D. Donoho and M. Saunders introduced a different pursuit technique that employed the l1-norm for sparsity. It is remarkable that the proper solution can often be obtained by solving a convex programming task. Since these two seminal works, researchers have contributed a great deal to the field, and the activity is spread over various disciplines. There are already many successful applications in different areas, such as face recognition, image denoising, object detection and super-resolution image reconstruction. In one paper, the authors proposed a general classification algorithm for object recognition based on a sparse representation computed by l1-minimization. On one hand, the algorithm based on sparse representation performs better than algorithms such as nearest neighbor, nearest subspace and linear SVM; on the other hand, the new framework provided new insights into face recognition: with sparsity properly harnessed, the choice of features becomes less important than the number of features. Indeed, this phenomenon is common in the field of sparse representation; it does not only exist in face recognition but also appears in other situations. In a paper based on sparse and redundant representations over an over-complete dictionary, the authors designed an algorithm that could remove zero-mean white homogeneous Gaussian additive noise from a given image. From this work we can see that the content of the dictionary is of importance, in two respects: on one hand, the dictionary should properly reflect the content of the images; on the other hand, the dictionary should be large enough that the given image can be represented sparsely. These two points are absolutely essential for methods based on sparse representation. Sparse representation already has several applications in image compression. In one paper, the experiments show that the proposed algorithm performs well; however, its compression efficiency is consistently lower than JPEG 2000's. If more general natural images are tested, it becomes even more evident that the compression efficiency is lower than that of state-of-the-art compression technologies. In this paper, we show that fingerprint images can be compressed better under an over-complete dictionary if it is properly constructed. In another paper, the authors proposed a fingerprint compression algorithm based on Nonnegative Matrix Factorization (NMF). Although NMF has some successful applications, it also has shortcomings. In some cases, non-negativity is not necessary. For instance, in image compression, what matters is how to reduce the difference between pre- and post-compression rather than nonnegativity. Moreover, we think that methods based on sparse representation do not work very well in the general image compression field, for the following reasons: the content of general images is so rich that there is no proper dictionary under which a given image can be represented sparsely; even if there were one, the size of the dictionary might be too large to be computed effectively. For instance, deformation, rotation, translation and noise can all make the dictionary become too large. Accordingly, sparse representation should be employed in special image compression fields in which the above shortcomings are absent. The field of fingerprint image compression is one of them.
III. THE MODEL AND ALGORITHMS OF SPARSE REPRESENTATION
A. The Model of Sparse Representation
Given A = [a1, a2, ..., aN] ∈ R^(M×N), any new sample y ∈ R^(M×1) is assumed to be representable as a linear combination of a few columns from the dictionary A, as indicated in formula (1). This is the only prior knowledge about the dictionary in our algorithm. Later, we will see that this property can be guaranteed by constructing the dictionary properly.

y = Ax .......... (1)

where y ∈ R^(M×1), A ∈ R^(M×N) and x = [x1, x2, ..., xN]^T ∈ R^(N×1). Clearly, the system y = Ax is underdetermined when M < N. Therefore, its solution is not unique. According to the assumption, the representation is sparse. A proper solution can be obtained by solving the following optimization problem:

(l0): min ||x||_0  s.t.  Ax = y .......... (2)

The solution of the optimization problem is expected to be very sparse, namely ||x||_0 << N. The notation ||x||_0 counts the nonzero entries in x. Strictly speaking it is not a norm; however, without ambiguity, we still call it the l0-norm.

In fact, the compression of y can be achieved by compressing x. First, record the locations of its non-zero entries and their magnitudes. Second, quantize and encode the records. This is what we will do. Next, techniques for solving the optimization problem are given.
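The compression idea behind formulas (1) and (2) can be shown with a toy example: when x is sparse, storing only the locations and magnitudes of its nonzero entries suffices to reconstruct y. The random dictionary A below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 64, 256                        # M < N: underdetermined system, as in (1)
A = rng.standard_normal((M, N))       # stand-in dictionary (atoms as columns)

x = np.zeros(N)
x[[3, 100, 200]] = [1.5, -2.0, 0.7]   # a sparse code: ||x||_0 = 3 << N
y = A @ x                             # the sample is a combination of 3 atoms

# Compressing y amounts to storing x's nonzero locations and magnitudes:
locations = np.flatnonzero(x)
magnitudes = x[locations]
print(locations.tolist(), magnitudes.tolist())  # [3, 100, 200] [1.5, -2.0, 0.7]
```

Reconstructing y then only requires the few stored atoms: `A[:, locations] @ magnitudes`.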
Fig. 1. The behavior of ||x||_p for various values of p. As p tends to zero, ||x||_p approaches the l0-norm.
Algorithm: Fingerprint compression based on sparse representation

i. Construct a base matrix whose columns represent features of the fingerprint images, referring to the matrix as the dictionary, whose columns are called atoms.
ii. For a given whole fingerprint, divide it into small blocks called patches; for all patches, the number of pixels equals the dimension of the atoms.
iii. Use the method of sparse representation to obtain the coefficients, then quantize the coefficients.
iv. Encode the coefficients and other related information using lossless coding methods.

B. Sparse Solution by Greedy Algorithms

Researchers' first thought is to solve the optimization problem l0 directly. Matching Pursuit (MP), on account of its simplicity and efficiency, is often used to approximately solve the l0 problem. Many variants of the algorithm are available, offering improvements in accuracy and/or complexity. Although the theoretical analysis of these algorithms is difficult, experiments show that they behave well when the number of non-zero entries is low.

C. Sparse Solution by l1-Minimization

It is a natural idea that the optimization problem (2) can be approximated by solving the following optimization problem:

(lp): min ||x||_p  s.t.  Ax = y .......... (3)

where p > 0. Obviously, the smaller p is, the closer the solutions of the two optimization problems l0 and lp are, as shown in Fig. 1. This is because the magnitude of x is not important when p is very small; what matters is whether x is equal to 0 or not. Therefore, p is theoretically chosen as small as possible. However, the optimization problem (3) is not convex if 0 < p < 1. This makes p = 1 the most ideal choice, namely, the following problem:

(l1): min ||x||_1  s.t.  Ax = y .......... (4)
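A minimal Matching Pursuit, the greedy method of Section III-B, might look as follows. This is a sketch rather than the paper's exact implementation; the stopping rule, sparsity budget and toy dictionary are illustrative assumptions.

```python
import numpy as np

def matching_pursuit(A, y, n_nonzero=10, tol=1e-6):
    # Greedily approximate min ||x||_0 s.t. Ax = y: at each step, pick the
    # atom most correlated with the current residual and update its coefficient.
    x = np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    norms = np.linalg.norm(A, axis=0)
    for _ in range(n_nonzero):
        corr = A.T @ residual / norms          # normalized correlations
        k = int(np.argmax(np.abs(corr)))
        coef = corr[k] / norms[k]              # <a_k, r> / ||a_k||^2
        x[k] += coef
        residual -= coef * A[:, k]
        if np.linalg.norm(residual) < tol:     # stop once y is (nearly) represented
            break
    return x

# Toy check on an orthonormal dictionary, where MP recovers the code exactly.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))
x_true = np.zeros(32)
x_true[[4, 17, 25]] = [2.0, -1.0, 0.5]
x_hat = matching_pursuit(Q, Q @ x_true, n_nonzero=5)
assert np.allclose(x_hat, x_true)
```

On over-complete dictionaries recovery is only approximate, matching the text's observation that MP behaves well when the number of non-zero entries is low.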
IV. OUR WORK
We construct a dictionary that is carefully maintained, and the fingerprint images are divided into patches.
A. Construct the Dictionary

The first method: choose fingerprint patches from the training samples at random and arrange these patches as columns of the dictionary matrix. The second method: in general, patches from the foreground of a fingerprint have an orientation while patches from the background do not, as shown in Fig. 2. This fact can be used to construct the dictionary. Divide the interval [0°, 180°] into equal-size intervals. Each interval is represented by an orientation (the middle value of each interval is chosen). Choose the same number of patches for each interval and arrange them into the dictionary.
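The first construction method can be sketched as follows. The patch size and atom count below are illustrative choices, not the paper's exact parameters; each sampled patch has its mean removed, matching the preprocessing used later during compression.

```python
import numpy as np

def build_dictionary(training_images, patch=8, n_atoms=50, seed=0):
    # First method: sample patches at random from the training fingerprints,
    # subtract each patch's mean, and stack them as columns of the dictionary.
    rng = np.random.default_rng(seed)
    atoms = []
    for _ in range(n_atoms):
        img = training_images[rng.integers(len(training_images))]
        r = rng.integers(img.shape[0] - patch + 1)
        c = rng.integers(img.shape[1] - patch + 1)
        p = img[r:r + patch, c:c + patch].astype(float).ravel()
        atoms.append(p - p.mean())          # zero-mean patch as one atom
    return np.stack(atoms, axis=1)          # shape: (patch*patch, n_atoms)

# Toy run on a single random "fingerprint" image.
rng = np.random.default_rng(3)
imgs = [rng.integers(0, 256, (64, 64))]
D = build_dictionary(imgs, patch=8, n_atoms=50)
assert D.shape == (64, 50)
```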
Fig. 2. A fingerprint divided into patches and stored in sparse format.
B. Compression of a Given Fingerprint

Given a new fingerprint, slice it into square patches of the same size as the training patches. The size of the patches has a direct impact on the compression efficiency. The algorithm becomes more efficient as the size increases; however, the computational complexity and the size of the dictionary also increase rapidly, so a proper size should be chosen. The size of the patches is chosen according to the size of the database. In addition, to make the patches fit the dictionary better, the mean of each patch is calculated and subtracted from the patch. After that, compute the sparse representation for each patch by solving the l0 problem. Coefficients whose absolute values are less than a given threshold are treated as zero. For each patch, four kinds of information need to be recorded: the mean value, the number of atoms used, the coefficients and their locations. The tests show that many image patches require few coefficients. Consequently, compared with the use of a fixed number of coefficients, the method reduces the coding complexity and improves the compression ratio.
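The per-patch pipeline described above can be sketched as follows. The patch size, sparsity budget and threshold are illustrative assumptions, and the small greedy loop is a matching-pursuit-style stand-in for the paper's l0 solver.

```python
import numpy as np

def compress_fingerprint(image, D, patch=8, n_nonzero=4, threshold=0.1):
    # For each patch: subtract the mean, sparse-code against dictionary D,
    # zero out tiny coefficients, and record (mean, locations, coefficients).
    H = (image.shape[0] // patch) * patch
    W = (image.shape[1] // patch) * patch
    records = []
    for r in range(0, H, patch):
        for c in range(0, W, patch):
            p = image[r:r + patch, c:c + patch].astype(float).ravel()
            mean = p.mean()
            residual = p - mean
            x = np.zeros(D.shape[1])
            for _ in range(n_nonzero):                  # greedy sparse coding
                corr = D.T @ residual
                k = int(np.argmax(np.abs(corr)))
                coef = corr[k] / (D[:, k] @ D[:, k])
                x[k] += coef
                residual = residual - coef * D[:, k]
            x[np.abs(x) < threshold] = 0.0              # treat small coefficients as zero
            locs = np.flatnonzero(x)
            records.append((mean, locs, x[locs]))
    return records

# Toy run: a random normalized dictionary and a 16x16 "fingerprint".
rng = np.random.default_rng(4)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
img = rng.integers(0, 256, (16, 16))
records = compress_fingerprint(img, D)
assert len(records) == 4                                # four 8x8 patches
```

Patches that happen to need fewer than the budgeted atoms end up with shorter records, which is exactly why a variable number of coefficients improves the compression ratio.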
V. CONCLUSION

A new compression algorithm adapted to fingerprint images has been presented. Despite the simplicity of our proposed algorithms, they compare favorably with existing, more sophisticated algorithms, especially at high compression ratios. Due to the block-by-block processing mechanism, however, the algorithm has higher complexity. The experiments show that the block effect of our algorithm is less serious than that of JPEG. We consider the effect of three different dictionaries on fingerprint compression. The experiments show that the dictionary obtained by the K-SVD algorithm works best. Moreover, the larger the training set is, the better the compression result is. One of the main difficulties in developing compression algorithms for fingerprints resides in the need to preserve the minutiae which are used in identification. The experiments demonstrate that our algorithm can retain most of the minutiae robustly during compression and reconstruction. There are many interesting questions that future work should consider. First, the features and the methods for constructing dictionaries should be thoroughly considered. Second, the training samples should include fingerprints of different quality ("good", "bad", "ugly"). Third, the optimization algorithms for solving the sparse representation need to be investigated. Fourth, the code should be streamlined to reduce the complexity of our proposed method. Finally, other applications based on sparse representation for fingerprint images should be explored.
C. Coding and Quantization

Entropy coding of the atom number of each patch, the mean value of each patch, the coefficients and the indexes is carried out by static arithmetic coders. The atom number of each patch is coded separately, as is the mean value of each patch. The quantization of the coefficients is performed using the Lloyd algorithm, learnt off-line from the coefficients obtained from the training set by the MP algorithm over the dictionary. The first coefficient of each block is quantized with a larger number of bits than the other coefficients and entropy-coded using a separate arithmetic coder. The model for the indexes is estimated from the source statistics obtained off-line from the training set. The first index and the other indexes are coded by the same arithmetic encoder. In the following experiments, the first coefficient is quantized with 6 bits and the other coefficients are quantized with 4 bits.
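The bit allocation above (6 bits for the first coefficient, 4 for the rest) can be illustrated with a scalar quantizer. The uniform quantizer below stands in for the Lloyd-trained one used in the paper, and the coefficient range [-4, 4] is an assumed value for illustration.

```python
import numpy as np

def quantize(values, bits, lo, hi):
    # Uniform scalar quantizer: map each value to one of 2**bits
    # reconstruction levels on [lo, hi]; return code indexes and
    # the dequantized (mid-point) values.
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(((values - lo) / step).astype(int), 0, levels - 1)
    return idx, lo + (idx + 0.5) * step

coeffs = np.array([3.7, -0.2, 1.1, 0.05])
# First coefficient gets 6 bits, the rest 4 bits, as in the experiments.
i0, q0 = quantize(coeffs[:1], bits=6, lo=-4.0, hi=4.0)
ir, qr = quantize(coeffs[1:], bits=4, lo=-4.0, hi=4.0)
```

With 6 bits the quantization error is at most half of an 0.125 step, while the 4-bit coefficients tolerate up to four times that error, which is why the larger, perceptually dominant first coefficient gets the finer grid.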
ACKNOWLEDGMENT

The authors would like to thank the Associate Editor and the anonymous reviewers for their valuable comments and suggestions that helped greatly improve the quality of this paper.
REFERENCES

[1] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, 2nd ed. London, U.K.: Springer-Verlag, 2009.
[2] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete cosine transform," IEEE Trans. Comput., vol. C-23, no. 1, pp. 90–93, Jan. 1974.
[3] C. S. Burrus, R. A. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer. Upper Saddle River, NJ, USA: Prentice-Hall, 1998.
[4] W. Pennebaker and J. Mitchell, JPEG—Still Image Compression Standard. New York, NY, USA: Van Nostrand Reinhold, 1993.
[5] M. W. Marcellin, M. J. Gormish, A. Bilgin, and M. P. Boliek, "An overview of JPEG-2000," in Proc. IEEE Data Compress. Conf., Mar. 2000, pp. 523–541.
[6] A. Skodras, C. Christopoulos, and T. Ebrahimi, "The JPEG 2000 still image compression standard," IEEE Signal Process. Mag., vol. 18, no. 5, pp. 36–58, Sep. 2001.
[7] T. Hopper, C. Brislawn, and J. Bradley, "WSQ gray-scale fingerprint image compression specification," Federal Bureau of Investigation, Criminal Justice Information Services, Washington, DC, USA, Tech. Rep. IAFIS-IC-0110-V2, Feb. 1993.
[8] C. M. Brislawn, J. N. Bradley, R. J. Onyshczak, and T. Hopper, "FBI compression standard for digitized fingerprint images," Proc. SPIE, vol. 2847, pp. 344–355, Aug. 1996.
[9] A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. Circuits Syst. Video Technol., vol. 6, no. 3, pp. 243–250, Jun. 1996.
[10] R. Sudhakar, R. Karthiga, and S. Jayaraman, "Fingerprint compression using contourlet transform with modified SPIHT algorithm," IJECE Iranian J. Electr. Comput. Eng., vol. 5, no. 1, pp. 3–10, 2005.
[11] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397–3415, Dec. 1993.
[12] S. S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Rev., vol. 43, no. 1, pp. 129–159, 2001.
[13] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210–227, Feb. 2009.
[14] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736–3745, Dec. 2006.
[15] S. Agarwal and D. Roth, "Learning a sparse representation for object detection," in Proc. Eur. Conf. Comput. Vis., 2002, pp. 113–127.
[16] J. Yang, J. Wright, T. Huang, and Y. Ma, "Image super-resolution as sparse representation of raw image patches," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1–8.
[17] K. Skretting and K. Engan, "Image compression using learned dictionaries by RLS-DLA and compared with K-SVD," in Proc. IEEE ICASSP, May 2011, pp. 1517–1520.
[18] O. Bryt and M. Elad, "Compression of facial images using the K-SVD algorithm," J. Vis. Commun. Image Represent., vol. 19, no. 4, pp. 270–282, 2008.
[19] Y. Y. Zhou, T. D. Guo, and M. Wu, "Fingerprint image compression algorithm based on matrix optimization," in Proc. 6th Int. Conf. Digital Content, Multimedia Technol. Appl., 2010, pp. 14–19.
[20] A. J. Ferreira and M. A. T. Figueiredo, "Class-adapted image compression using independent component analysis," in Proc. Int. Conf. Image Process., vol. 1, 2003, pp. 625–628.
[21] A. J. Ferreira and M. A. T. Figueiredo, "On the use of independent component analysis for image compression," Signal Process., Image Commun., vol. 21, no. 5, pp. 378–389, 2006.
[22] P. Paatero and U. Tapper, "Positive matrix factorization: A nonnegative factor model with optimal utilization of error estimates of data values," Environmetrics, vol. 5, no. 1, pp. 111–126, 1994.
[23] D. D. Lee and H. S. Seung, "Learning the parts of objects by non-negative matrix factorization," Nature, vol. 401, pp. 788–791, Oct. 1999.