Full Paper

ISSN 2394-3777 (Print)
ISSN 2394-3785 (Online)
Available online at www.ijartet.com
International Journal of Advanced Research Trends in Engineering and Technology (IJARTET)
Vol. II, Special Issue XXIII, March 2015 in association with
FRANCIS XAVIER ENGINEERING COLLEGE, TIRUNELVELI
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
INTERNATIONAL CONFERENCE ON RECENT ADVANCES IN COMMUNICATION SYSTEMS AND TECHNOLOGIES (ICRACST'15)
25TH MARCH 2015
Multiscale Image Fusion Using the Curvelet Transform and Non-Orthogonal Filter Bank
Babisha B.R.1
1. PG Scholar, Department of ECE
St. Joseph's College of Engineering
Chennai-600119, India

R. Vijayarajan2
2. Associate Professor, Department of ECE
St. Joseph's College of Engineering
Chennai-600119, India
Abstract---Image fusion is a data fusion technology whose main research content is images. It refers to techniques that integrate multiple images of the same scene from several image sensors, or multiple images of the same scene captured at different times by a single sensor. Image fusion algorithms based on the Wavelet Transform, which developed rapidly, perform multi-resolution analysis. The Wavelet Transform has good time-frequency characteristics; nevertheless, its excellent one-dimensional properties cannot simply be extended to two or more dimensions, because it has limited directivity. This project introduces the Curvelet Transform and uses it to fuse images. We put forward an image fusion algorithm in which the low- and high-frequency coefficients are chosen according to their frequency sub-bands after the Curvelet Transform. In choosing the low-frequency coefficients, local area variance is used as the measuring criterion. In choosing the high-frequency coefficients, the window property and local characteristics of the pixels are analyzed. Finally, the proposed algorithm was applied to experiments on multi-focus images and on multimodal images, which are useful in the medical field. The experiments show that the method extracts useful information from the source images into the fused image, so that clear images are obtained.
Keywords---Fused image, Fusion rule, Curvelet transform, Non-orthogonal filter bank.

I. INTRODUCTION

With the advancement of sensing technology, images can be obtained in more and more ways, and several types of image fusion have developed: fusion of images from the same sensor, multi-spectral image fusion from a single sensor, fusion of images from sensors of different types, and fusion of image and non-image data. These techniques find application in fields such as remote sensing, medical imaging and military systems. Depending on the level at which it operates, fusion is classified as pixel-level, feature-level or decision-level fusion; each level uses different fusion algorithms and finds application in different fields. A few classical fusion algorithms are: averaging the pixel-by-pixel gray-level values of the source images, the Laplacian pyramid, the Contrast pyramid, the Ratio pyramid and the Discrete Wavelet Transform (DWT). However, averaging the pixel-by-pixel gray-level values of the source images has undesirable side effects such as contrast reduction. Wavelet-based image fusion provides high spectral quality in the fused images but lacks spatial information, which is as important a factor as the spectral information; in particular, retaining spatial information improves the efficiency of the image fusion application. Hence, it is necessary to develop advanced image fusion methods so that the fused images retain both the spectral resolution and the spatial resolution of the sources with minimum artefacts. The principle behind DWT-based fusion is to perform a decomposition of each source image, combine all these decompositions into a composite representation, and then apply the inverse transform to obtain the final fused image; this approach is found to be effective. However, the wavelet transform can only reflect "through-edge" characteristics and cannot express "along-edge" characteristics; at the same time, it cannot precisely show the edge direction because it is isotropic. To overcome these limitations of the wavelet transform, the Curvelet transform was proposed, which uses edges as basic elements and adapts well to image characteristics. Moreover, the Curvelet Transform has the advantages of good anisotropy and better directionality, and can therefore provide more information to image processing. The Curvelet transform can appropriately represent the edges and smooth areas of an image at the same precision as its inverse transform. Image fusion is a useful technique for merging similar-sensor and multi-sensor images to enhance the information content present in the images. Multimodal images play a major role in the medical field. Positron Emission Tomography (PET) and Magnetic Resonance (MR) are the most important modalities in medical imaging. In brain imaging, the MR image provides high-resolution anatomical information in gray intensity, while the PET image reveals biochemical changes in color, without anatomical information. These two types of images contain important complementary information to which doctors need to refer so that a brain disease can be diagnosed accurately and effectively.
II. RELATED WORKS

In [1], two medical images are fused based on the Wavelet Transform (WT) and the Curvelet Transform using different fusion techniques. The objective of fusing an MR image and a CT image of the same organ is to obtain a single image containing as much information as possible about that organ for diagnosis. In that paper, the input CT and MR images are registered, and the wavelet and curvelet transforms are applied to them.

In [2], a PET and MR brain image fusion method based on the wavelet transform is presented for low- and high-activity brain image regions, respectively. This method can generate very good fusion results by adjusting the anatomical structural information in the gray matter (GM) area, and then patching the spectral information in the white matter (WM) area after the wavelet decomposition and gray-level fusion. Normal axial, normal coronal, and Alzheimer's disease brain images were used as the three datasets for testing and comparison.

The paper in [3] addresses the task of enhancing the perception of a scene by combining information captured by different sensors, a task usually known as image fusion. Pyramid decomposition and the Dual-Tree Wavelet Transform have been thoroughly applied in image fusion as analysis and synthesis tools. Using a number of pixel-based and region-based fusion rules, one can combine the important features of the input images in the transform domain to compose an enhanced image. In that paper, the authors test the efficiency of a transform constructed using Independent Component Analysis (ICA) and Topographic Independent Component Analysis bases in image fusion. The bases are obtained by offline training with images of similar context to the observed scene. The images are fused in the transform domain using novel pixel-based or region-based rules. The proposed schemes offer improved performance compared to traditional wavelet approaches, at slightly increased computational complexity.

The paper in [4] presents the Laplacian Pyramid method. The combination of a Laplacian Pyramid and a Directional Filter Bank forms a double filter bank structure. In the Curvelet Transform, the multi-scale decomposition is done first, and one way to obtain a multi-scale decomposition is to use the Laplacian Pyramid. The LP decomposition at each level generates a downsampled low-pass version of the original, and the difference between the original and this prediction is calculated; the difference is the prediction error. The process can be iterated on the coarse (downsampled low-pass) signal.

III. PROPOSED MODEL

The curvelet transform, which is anisotropic in character, was developed from the wavelet transform to overcome its limitations: it removes unwanted noise from the image while preserving information along the edges.
Figure 1: Steps in the Curvelet Transform (subband decomposition, smooth partitioning, renormalization, ridgelet analysis)
The Curvelet transform involves the steps shown in Figure 1 and is used here to enhance the images. The inverse curvelet transform is also applied to obtain the reconstructed image: the reconstruction algorithm is defined procedurally, by inverting each step of the curvelet transform with some mathematical adjustments.
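To make the forward/inverse pairing concrete, the multiscale decomposition stage can be sketched in a few lines of Python. This is a Laplacian-pyramid-style sketch, not the actual curvelet filters: we use 2x2 block averaging as a crude low-pass stand-in, and all function names are ours. Because each band stores a prediction error, re-expanding and summing the bands inverts the analysis exactly.

```python
import numpy as np

def blur_downsample(x):
    # 2x2 block average: a crude low-pass filter plus decimation
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x, shape):
    # nearest-neighbour expansion back to the finer grid
    up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def lp_decompose(img, levels=3):
    """Laplacian-pyramid-style decomposition: a list of detail (prediction
    error) images plus the final coarse approximation."""
    details, cur = [], img.astype(np.float64)
    for _ in range(levels):
        coarse = blur_downsample(cur)
        details.append(cur - upsample(coarse, cur.shape))  # prediction error
        cur = coarse
    return details, cur

def lp_reconstruct(details, coarse):
    """Invert the decomposition by adding back each prediction error."""
    cur = coarse
    for d in reversed(details):
        cur = upsample(cur, d.shape) + d
    return cur
```

Since every detail band is exactly the residual of its level's prediction, reconstruction is lossless by construction, mirroring the "inverse the procedure" description above.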
A. Non-Orthogonal Filter Bank

When the filters on the analysis side and the synthesis side are the same, i.e., the same wavelet function with the same scaling parameter, the filter bank is called a non-orthogonal filter bank. The analysis and synthesis filters are shown in Figure 2.

Figure 2: Non-orthogonal filter bank
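One widely used redundant (non-orthogonal) scheme in this family is the undecimated "a trous" transform, in which the detail band at each level is the prediction error of a progressively dilated smoothing filter and synthesis is a plain sum of all bands. The following 1-D Python sketch is illustrative only (a B3-spline-like kernel, not the specific filters of Figure 2):

```python
import numpy as np

def atrous_decompose(x, levels=2):
    """Undecimated (a trous) decomposition of a 1-D signal.
    Returns `levels` detail bands plus the final smooth approximation;
    summing all bands reconstructs the input exactly."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    bands, cur = [], np.asarray(x, dtype=float)
    for level in range(levels):
        # dilate the kernel by inserting 2**level - 1 zeros between taps
        k = np.zeros((len(kernel) - 1) * 2 ** level + 1)
        k[:: 2 ** level] = kernel
        smooth = np.convolve(cur, k, mode="same")
        bands.append(cur - smooth)   # detail band = prediction error
        cur = smooth
    bands.append(cur)                # final smooth approximation
    return bands
```

The sum of the bands telescopes back to the input, which is why the trivial synthesis works for any choice of smoothing filter; this redundancy is what makes the scheme non-orthogonal.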
B. Block Diagram

The block diagram in Figure 3 shows that two images from different modalities are chosen and combined using the curvelet transform, which is among the transforms best suited to medical images. The decision-mapping fusion rule is used to distinguish the low- and high-frequency components and to fuse the coefficients. Finally, the inverse curvelet transform is applied to obtain the reconstructed image.
Figure 3: Block Diagram of Image Fusion (source image 1 (MRI) and source image 2 (PET) → curvelet transform → decision mapping rule → inverse curvelet transform → fused image)

C. Proposed Flow Diagram

The curvelet transform is a multiscale directional transform that allows a nearly optimal sparse representation of objects with edges. Most natural images/signals exhibit line-like edges, i.e., discontinuities across curves (so-called line or curve singularities). Comparing the curvelet system with conventional Fourier and wavelet analysis: the short-time Fourier transform uses a shape-fixed rectangle in the frequency domain, and conventional wavelets use shape-changing (dilated) but area-fixed windows. By contrast, the curvelet transform uses angled polar wedges, or angled trapezoidal windows, in the frequency domain to resolve directional features. The flow diagram in Figure 4 shows all the steps involved in fusing two medical images. First, the two input images of different modalities and sizes are resized and converted to double precision, since the method involves matrix manipulation. Then the curvelet transform is applied, and each image is decomposed into three levels. Finally, image fusion is carried out using the proposed fusion rule, and the images are combined.

Figure 4: Flow Diagram (read and resize input images 1 and 2 → double conversion → apply CT → display coefficients in levels 1, 2 and 3 → image fusion using low-subband and high-subband values → fused image)

IV. PROPOSED ALGORITHM AND FUSION RULE

• Read the input images (MRI & PET scanned).
• Resample and register both images.
• Apply the transform to these images, decomposing each into four sub-bands (LL, LH, HL and HH).
• Fuse the coefficients obtained from both images using the fusion rules.
• Reconstruct the final fused image by applying the inverse transform to the fused coefficients.
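The steps above can be sketched end-to-end in Python. As an illustrative simplification (ours, not the paper's MATLAB implementation), a one-level 2-D Haar transform stands in for the full curvelet machinery: it produces the four sub-bands LL, LH, HL and HH, the low-pass bands are averaged, the larger-magnitude detail coefficients are kept, and the inverse transform yields the fused image.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: returns the LL, LH, HL, HH sub-bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Invert haar_dwt2 exactly."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def fuse(img_a, img_b):
    """Average the low-pass band; keep the larger-magnitude detail coefficients."""
    A = haar_dwt2(img_a.astype(float))
    B = haar_dwt2(img_b.astype(float))
    LL = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(A[1:], B[1:])]
    return haar_idwt2(LL, *details)
```

The same skeleton applies unchanged if the Haar pair is replaced by a curvelet forward/inverse pair; only the sub-band structure becomes richer.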
A. Image Fusion Rule

In general, various fusion rules have been proposed for a wide variety of applications. A fusion rule has two parts: an activity-level measurement method and a coefficient-combining method. The "Decision Mapping Rule" is the fusion rule proposed here for MRI and PET image fusion. In this rule, no activity-level measurement is used, and the coefficient-combining method is simply substitution. In the decision mapping rule, the low-pass and high-pass sub-bands are used to extract the coefficients:

DecisionMapping = (abs(Ahigh) >= abs(Bhigh));
Fused{l}{d} = DecisionMapping .* Ahigh + (~DecisionMapping) .* Bhigh;
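In numpy terms, the decision-mapping substitution above amounts to an element-wise maximum-magnitude selection; a minimal sketch (array names are ours):

```python
import numpy as np

def decision_map_fuse(a_high, b_high):
    """Decision-mapping rule: at each position keep the coefficient with
    the larger magnitude (no activity-level measurement, pure substitution)."""
    decision = np.abs(a_high) >= np.abs(b_high)
    return np.where(decision, a_high, b_high)
```

Applied to `[3, -1]` and `[-2, 4]`, the rule keeps 3 (larger magnitude than -2) and 4 (larger magnitude than -1).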
V. SOFTWARE & ITS DESCRIPTION

A. MATLAB Tool
The Image Processing Toolbox provides a comprehensive set of standard algorithms, applications and functions for image processing, analysis, visualization, and algorithm development. We can perform image analysis, segmentation, enhancement, denoising, geometric transformations, and registration on various images. Toolbox functions support multicore processors, GPUs, and C-code generation. The Image Processing Toolbox supports a diverse set of image types, gigapixel resolution, embedded ICC profiles, and tomographic images. Visualization functions and applications let us explore images and videos, examine a region of pixels, adjust color and contrast, create contours or histograms, and manipulate regions of interest (ROIs). The toolbox supports workflows for displaying, processing and navigating large images.
VI. RESULTS & DISCUSSIONS

Image fusion using the Curvelet Transform was applied to many MRI and PET scan images, and the results are shown in the following figures. Each figure shows the original images and the fused image obtained using the curvelet transform, enhanced using the non-orthogonal filter bank. The parameters used to identify the best method were then calculated. The proposed method is compared with the conventional discrete wavelet transform, and the results show that the proposed method is very efficient in combining medical images for diagnosis.

Figure 5: Set 1 input images (a, b) and fused image (c)

Figure 6: Set 2 input images (a, b) and fused image (c)

Figure 7: Set 3 input images (a, b) and fused image (c)
The input images applied to the curvelet transform are the MRI and PET images shown in Figures 5-7 (a, b). These two images are fused after undergoing decomposition and the fusion rule. The final fused image, shown in Figures 5-7 (c), can be used by the radiologist for further processing.
A. Evaluation Parameters

1) MSE
The mean squared error (MSE) of an estimator measures the average of the squares of the "errors", that is, the difference between the estimator and what is estimated.

Figure 8: Performance Evaluation Chart (PSNR, MSE and mutual information for Sets 1, 2 and 3)
MSE = (1 / (m*n)) * sum over i, j of [I(i, j) - K(i, j)]^2
where m → number of rows
n → number of columns
I, K → reference and fused images
2) PSNR
PSNR is the ratio between the maximum possible power
of a signal and the power of corrupting noise that affects the
fidelity of its representation. PSNR is an approximation to
human perception of reconstruction quality. Although a higher
PSNR generally indicates that the reconstruction is of higher
quality, in some cases it may not. PSNR is most easily defined
via the mean squared error (MSE).
PSNR(dB) = 10 * log10((255^2)/MSE)
where 10 → scaling factor, 255 → maximum pixel value

3) Mutual Information
Mutual information measures the information that X and Y share: it measures how much knowing one of these variables reduces uncertainty about the other. For example, if X and Y are independent, then knowing X does not give any information about Y and vice versa, so their mutual information is zero. At the other extreme, if X is a deterministic function of Y and Y is a deterministic function of X, then all information conveyed by X is shared with Y: knowing X determines the value of Y and vice versa. As a result, in this case the mutual information is the same as the uncertainty contained in Y (or X) alone, namely the entropy of Y (or X). Moreover, this mutual information is the same as the entropy of X and as the entropy of Y. (A very special case of this is when X and Y are the same random variable.)

Table 1: Performance Comparison of Fused Images

                         PSNR (dB)    MSE        Mutual Information
Set 1                    28.0530       5.4710    16.8294
Set 2                    44.8203       2.8749    21.6712
Set 3                    15.0371       9.0159    11.3243
DWT (existing method)     1.3250      88.2200     4.5143
From Table 1, it is clear that the proposed algorithm produces a better-quality fused image than the existing algorithm, as evidenced by the higher PSNR values obtained. According to the results, the proposed algorithm is well suited for medical images, producing a PSNR as high as 44.8203 dB.
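The three evaluation metrics can be computed with numpy alone. The following is a minimal sketch; the histogram-based mutual-information estimator is one common choice, not necessarily the implementation used to produce Table 1:

```python
import numpy as np

def mse(ref, fused):
    """Mean squared error between two equally sized grayscale images."""
    ref = ref.astype(np.float64)
    fused = fused.astype(np.float64)
    return float(np.mean((ref - fused) ** 2))

def psnr(ref, fused, peak=255.0):
    """PSNR in dB; higher values indicate a better reconstruction."""
    e = mse(ref, fused)
    return np.inf if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def mutual_information(a, b, bins=64):
    """Histogram estimate of the mutual information (in bits) of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint distribution
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```

Note that the mutual information of an image with itself reduces to its entropy, consistent with the discussion above, while independent images give a value near zero.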
VII. CONCLUSION

The purpose of image fusion is to reduce uncertainty and minimize redundancy in the output while maximizing the information relevant to a particular application. In this project, a method is proposed for medical image fusion using the curvelet transform, which combines multi-resolution locality with the ability to detect curves and edges. This is found useful in extracting the information present in the medical images, which can in turn be used to diagnose disease.
REFERENCES

[1] A. Ellmauthaler, C. L. Pagliari, and E. A. B. da Silva, (2013) "Multiscale Image Fusion Using the Undecimated Wavelet Transform With Spectral Factorization and Nonorthogonal Filter Banks," IEEE Transactions on Image Processing, Vol. 22, No. 3.
[2] A. Cardinali and G. P. Nason, (2005) "A statistical multiscale approach to image segmentation and fusion," in Proc. Int. Conf. Information Fusion, Philadelphia, PA, USA, pp. 475-482.
[3] C. H. Chan, C. I. Chen, P. W. Huang, P. L. Lin and T. T. Tsai, (2011) "Brain medical image fusion based on IHS and Log-Gabor with suitable decomposition scale and orientation for different regions," IADIS Multi Conference on Computer Science and Information Systems 2011 CGVCVIP, Rome, Italy.
[4] C. Chang, L. Liu, N. Lin, H. Zang, (2007) "A novel wavelet medical image fusion method," International Conference on Multimedia and Ubiquitous Engineering, pp. 548-553.
[5] Y. Chibani and A. Houacine, (2003) "Redundant versus orthogonal wavelet decomposition for multisensor image fusion," Pattern Recognition, vol. 36, pp. 879-887.
[6] D. L. Cruz, G. Pajares, (2004) "A wavelet-based images fusion tutorial," Pattern Recognition, vol. 37, pp. 1855-1872.
[7] S. Daneshvar and H. Ghassemian, (2010) "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, pp. 114-123.
[8] A. A. Goshtasby and S. Nikolov, (2007) "Image fusion: Advances in the state of the art," Inf. Fusion, vol. 8, no. 2, pp. 114-118.
[9] Y. Guan, H. Sun, and A. Wang, (2006) "The application of wavelet transform to multimodality medical image fusion," Proceedings of IEEE International Conference on Networking, Sensing and Control, pp. 270-274.
[10] P. S. Huang, H. Shyu, T. Tu, S. Su, (2001) "A new look at IHS-like image fusion methods," Information Fusion, vol. 2, no. 3, pp. 177-186.

All Rights Reserved © 2015 IJARTET