Full Paper

ISSN 2394-3777 (Print)
ISSN 2394-3785 (Online)
Available online at www.ijartet.com
International Journal of Advanced Research Trends in Engineering and Technology (IJARTET)
Vol. II, Special Issue XXIII, March 2015 in association with
FRANCIS XAVIER ENGINEERING COLLEGE, TIRUNELVELI
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
INTERNATIONAL CONFERENCE ON RECENT ADVANCES IN COMMUNICATION SYSTEMS AND
TECHNOLOGIES
(ICRACST’15)
25TH MARCH 2015
Video De-interlacing with Scene Change Detection
Based on 3D Wavelet Transform
M. Nancy Regina1, S. Caroline2
PG Scholar, ECE, St. Xavier’s Catholic College of Engineering, Nagercoil, India1
Assistant Professor, ECE, St. Xavier’s College of Engineering, Nagercoil, India2
Abstract: Video de-interlacing is a key task in digital video processing. De-interlacing is the process of
converting source material that contains alternating half-pictures (fields) into full pictures for display on a
progressive screen. This paper proposes a novel scene change detection method based on the 3D wavelet transform.
Scene changes include cuts, dissolves and fades, whose frames have particular temporal and spatial layouts. The
dissolve and the fade have strong temporal and spatial correlation; on the contrary, the correlation of the cut is
weak. The 3D wavelet transform can effectively express the correlation of several frames, since the low-frequency
and high-frequency component coefficients have statistical regularities which can effectively identify the shot
transition. Three features are computed to describe the correlation of the shot transitions, which are input to
support vector machines for scene change detection. Experimental results show that the method is effective for the
gradual shot transition.
Keywords: Video de-interlacing, scene change detection, wavelet transform, support vector machine (SVM).
I. INTRODUCTION
The amount of digital video has been increasing
rapidly, and thus an effective method to analyse video is
necessary. Detection of scene changes plays an important role
in video processing, with applications ranging from
watermarking, video indexing and video summarization to
object tracking and video content management. Scene
change detection is an operation that divides video data into
physical shots. Over the last three decades, scene change
detection has been widely studied, and many detection
techniques have been proposed in the literature. Scene change
detection is used for video analysis tasks such as indexing,
browsing and retrieval. Scene changes can be categorized
into two kinds: abrupt shot transitions (cuts) and
gradual shot transitions (fades and dissolves). Abrupt scene
changes result from editing "cuts", and detecting them is
called cut detection, performed either by color histogram
comparison on the uncompressed video or by DCT coefficient
comparison. Gradual scene changes result from chromatic edits,
spatial edits and combined edits, and include special effects
such as zoom, camera pan, dissolve and fade in/out [7].
Detecting them is a considerably challenging problem because of
the lack of drastic changes between two consecutive frames,
which can be confused with local object and global camera motion.
Xinying Wang et al. [2] suggested a framework for temporal
segmentation based on the twice difference of luminance
histograms, which is extended to detect complex scene-change
transitions such as fades and dissolves. W. A. C.
Fernando [3] proposed an algorithm for sudden scene change
detection in MPEG-2 compressed video that can detect abrupt
scene changes irrespective of the nature of the sequences. K.
Tse et al. [4] present a scene change detection algorithm
based on pixel differences in the compressed (MPEG-2)
domain, which has the potential to detect gradual scene changes.
All Rights Reserved © 2015 IJARTET
Anastasios Dimou et al. [5] proposed a scene
change detection method for H.264 that describes the correlation
between local statistical characteristics, scene duration and
scene changes, and uses only previous frames for the detection.
Seong-Whan Lee has presented a scene change
detection algorithm using direct edge information extraction
from MPEG video data [6].
In this paper, a scene change detection algorithm
is proposed based on the 3D wavelet transform. Compared
with the previous works, the proposed 3D wavelet
transform algorithm effectively utilizes the correlation
of several successive frames. For a cut, where the
dissimilarity of two neighbouring frames is large, the
correlation of the frames is weak. For the fade and the
dissolve, as the two neighbouring frames differ in
pixel values but are similar in the edges and the texture,
the correlation of the spatial layout is very strong. The
statistical regularities of the coefficients can
effectively express the scene change. Three features are
defined using the 3D wavelet transform coefficients, and
the Support Vector Machine (SVM) is then applied for scene
change pattern classification. The algorithm robustly
tolerates global camera motion and object motion,
and makes the scene change detection more accurate.
The rest of this paper is organized as follows.
Section 2 gives a detailed analysis of the proposed features.
Section 3 provides an overview of the algorithm. Sections 4
and 5 present the experimental results and the conclusion.
II. PROPOSED METHOD
Scene changes happen quite often in film
broadcasting, and when de-interlacing is applied they tend to
degrade the quality of the result with jagged, blurred and
other artifacts. The first stage of de-interlacing is scene
change detection, which ensures that the inter-field
information is used correctly. If a scene change is detected,
the inter-field information is invalid, and all interpolated
pixels are obtained by intra-field de-interlacing interpolation.
The two basic de-interlacing methods
are commonly referred to as bob and weave. Removal of motion
artifacts is performed through the line repetition method, which is
one of the de-interlacing methods. For edge-adaptive de-interlacing,
the edges of the video file are detected using fuzzy logic; FIS
rules are applied to detect the edges of the interlaced video file.
Line Repetition
The line repetition method is a spatial filtering method
and one of the de-interlacing methods used to
remove motion artifacts. It is based
on repeating the odd (or even) rows of the image or video
frame to fill the blank even (or odd) rows, respectively.
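The line repetition (bob) scheme described above can be sketched in a few lines. The following NumPy version is illustrative; the function name and array layout are assumptions, not from the paper:

```python
import numpy as np

def line_repetition_deinterlace(field, top_field=True):
    """De-interlace a single field by line repetition (bob).

    The field holds only the odd (or even) lines of a frame; each
    available line is repeated to fill the missing neighbouring line.
    """
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=field.dtype)
    if top_field:
        frame[0::2] = field  # keep the captured lines in place
        frame[1::2] = field  # repeat each line to fill the gap below
    else:
        frame[1::2] = field
        frame[0::2] = field
    return frame

# A 2x3 field expands to a 4x3 progressive frame.
field = np.array([[1, 2, 3],
                  [4, 5, 6]])
print(line_repetition_deinterlace(field))
```

Weave, by contrast, would interleave two consecutive fields into one frame, which is why it needs the scene change test before trusting inter-field data.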
B. Feature Extraction
The wavelet transform is a desirable tool to decompose
a signal into subbands; it can represent the low-frequency
and high-frequency information of an image accurately and
quickly. In this section, the features are defined using the 3D
wavelet transform coefficients.
For the 3D wavelet transform, the 2D wavelet
transform is first applied to each of the video frames with
3 decomposition levels. A 1D wavelet transform is then
applied to the pixels at the same position through the resulting
successive coefficient frames. The Haar wavelet is used with
3 decomposition levels for the temporal transform. The 3D
wavelet decomposition is shown in Fig.2, which shows the
LLL, LLH, LH and H subbands in the temporal direction.
ck,l(x, y) is the wavelet transform coefficient at the pixel (x,
y) in the kth temporal and lth spatial subband. Thus, a series
of coefficients is obtained from one 3D wavelet transform.
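A separable 3D transform of this kind can be sketched with the PyWavelets library: a 2D spatial DWT of each frame, then a 1D Haar DWT along time at every coefficient position. The helper below is an assumption for illustration; the paper's exact subband bookkeeping may differ:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_3d(frames, spatial_levels=3, temporal_levels=3):
    """Separable 3D DWT sketch: a 2D spatial DWT of each frame,
    followed by a 1D Haar DWT along time at every pixel position."""
    spatial = []
    for f in frames:
        # 3-level 2D Haar transform of one frame, packed into a single
        # array so the temporal transform becomes a plain axis operation.
        coeffs = pywt.wavedec2(f, 'haar', level=spatial_levels)
        arr, _ = pywt.coeffs_to_array(coeffs)
        spatial.append(arr)
    stack = np.stack(spatial)  # shape (time, H, W)
    # 3-level 1D Haar transform through the successive coefficient
    # frames; the pieces correspond to the LLL, LLH, LH and H
    # temporal subbands of Fig.2.
    return pywt.wavedec(stack, 'haar', level=temporal_levels, axis=0)

# An 8-frame window of 16x16 frames yields temporal subbands of
# 1, 1, 2 and 4 coefficient frames (LLL, LLH, LH, H).
frames = [np.random.rand(16, 16) for _ in range(8)]
print([b.shape for b in wavelet_3d(frames)])
```

An 8-frame window matches the i-th to (i + 7)-th frame range used by the features later in the paper.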
A. De-interlacing
De-interlacing is the process of taking a stream of
interlaced frames and converting it into a stream of
progressive frames [1].
Now, the three features are defined using the low-frequency
and high-frequency component coefficients in Fig.1. The
sliding window moves by m frames at a time.

a) High-frequency component coefficient difference

The difference of the high-frequency component
coefficients, which is smaller in a gradual shot transition,
is calculated to identify the gradual shot transition.
Gradual shot transition frames also differ more from each
other than static frames do, so the difference between the
first temporal subband and the second temporal subband in
the gradual shot transitions is smaller than in the static
frames in Fig.2. The difference is described as VH(i), which
is given by

VH(i) = Σx Σy Σl=1..10 (c5,l(x, y) − c6,l(x, y))²
      + Σx Σy Σl=1..10 (c6,l(x, y) − c7,l(x, y))²
      + Σx Σy Σl=1..10 (c7,l(x, y) − c8,l(x, y))²

This is computed from the H subband in Fig.2.

b) High-frequency component coefficient energy

When a gradual shot transition occurs, the edges
and textures in the successive frames are similar. However,
large differences appear when cuts or motions (such as local
object motions and global camera motions) occur.
In the 2D wavelet transform, the high-frequency
component coefficients reflect the edges and textures of a
frame. In the 3D wavelet transform, the high-frequency
component coefficients reflect the dissimilarity of the edges
and textures of the successive frames. Moreover, a small
difference in the edges and textures of the frames can
contribute a large coefficient. Therefore, the high-frequency
component coefficient energy is defined as

EH(i) = Σx Σy Σk=5..8 Σl=8..10 |ck,l(x, y)|

This is also computed from the H subband in Fig.2.

c) Low-frequency component coefficient difference

The static frames, which show the same image in the
successive frames, have high-frequency components similar
to those of a gradual shot transition. However, different
behaviours exist in the low-frequency component coefficients:
the difference between the lowest-frequency components of
frame 1 and frame 2 can distinguish the gradual shot
transitions from the static frames. The feature is described
as DL(i), which is given by

DL(i) = (Σx Σy c1,1(x, y) − Σx Σy c2,1(x, y)) / Σx Σy c1,1(x, y)

This is computed from the LLL subband and the LLH subband in Fig.2.

III. SCENE CHANGE DETECTION

In this section, the scene change detection
algorithm is described, which employs the features defined in
Section 2. The features VH(i), EH(i) and DL(i) describe the
correlation among the successive frames from the ith frame to
the (i + 7)th frame.

A. Framework of Scene Change Detection

The framework for detecting the scene change is
described as shown in Fig.4. The input is a video sequence.
First of all, the sliding window is moved and the 3D wavelet
transform is performed in the window. The features VH(i),
EH(i) and DL(i) are computed from the 3D wavelet transform
coefficients. Then the three features are given as input to
the SVM, which is used to detect the gradual shot transition.
Finally, the feature DL(i) is used to distinguish the fades
from the dissolves.
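As a rough illustration, the three features VH(i), EH(i) and DL(i) can be computed from a table of subband coefficients. The indexing convention c[k][l] and the function name below are assumptions for the sketch, using the paper's 1-based indices (k = 1..8 temporal, l = 1..10 spatial):

```python
import numpy as np

def features(c):
    """Sketch of the three features; c[k][l] is assumed to hold the 2D
    coefficient array of the k-th temporal / l-th spatial subband."""
    # VH: squared differences between neighbouring H temporal subbands
    # (k = 5..8) over all 10 spatial subbands; small for gradual transitions.
    VH = sum(np.sum((c[k][l] - c[k + 1][l]) ** 2)
             for k in (5, 6, 7) for l in range(1, 11))
    # EH: absolute-coefficient energy of the finest spatial subbands
    # (l = 8..10) in the H temporal subbands (k = 5..8).
    EH = sum(np.sum(np.abs(c[k][l]))
             for k in range(5, 9) for l in range(8, 11))
    # DL: normalised difference of the lowest-frequency coefficients
    # of the first (LLL) and second (LLH) temporal subbands.
    DL = (np.sum(c[1][1]) - np.sum(c[2][1])) / np.sum(c[1][1])
    return VH, EH, DL
```

The coefficient container c can be whatever mapping the 3D transform produces; a nested dict keyed by (k, l) is enough for the sketch.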
Fig.4 The framework for scene change detection

B. Detect the Gradual Shot Transition

Besides the cuts, the video frames can be classified
as the gradual shot transitions, the static frames and the
motion frames (local object motions and global camera
motions). The features VH(i) and EH(i) can be utilized to
distinguish the motion frames from the static frames and the
gradual shot transitions. The feature DL(i) can distinguish the
static frames from the gradual shot transitions.
Now, we employ the Support Vector Machine (SVM)
for the gradual shot transition recognition. In our
application, we employ C-Support Vector Classification
(C-SVC). The vectors [VH(i), EH(i), DL(i)] are trained and
classified by the SVM into the gradual shot transitions, the
static frames and the motion frames.

IV. EXPERIMENTAL RESULT

The proposed algorithm has been implemented and
applied to a variety of video sequences. The motion-adaptive
and edge-adaptive methods are de-interlaced through MATLAB
simulation, and the performance of the de-interlaced video is
evaluated through the PSNR and MSE values. The figure shows
the results of video de-interlacing for the motion-adaptive and
edge-adaptive output images. The MSE and the PSNR are
calculated using the following equations:

MSE = (1/MN) Σm=0..M−1 Σn=0..N−1 [A(m, n) − A′(m, n)]²

PSNR = 10 log10(MAX² / MSE)

where A is the original frame, A′ is the de-interlaced frame,
M and N are the frame dimensions, and MAX is the maximum
pixel value.
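The two metrics can be computed directly from a pair of frames; a small NumPy sketch (the helper name is assumed):

```python
import numpy as np

def mse_psnr(a, b, max_val=255.0):
    """MSE and PSNR between an original frame a and a de-interlaced frame b."""
    diff = a.astype(np.float64) - b.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean squared pixel error
    # PSNR in dB; identical frames would give infinite PSNR.
    psnr = 10 * np.log10(max_val ** 2 / mse) if mse > 0 else float('inf')
    return mse, psnr

a = np.array([[10, 20], [30, 40]], dtype=np.uint8)
b = np.array([[12, 20], [30, 44]], dtype=np.uint8)
print(mse_psnr(a, b))  # MSE 5.0, PSNR ≈ 41.14 dB
```

Casting to float64 before subtracting avoids the wrap-around that unsigned 8-bit arithmetic would otherwise produce.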
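The C-SVC classification stage can be sketched with scikit-learn, whose SVC class implements C-Support Vector Classification. The data below is a random placeholder standing in for the [VH(i), EH(i), DL(i)] vectors; the label encoding is an assumption:

```python
# Placeholder labels: 0 = static frames, 1 = gradual transition,
# 2 = motion frames (encoding assumed, not from the paper).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((60, 3))          # placeholder [VH, EH, DL] vectors
y_train = rng.integers(0, 3, size=60)  # placeholder class labels

clf = SVC(C=1.0, kernel='rbf')         # C-SVC classifier
clf.fit(X_train, y_train)
pred = clf.predict(rng.random((5, 3))) # classes of five new windows
print(pred)
```

In practice the training vectors would come from windows with known shot-transition labels, and the features would need scaling before the RBF kernel is applied.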
The recall is the percentage of true detections
(performed by the detection algorithm) with respect to the
overall events (scene changes) in the video streams.
Similarly, the precision is the percentage of correct
detections with respect to the overall declared events. Three
video sequences are used for evaluating the performance of
the proposed detection algorithm. Approximately 1000 frames,
including dissolve and non-dissolve frames, are used for SVM
training, and Tfd is set to 0.93. The results of the proposed
algorithm are listed in TABLE II. The recall and the precision
are defined as

Recall = Nc / (Nc + Nm)
Precision = Nc / (Nc + Nf)

where Nc is the number of correct detections, Nm is the
number of missed detections, and Nf is the number of false
alarms.

TABLE II
Performance of the proposed algorithm

Fade
video    | Nc | Nm | Nf | Recall | Precision
culture  | 10 | 0  | 1  | 1      | 0.90
donqui   | 12 | 0  | 2  | 1      | 0.85
eyeexam  | 47 | 4  | 3  | 0.94   | 0.94

Dissolve
video    | Nc | Nm | Nf | Recall | Precision
culture  | 28 | 12 | 16 | 0.7    | 0.93
donqui   | 10 | 5  | 9  | 0.66   | 0.83
eyeexam  | 56 | 31 | 11 | 0.64   | 0.83
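The two definitions above reduce to a two-line computation; the helper name and argument names below are assumptions for illustration:

```python
def recall_precision(n_correct, n_missed, n_false):
    """Recall = Nc / (Nc + Nm); Precision = Nc / (Nc + Nf)."""
    recall = n_correct / (n_correct + n_missed)
    precision = n_correct / (n_correct + n_false)
    return recall, precision

# The 'culture' fade row of TABLE II: Nc = 10, Nm = 0, Nf = 1.
r, p = recall_precision(10, 0, 1)
print(r, p)  # recall 1.0, precision 10/11 ≈ 0.9
```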
REFERENCES
[1] G. de Haan and E. B. Bellers, "De-interlacing: an overview," Proceedings
of the IEEE, pp. 1839-1857, 1998.
[2] X. Wang et al., "Scene Abrupt Change Detection," in Canadian Conference
on Electrical & Computer Engineering, 2000, vol. 2, pp. 880-883.
[3] W. A. C. Fernando, C. N. Canagarajah, and D. R. Bull, "Scene change
detection algorithms for content based video indexing and retrieval,"
Electronics & Communication Engineering Journal, vol. 13, no. 3,
pp. 117-126, 2001.
[4] K. Tse, J. Wei, and S. Panchanathan, "A Scene Change Detection
Algorithm for MPEG Compressed Video Sequences," in Canadian Conference
on Electrical & Computer Engineering, 1995, vol. 2, pp. 827-830.
[5] A. Dimou et al., "Scene Change Detection for H.264 Using Dynamic
Threshold Techniques," in Proceedings of the 5th EURASIP Conference on
Speech and Image Processing, Multimedia Communications and Services, 2005.
[6] S.-W. Lee, Y.-M. Kim, and S. W. Choi, "Fast Scene Change Detection
using Direct Feature Extraction from MPEG Compressed Videos," IEEE
Transactions on Multimedia, vol. 2, no. 4, pp. 240-254, Dec. 2000.
[7] C.-L. Huang and B.-Y. Liao, "A robust scene-change detection method
for video segmentation," IEEE Transactions on Circuits and Systems for
Video Technology, vol. 11, no. 2, pp. 1281-1288, Dec. 2000.
V. CONCLUSION

The de-interlacing of video material converted from film
can be perfect, provided the scene changes are detected
correctly. The occurrences of scene changes seriously affect
the quality of de-interlacing if they are not processed
properly. The proposed scene change detection scheme is based
on the 3D wavelet transform. Three features are computed to
describe the correlation of the shot transitions, which are
input to support vector machines for scene change detection.
Experimental results show that the method is effective for
the gradual shot transition. Based on its performance in the
experiments, it is able to identify more types of shot
transitions in video sequences.

ACKNOWLEDGMENT

The author would like to thank Mrs. S. Caroline for her
useful suggestions and comments.

BIOGRAPHY

M. Nancy Regina is pursuing her M.E. in Applied Electronics
at St. Xavier's Catholic College of Engineering. Her area of
interest is image processing.

S. Caroline is an Assistant Professor in the Department of
ECE, St. Xavier's Catholic College of Engineering.