Computer Vision Based Real-time Fire Detection Method
Journal of Information & Computational Science 12:2 (2015) 533–545
Available at http://www.joics.com
January 20, 2015
Sumei He, Xiaoning Yang, Sitong Zeng, Jinhua Ye, Haibin Wu (corresponding author: Haibin Wu, [email protected])
School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
Project supported by the National Natural Science Foundation of China (No. 51175084) and A Type Project of Educational Commission of Fujian China (No. JA13023).
Abstract
The video based detection of fire hazards is one of the significant developments of recent years. However, conventional vision based flame detection algorithms suffer from low detection rates and high false-alarm rates. This paper proposes a novel real-time video processing method that detects flames by combining a flame motion detection technique with color clues and flame flicker analysis to reach the final decision. The proposed fire detection framework builds an efficient background model by optimizing the selective background update to extract the fire-like moving regions in the video frames. Furthermore, a YCbCr color space based analysis technique is applied to improve the fire-pixel classification. Finally, a flame flicker identification algorithm based on statistical frequencies is used to confirm whether a candidate region is a fire region. The experimental results show that the proposed algorithm achieves a high detection rate and a low false-alarm rate, and that it is accurate, robust and effective compared to existing methods.
Keywords: Fire Detection; Motion Object Detection; Color Analysis; Flickering Analysis
1 Introduction
The development of society and the economy has led to the construction of various kinds of large buildings in cities everywhere. Conventional fire detection equipment, fire hydrants and automatic sprinkler devices have proved insufficient to meet the demands of these large buildings. In recent years, a range of autonomous fire-fighting robots has been developed; these are capable of detecting, locating, and extinguishing fires automatically, and they meet the fire-fighting needs of large buildings to a certain extent.
These fire-fighting robots are mainly equipped with traditional infrared and ultraviolet detection sensors. Such sensors have limited detection capabilities, which leads to issues such as false alerts, inaccurate fire positioning, delayed water spraying, and inefficient fire-fighting.
The fire detection based on video images can overcome the shortcomings of the traditional fire
detection equipment [1]. It uses a video camera as the fire detection sensor to detect fire in real time, analyze the fire behavior, and perform three-dimensional localization of the fire, which gives it wide application prospects.
In recent years, many researchers have devoted efforts to video fire detection methods, and these efforts have played a significant role in the development of many useful video fire detection algorithms [2]. These algorithms are based both on static features, such as fire luminance, fire color and fire shape, and on dynamic features, such as flame flicker and fire edge
wobbling. These features have been exploited by various researchers in recent studies. The methods in [3, 4, 5] present fire color detection algorithms based on different color spaces. Shen [6] presented the idea of using the flicker frequency as the criterion. Toreyin [7] made use of temporal and spatial wavelets to analyze the time-varying characteristics of the fire edge color at high frequencies and the spatial variation of the color; he also used a Markov model to describe the fire flicker. However, these techniques may fail in complex scenes with variable lighting conditions or with background colors that match the fire color.
This article proposes a comprehensive fire detection method based on motion features, color features, and flame flicker features. The experimental results indicate that the proposed method offers high efficiency and robustness compared to the existing vision based fire detection methods.
2 The Fire Detection Based on the Improved Selective Background Update Model
Generally, flames present a kinetic characteristic of transformation and development. For this reason, the fire-like moving regions can be extracted by detecting the moving targets, which eliminates static disturbances in the background of the monitoring range. The existing moving object detection methods are mainly divided into three categories: the optical flow method, the frame differential method, and the background subtraction method. Each method has its own advantages and limitations. The most significant feature of the optical flow method [8] is its applicability to applications with moving video cameras; however, it is sensitive to illumination changes, occlusion and noise, and it demands highly efficient computational hardware to meet the real-time requirements of practical applications. The frame differential method [9, 10] is easy to implement; however, it cannot retrieve the complete information of the moving targets. The background subtraction method [11, 12] is suitable for extracting detailed information of the moving targets; however, it is not suitable in situations where the background of the monitored scene changes with time, because the difference between the actual background and the initialized background grows larger over time, which makes it unsuitable for monitoring complex conditions for long periods. The comparison of the frame differential method and the background subtraction method in extracting the flame motion is shown in Fig. 1.
As shown in Fig. 1, neither the frame differential method nor the background subtraction method is ideal for the specific moving target of a burning flame. The frame differential method cannot extract the complete flame information, so it is of little use for further detection. Although the latter does not lose any information, it may consider a non-fire object to be a flame.
For better robustness of the moving object detection, the key factors in the background subtraction method are to establish a robust background model [13] and to update the background continuously.
Fig. 1: The comparison of the flame motion detection methods. (a) 115th original image; (b) the result of (a) by the frame differential method; (c) the result of (a) by the background subtraction method; (d) 149th original image; (e) the result of (d) by the frame differential method; (f) the result of (d) by the background subtraction method.
This enables the background model to better approximate the real environment
background at each frame, and then extracts the moving targets by calculating the difference
between the current image and the background image [14]. The key to background modeling is the background update algorithm. Many researchers have presented methods to dynamically update the background, for example first-order Kalman filters, the W4 method, the statistical average method, the Gaussian model [15], and the widely used Gaussian mixture model.
The Gaussian mixture background model, proposed by Stauffer et al., has an excellent capability to adapt to the environment. However, it needs to establish several Gaussian models for each pixel and update them continuously, which makes it highly complex.
Considering the requirements of real-time applications and the accuracy needed in flame detection, a motion detection algorithm should be capable of processing complex monitoring scenes and extracting the complete flame information [7]. This article presents an improved algorithm, called the selective background update model, that is more suitable for complex scenes.
The proposed method updates only selected pixels of the background rather than every pixel of the background in the monitoring video. The main idea is based on the fact that the current frame image consists of a moving foreground image F_t(x, y) and the background image B_t(x, y). The foreground image F_t(x, y) is extracted from the image using a movement threshold MT_t(x, y). The pixels that belong to the foreground image are not updated, while the background B_{t-1}(x, y) of the remaining pixels from the previous frame is updated into the current background B_t(x, y) at a certain rate. The equations for extracting the moving objects and updating the background are given in Eq. (1), Eq. (2) and Eq. (3):
D_t(x, y) = |C_t(x, y) − B_t(x, y)|    (1)

M_t(x, y) = { 1, if D_t(x, y) ≥ MT_t(x, y); 0, if D_t(x, y) < MT_t(x, y) }    (2)

B_{t+1}(x, y) = { α × B_t(x, y) + (1 − α) × C_t(x, y), if M_t(x, y) = 0; B_t(x, y), if M_t(x, y) = 1 }    (3)
In Eq. (1)-(3), the parameter α is the update coefficient that controls the update rate: the smaller α is, the faster the background is updated, and vice versa. The value of α lies between 0 and 1. The algorithm is presented in [7]. It has been found empirically that setting α to 0.85 produces better results.
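For illustration, a minimal NumPy sketch of the selective update of Eqs. (1)-(3) is given below. The movement threshold value and all function and variable names are our own assumptions; only α = 0.85 follows the text.

```python
import numpy as np

ALPHA = 0.85          # update coefficient alpha from the text
MOVE_THRESH = 30.0    # movement threshold MT, an assumed illustrative value

def selective_update(current, background, alpha=ALPHA, mt=MOVE_THRESH):
    """One step of the selective background update (Eqs. (1)-(3)).

    current, background: float32 grayscale images of identical shape.
    Returns the motion mask M_t and the updated background B_{t+1}.
    """
    # Eq. (1): absolute difference between the current frame and the background
    diff = np.abs(current - background)
    # Eq. (2): pixels whose difference exceeds the movement threshold are foreground
    motion = (diff >= mt).astype(np.uint8)
    # Eq. (3): update only background pixels; foreground pixels are left untouched
    updated = np.where(motion == 0,
                       alpha * background + (1.0 - alpha) * current,
                       background)
    return motion, updated
```

In a video loop, the background would be initialised with the first frame and the returned background fed back into the next call of `selective_update`.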
At the same time, it can be concluded from theory and experiments that the background update model in [7] cannot adapt to situations where permanent movements appear in the monitoring range. A permanent movement occurs when objects in the video do not return to their initial position after being displaced, or when moving objects stop permanently after entering the monitored scene.
Considering this case, the background update model needs to be optimized to improve the robustness of the method. The key feature of permanent movements is that the corresponding pixels do not change for a long time after they transform from background to moving foreground, which distinguishes them from general moving objects. Hence, a counter Counter(x, y) is maintained for each pixel of the image; if a pixel is detected as a moving foreground pixel for a long time (for example, 100 consecutive frames), it is considered a permanently moving background pixel rather than part of a moving target, and it is updated as a background pixel. The flowchart is shown in Fig. 2, and a sketch of this counter logic is given after it.
Fig. 2: The flowchart of the detection to the permanent moving object
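A possible vectorised rendering of the counter logic of Fig. 2 is sketched below; the threshold of 100 consecutive frames is the example value mentioned above, and the identifiers are ours.

```python
import numpy as np

COUNTER_T = 100  # frames a pixel may stay foreground before it is absorbed (example value from the text)

def absorb_permanent_motion(motion, counter, background, current, counter_t=COUNTER_T):
    """Fold permanently moving pixels back into the background model.

    motion:     current foreground mask M_t (1 = foreground).
    counter:    per-pixel count of consecutive foreground detections.
    background: current background image B_t.
    current:    current frame C_t.
    Returns the updated counter and background.
    """
    # increment the counter where the pixel is still foreground, reset it elsewhere
    counter = np.where(motion == 1, counter + 1, 0)
    # pixels that stayed foreground for counter_t frames are treated as background again
    permanent = counter >= counter_t
    background = np.where(permanent, current, background)
    counter[permanent] = 0
    return counter, background
```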
In Fig. 2, X(x, y) is the input pixel and Counter(x, y) is its counter. Counter(x, y) counts the number of consecutive frames in which the pixel X(x, y) is detected as a moving foreground pixel; Counter_T is a global threshold that can be determined from the dynamics of the environment: if the expected targets move fast, Counter_T should be set to a larger value, and vice versa. To verify the effectiveness of the algorithm, an experiment comparing the improved selective background update model and the widely used Gaussian mixture model is conducted. The results are shown in Fig. 3. They indicate that the improved selective background update model adapts better to complex environments than the Gaussian mixture model: the background model more closely approximates the real background of the monitored environment, and the complete information of the moving target can be extracted using the proposed method.
Fig. 3: The comparison of the background models. (a) 5th original image; (b) the Gaussian mixture model of (a); (c) the improved selective background update model of (a); (d) the target extraction result of (a) by the Gaussian mixture model; (e) the target extraction result of (a) by the improved selective background update model; (f) 466th original image, into which a white box has been thrown; (g) the Gaussian mixture model of (f); (h) the improved selective background update model of (f); (i) the target extraction result of (f) by the Gaussian mixture model; (j) the target extraction result of (f) by the improved selective background update model; (k) 533rd original image, in which the fire has just been removed; (l) the Gaussian mixture model of (k); (m) the improved selective background update model of (k); (n) the target extraction result of (k) by the Gaussian mixture model; (o) the target extraction result of (k) by the improved selective background update model; (p) 722nd original image; (q) the Gaussian mixture model of (p); (r) the improved selective background update model of (p); (s) the target extraction result of (p) by the Gaussian mixture model; (t) the target extraction result of (p) by the improved selective background update model.
The Gaussian mixture model is not well suited to this application because of the large variations in ambient light, and it produces more noise in the resulting foreground image; the selective background update model produces less noise, and the information it extracts about the moving target is complete. In terms of computational efficiency, the Gaussian mixture model needs to establish multiple Gaussian models for each pixel and update them constantly, while the selective background update model only needs to update the selected pixels according to the motion detection results. This selective update of the background pixels requires less computational effort than the existing methods.
3 Fire Detection Based on Color
Fire color is quite different from that of the surroundings, which makes color play a significant role in fire detection, and most fire detection systems therefore include a color detection model. Studies such as [3, 4, 16] analyzed and extracted the fire color in the RGB color space, while the method in [5] extracted the fire color in the HSI color space.
The RGB color space represents colors by mixing different proportions of the three RGB primary colors, which makes it difficult to express each color with a precise value and therefore difficult to establish a quantitative analysis of the color. Furthermore, the luminance information cannot be fully exploited in the RGB space. The method presented in [5] extracts the fire pixels using the HSI color space; however, it fails to improve the true-positive and false-negative rates because it relies on a static threshold.
Considering the above mentioned aspects, this article introduces a fire color detection method
based on the YCbCr color space. The constraint rules on the fire pixels are given in Eq. (4) [17]:

rule 1: Y(x, y) > Cb(x, y)
rule 2: Cr(x, y) > Cb(x, y)
rule 3: Y(x, y) > Y_mean
rule 4: Cb(x, y) < Cb_mean
rule 5: Cr(x, y) > Cr_mean
rule 6: |Cb(x, y) − Cr(x, y)| ≥ τ    (4)
The parameter τ in the equation is a predefined threshold; Y_mean, Cb_mean and Cr_mean represent the averages of the luminance, the blue-difference and the red-difference components respectively. They can be calculated as follows:

Y_mean = (1/K) · Σ_{i=1}^{K} Y(x_i, y_i)    (5)

Cb_mean = (1/K) · Σ_{i=1}^{K} Cb(x_i, y_i)    (6)

Cr_mean = (1/K) · Σ_{i=1}^{K} Cr(x_i, y_i)    (7)
The parameter K denotes the total number of pixels in the image.
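For illustration, a compact sketch of the color test of Eqs. (4)-(7) is given below. OpenCV's YCrCb conversion is used for convenience, the value of τ is only an assumed example, and the function name is ours.

```python
import cv2
import numpy as np

TAU = 40.0  # threshold tau in rule 6, an assumed illustrative value

def fire_color_mask(bgr_image, tau=TAU):
    """Return a boolean mask of candidate fire pixels based on Eqs. (4)-(7)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]

    # Eqs. (5)-(7): image-wide means of the three components (K = number of pixels)
    y_mean, cb_mean, cr_mean = y.mean(), cb.mean(), cr.mean()

    # Eq. (4): all six rules must hold simultaneously
    mask = (
        (y > cb) &                        # rule 1
        (cr > cb) &                       # rule 2
        (y > y_mean) &                    # rule 3
        (cb < cb_mean) &                  # rule 4
        (cr > cr_mean) &                  # rule 5
        (np.abs(cb - cr) >= tau)          # rule 6
    )
    return mask
```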
To demonstrate the efficacy of the proposed fire color detection method, the flames are extracted from images of different scenes using the fire detection algorithms presented in [3, 4, 5] and the proposed method. The results of all four methods are presented, without any subsequent processing, in Fig. 4.
Fig. 4: The comparison of the fire detection performance of the different methods used in this study. (a), (f), (k), (p) Original images; (b), (g), (l), (q) results of the method in [3]; (c), (h), (m), (r) results of the method in [4]; (d), (i), (n), (s) results of the method in [5]; (e), (j), (o), (t) results of the proposed method.
As shown in Fig. 4, the method in [3] detects the whole flame; however, it classifies many non-flame pixels as flame pixels. Although the method in [5] performs better than [3], it also suffers from false detections. The method in [4] detects the flame targets in some cases but fails in others. The proposed method handles various situations in which the other algorithms fail.
4 The Detection and Analysis of Visual Fire Information Based on the Flame Flicker Feature
The flame flicker feature is one of the significant dynamic features and an important basis for detecting flames. A number of researchers have presented flame detection methods that use flicker features. For example, Jinhua Zhang et al. [18] detected flames by analyzing the Fourier spectrum characteristics of sharp changes in the suspected flame region. Toreyin [19] obtained the flicker frequencies using the wavelet transform to detect flames. Feiniu Yuan et al. [20] presented a method based on measuring the ripple of the flame outline to extract the flame flicker feature.
The methods discussed in the preceding paragraph rely on different flicker-frequency techniques, and all of them require transformations from the spatial domain to the frequency domain. These transformations involve a significant amount of computation, which limits their use in real-time systems. In order to ensure the real-time character of the algorithm while keeping the advantage of the flame flicker feature [21], this article proposes a method that detects the flame flicker directly in the spatial domain.
Firstly, a counter Timer(x, y) is established for each pixel to count the changes of the pixel value X(x, y). If the pixel X(x, y) switches from a background pixel to a flame pixel, or vice versa, between two consecutive frames, the corresponding counter Timer(x, y, t) is incremented by one; if the pixel X(x, y) remains unchanged between two consecutive frames, the counter Timer(x, y, t) is left unchanged. Secondly, the criterion for deciding whether the pixel has changed between a background pixel and a flame pixel is based on the luminance value: if the difference of the luminance values of the pixel X(x, y) at time t and time (t − 1) is larger than a predefined threshold ΔT_Y, the pixel is considered changed. This is expressed in Eq. (8) and Eq. (9):

Timer(x, y, t) = { Timer(x, y, t − 1) + 1, if |ΔY(x, y, t)| ≥ ΔT_Y; Timer(x, y, t − 1), if |ΔY(x, y, t)| < ΔT_Y }    (8)

ΔY(x, y, t) = Y(x, y, t) − Y(x, y, t − 1)    (9)
In Eq. (8) and Eq. (9), Timer(x, y, t) and Timer(x, y, t − 1) represent the counter values of the pixel X(x, y) at time t and time (t − 1) respectively; Y(x, y, t) and Y(x, y, t − 1) represent the luminance values of the pixel X(x, y) at time t and time (t − 1) respectively. The luminance value Y(x, y, t) is the Y component of the YCbCr color model, which is also used in the color detection. Because of the flame flicker, the counter value Timer(x, y, t) of a flame pixel grows beyond a threshold within a given time span. Therefore, the constraint condition of the flicker is expressed as:

Timer(x, y, t) − Timer(x, y, t − n) ≥ T_f    (10)

The factor n is the given sequence length or time span, and the step length between two adjacent frames is 1; T_f is a predefined flicker threshold. The sequence is updated according to Fig. 5. For this experiment, n is set to 25 and the update length of the sequence is set to 1.
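One possible spatial-domain realisation of the per-pixel flicker counter of Eqs. (8)-(10) is sketched below; n = 25 and T_f = 8 follow the text, while ΔT_Y and the class name are assumptions of ours.

```python
import numpy as np
from collections import deque

DELTA_TY = 20.0   # luminance-change threshold Delta T_Y, an assumed illustrative value
N_FRAMES = 25     # sequence length n from the text
T_F = 8           # flicker threshold T_f from the text

class FlickerCounter:
    """Per-pixel flicker counter based on luminance changes (Eqs. (8)-(10))."""

    def __init__(self, shape):
        self.timer = np.zeros(shape, dtype=np.int32)   # Timer(x, y, t)
        self.history = deque(maxlen=N_FRAMES + 1)      # Timer values over the last n frames
        self.prev_y = None

    def update(self, y_channel):
        if self.prev_y is not None:
            # Eq. (9): luminance change between consecutive frames
            delta = np.abs(y_channel - self.prev_y)
            # Eq. (8): count a flicker event where the change exceeds Delta T_Y
            self.timer += (delta >= DELTA_TY).astype(np.int32)
        self.prev_y = y_channel.astype(np.float32)
        self.history.append(self.timer.copy())

    def flicker_mask(self):
        """Eq. (10): pixels whose counter grew by at least T_f over the last n frames."""
        if len(self.history) <= N_FRAMES:
            return np.zeros_like(self.timer, dtype=bool)
        return (self.history[-1] - self.history[0]) >= T_F
```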
Fig. 5: The sequence update of the flicker feature
After several trials, it is observed that setting the flicker threshold T_f to 8 produces better results. To some extent, the size of the real flame and the distance between the video camera and the flame affect the number of pixels that satisfy Eq. (10). The binary image of the suspected flame regions is obtained from the source image after the motion detection and the color detection. The flame flicker index is given as:

R_i = NUM_i^f / NUM_i^cm ≥ λ    (11)

Here NUM_i^cm represents the number of pixels of the white object in region i, NUM_i^f represents the number of those pixels that satisfy Eq. (10), and the parameter λ is the test threshold.
Fig. 6: The test videos and the analysis of the flicker feature. (a) The fire video; (b) the flicker feature of (a); (c) the flash light video; (d) the flicker feature of (c); (e) the fluttering red flag video; (f) the flicker feature of (e); (g) the illumination video; (h) the flicker feature of (g).
After the motion detection and the color detection, the candidate regions that do not satisfy Eq. (11) are deemed non-flame regions. To verify that the algorithm rejects most of the non-flame regions, an experiment is conducted with selected videos; the results are shown in Fig. 6. They show that analyzing the flicker feature distinguishes the flame regions from the non-flame regions accurately.
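For each connected candidate region i, the test of Eq. (11) can be evaluated as in the sketch below; the value of λ and the use of OpenCV's connected-component labelling are our own assumptions.

```python
import cv2
import numpy as np

LAMBDA = 0.1  # test threshold lambda, an assumed illustrative value

def flame_regions(candidate_mask, flicker_mask, lam=LAMBDA):
    """Keep candidate regions whose flicker index R_i = NUM_f / NUM_cm >= lambda (Eq. (11))."""
    # label the white objects of the binary candidate image
    num_labels, labels = cv2.connectedComponents(candidate_mask.astype(np.uint8))
    kept = np.zeros_like(candidate_mask, dtype=bool)
    for i in range(1, num_labels):             # label 0 is the background
        region = labels == i
        num_cm = region.sum()                  # NUM_i^cm: pixels of region i
        num_f = (region & flicker_mask).sum()  # NUM_i^f: pixels of region i satisfying Eq. (10)
        if num_cm > 0 and num_f / num_cm >= lam:
            kept |= region                     # region i is confirmed as flame
    return kept
```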
5 Flame Detection Experiment
To test the efficacy of the proposed flame detection algorithm, the flame motion feature, the color features and the flicker features are combined. The flow chart of the algorithm is shown in Fig. 7. Nine video segments recorded in different typical scenes are used as test cases (since there is no standard benchmark video set for this task, part of the experimental videos in this article come from the Machine Vision Laboratory of Bilkent University [22], part were recorded by the authors, and the rest come from the Internet). Fig. 8 shows the scenes in the test video database.
Fig. 7: The algorithm flow chart. The video is imported; the improved selective background update model is built and the moving targets are detected; color detection is applied to the moving targets; if a flame candidate region exists, the first-level alarm is raised and the flicker feature of each candidate region is analyzed; if the flicker condition is satisfied, the flame is recognized and the second-level alarm is raised.
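To make the flow chart concrete, a minimal driver that wires the three stages together in the order of Fig. 7 is sketched below. It reuses the helper sketches from the previous sections (selective_update, absorb_permanent_motion, fire_color_mask, FlickerCounter, flame_regions), uses the grayscale image as an approximation of the Y component, and prints placeholder messages for the two alarm levels; all of this is an assumed illustration rather than the authors' implementation.

```python
import cv2
import numpy as np

def run_fire_detection(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    background = gray.copy()                      # initialize the background with the first frame
    counter = np.zeros(gray.shape, dtype=np.int32)
    flicker = FlickerCounter(gray.shape)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        flicker.update(gray)                      # keep the per-pixel flicker counter running

        # step 1: motion detection with the improved selective background update model
        motion, background = selective_update(gray, background)
        counter, background = absorb_permanent_motion(motion, counter, background, gray)
        if not motion.any():
            continue

        # step 2: color detection inside the moving regions
        candidate = (motion == 1) & fire_color_mask(frame)
        if not candidate.any():
            continue
        print("first-level alarm: flame candidate region found")

        # step 3: flicker analysis of each candidate region
        if flame_regions(candidate, flicker.flicker_mask()).any():
            print("second-level alarm: flame recognized")

    cap.release()
```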
Fig. 8: The test video database. (a) The flashing lights; (b) the fluttering red flag; (c) the yellow-red lights; (d) the firewood lit under the hot sun; (e) the torch in the courtyard; (f) the fire close to a road at night; (g) the weeds flame; (h) the burning newspaper on stairs; (i) the burning gasoline in a warehouse.
Table 1: The result of the tests on the videos

video    N      n      f+    f−    Rd (%)
a        592    0      29    0     95.78
b        472    0      0     0     100
c        708    692    0     8     98.87
d        926    786    0     5     99.46
e        578    462    31    0     94.64
f        774    734    3     0     99.61
g        1215   987    0     7     99.42
h        802    589    0     3     99.63
sum      7191   4250   91    23    98.41
The experiment is performed on a computer with a 1.9 GHz CPU and 1 GB of RAM, using VC++ 6.0. The method processes images of size 320 × 240 at 24 frames per second. The results are shown in Table 1, where N is the total number of frames in the video segment; n is the number of frames that contain fire flames; f+ is the number of non-flame frames that are detected as flame frames, while f− is the number of flame frames that are detected as non-flame frames; Rd is the detection accuracy, where Rd = (N − f+ − f−)/N.
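As a worked illustration of the accuracy formula (using the row for video c in Table 1):

Rd = (N − f+ − f−)/N = (708 − 0 − 8)/708 = 700/708 ≈ 0.9887 = 98.87%,

which matches the value reported for video c.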
6 Conclusion
A vision based fire detection method offers better detection capability than traditional fire detection sensors. The proposed vision based fire detection method, which combines motion features, color features and the flicker feature, provides enhanced detection capabilities compared to the existing video based solutions. The experiments verify that the method achieves a high accuracy rate, robustness and high efficiency; its computational efficiency makes it suitable for real-time applications.
References

[1] H. Xu, Y. Qin, Y. Pan et al., A flame detection method based on the amount of movement of the flame edge, 2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP), IEEE, 2013, 253-256
[2] S. Bayoumi, E. AlSobky, M. Almohsin et al., A real-time fire detection and notification system based on computer vision, 2013 International Conference on IT Convergence and Security (ICITCS), IEEE, 2013, 1-4
[3] Bo-Ho Cho, Jong-Wook Bae, Sung-Hwan Jung, Image processing-based fire detection system using statistic color model, International Conference on Advanced Language Processing and Web Information Technology, 2008, 245-250
[4] T. Celik, H. Demirel, H. Ozkaramanli, M. Uyguroglu, Fire detection using statistical color model in video sequences, Journal of Visual Communication and Image Representation, 18(2), 2007, 176-185
[5] Wen-Bing Horng, Jim-Wen Peng, Chih-Yuan Chen, A new image-based real-time flame detection method using color analysis, Proceedings of the 2005 IEEE International Conference on Networking, Sensing and Control, 2005, 100-105
[6] Shilin Shen, Chunyu Yu, Feiniu Yuan et al., Renovated method for identifying fire plume based on image correlation, Journal of Safety and Environment, 7(6), 2007, 96-99
[7] B. U. Toreyin, Y. Dedeoglu, U. Gudukbay et al., Computer vision based method for real-time fire and flame detection, Pattern Recognition Letters, 27(1), 2006, 49-58
[8] B. U. Toreyin, Y. Dedeoglu, A. E. Cetin, Flame detection in video using hidden Markov models, Proc. 2005 International Conference on Image Processing (ICIP 2005), Genoa, Italy, 2005, 2457-2460
[9] D. Meyer, J. Denzler, H. Niemann, Model based extraction of articulated objects in image sequences for gait analysis, Proc. IEEE International Conference on Image Processing, Santa Barbara, California, 1997, 78-81
[10] A. Lipton, H. Fujiyoshi, R. Patil, Moving target classification and tracking from real-time video, Proc. IEEE Workshop on Applications of Computer Vision, Princeton, NJ, 1998, 8-14
[11] C. Ning, D. Fei, Flame object segmentation by an improved frame difference method, 2012 Third International Conference on Digital Manufacturing and Automation (ICDMA), IEEE, 2012, 422-425
[12] I. Haritaoglu, D. Harwood, L. Davis, Real-time surveillance of people and their activities, IEEE Trans. Pattern Analysis and Machine Intelligence, 22(8), 2000, 809-830
[13] S. S. Mohamed, N. M. Tahir, R. Adnan, Background modeling and background subtraction performance for object detection, 2010 6th International Colloquium on Signal Processing and Its Applications (CSPA), IEEE, 2010, 1-6
[14] N. Gheissari, A. Bab-Hadiashar, A comparative study of model selection criteria for computer vision applications, Image and Vision Computing, 26(12), 2008, 1636-1649
[15] J. Bang, D. Kim, H. Eom, Motion object and regional detection method using block-based background difference video frames, 2012 IEEE 18th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), IEEE, 2012, 350-357
[16] B. H. Cho, J. W. Bae, S. H. Jung, Image processing-based fire detection system using statistic color model, International Conference on Advanced Language Processing and Web Information Technology, New York, IEEE, 2008, 245-250
[17] C. Stauffer, W. E. L. Grimson, Learning patterns of activity using real-time tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 2000, 747-757
[18] Turgay Celik, Hasan Demirel, Fire detection in video sequences using a generic color model, Fire Safety Journal, 44, 2009, 147-158
[19] J. H. Zhang, J. Zhuang, H. F. Du, A new flame detection method using probability model, International Conference on Computational Intelligence and Security, Guangzhou, China, November 2006, 1614-1617
[20] Feiniu Yuan, Guangxuan Liao, Yong-ming Zhang, Yong Liu, Chunyu Yu, Jinjun Wang, Binghai Liu, Feature extraction for computer vision based fire detection, Journal of University of Science and Technology of China, 36(1), 2006, 39-43
[21] L. Xu, Y. Yan, An improved algorithm for the measurement of flame flicker frequency, Instrumentation and Measurement Technology Conference, IEEE, 2004, 2278-2281
[22] Information on http://signal.ee.bilkent.edu.tr/VisiFire/Demo/