J4  2012, Vol. 46 Issue (4): 698-704    DOI: 10.3785/j.issn.1008-973X.2012.04.018
    
Distinguishable method for video fire detection
XIE Di1, TONG Ruo-feng1, TANG Min1, FENG Yang2
1. College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China;
2. College of Computer Science and Technology, Zhejiang Police College, Hangzhou 310053, China

Abstract  

A novel method for video fire detection based on an artificial neural network was proposed in order to estimate fire regions and predict the occurrence of fire in video surveillance systems. On the basis of motion and three-dimensional color features, temporal-spatial features, including the flicker frequency and geometry of flames, were analyzed by Fourier transform, roundness-degree analysis and corner detection. The resulting features formed a probability vector that was fed into an artificial neural network classification model, whose output gives the probability of fire. While maintaining detection accuracy, two problems of the artificial neural network, the tendency to fall into local extrema and slow convergence, were solved by selecting the optimal parameter combination experimentally. The method can distinguish flickering vehicle lights from actual fire in large spaces such as tunnels, warehouses and museums, preventing such lights from being misjudged as flames and eliminating the influence of environmental illumination on the detection results, which significantly reduces the false positive rate. Experimental results show that the method achieves a 96% detection rate while maintaining real-time performance.
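The flicker-frequency analysis mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 5-12 Hz flicker band, the helper name `flicker_band_energy`, and the synthetic signal are all assumptions chosen for demonstration.

```python
import numpy as np

def flicker_band_energy(brightness, fps, band=(5.0, 12.0)):
    """Fraction of spectral energy of a region's mean-brightness signal
    that falls in an assumed flame-flicker band. Flame regions flicker
    irregularly at roughly 5-12 Hz (illustrative band), whereas steady
    or slowly blinking light sources concentrate energy elsewhere."""
    signal = np.asarray(brightness, dtype=float)
    signal = signal - signal.mean()              # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2  # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    total = spectrum[1:].sum()                   # ignore the DC bin
    if total == 0:
        return 0.0
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return in_band / total

# Synthetic example: an 8 Hz flicker sampled at 30 frames/s over 4 s
fps = 30
t = np.arange(120) / fps
flame_like = 100 + 10 * np.sin(2 * np.pi * 8 * t)
print(flicker_band_energy(flame_like, fps))  # close to 1.0
```

A steady-brightness region returns 0, so thresholding this ratio is one plausible way to separate flame candidates from constant illumination before classification.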
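The roundness-degree analysis of region geometry is commonly computed as the circularity 4&#960;A/P&#178; of a candidate contour; a sketch under that assumption (the function name and the example shapes are hypothetical):

```python
import math

def circularity(area, perimeter):
    """Roundness degree 4*pi*A / P^2 of a closed contour: 1.0 for a
    perfect circle, approaching 0 for ragged or elongated shapes such
    as flame boundaries."""
    if perimeter == 0:
        return 0.0
    return 4.0 * math.pi * area / (perimeter ** 2)

# A circle of radius r (area pi*r^2, perimeter 2*pi*r) scores 1.0
r = 5.0
print(circularity(math.pi * r ** 2, 2 * math.pi * r))  # ~1.0
# A thin 1x20 rectangle (area 20, perimeter 42) scores far lower
print(circularity(20.0, 42.0))
```

In practice the area and perimeter would come from an extracted contour (e.g. via border following), and a low, fluctuating roundness over consecutive frames is the cue that suggests a flame rather than a rigid bright object.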
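The final classification step, feeding the feature probability vector to a neural network that outputs a fire probability, can be sketched as a forward pass of a small one-hidden-layer network. The layer sizes, weights, and feature ordering below are hypothetical placeholders; the paper trains the network and tunes the parameter combination on labelled video data.

```python
import numpy as np

def fire_probability(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network mapping a feature
    probability vector (e.g. flicker energy, roundness, color and
    motion scores) to a single fire probability in [0, 1]."""
    x = np.asarray(features, dtype=float)
    h = np.tanh(w_hidden @ x + b_hidden)   # hidden layer activation
    z = w_out @ h + b_out                  # output logit
    return 1.0 / (1.0 + np.exp(-z))        # sigmoid -> probability

# Hypothetical 4-feature input with random (untrained) weights,
# shown only to demonstrate the shape of the computation
rng = np.random.default_rng(0)
w_h, b_h = rng.normal(size=(6, 4)), np.zeros(6)
w_o, b_o = rng.normal(size=6), 0.0
p = fire_probability([0.9, 0.2, 0.8, 0.7], w_h, b_h, w_o, b_o)
print(0.0 <= p <= 1.0)  # True: the output is a valid probability
```

Thresholding this probability (or raising an alarm when it stays high over several frames) would yield the final fire/no-fire decision.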



Published: 17 May 2012
CLC:  TP 391.41  
Cite this article:

XIE Di, TONG Ruo-feng, TANG Min, FENG Yang. Distinguishable method for video fire detection. J4, 2012, 46(4): 698-704.

URL:

http://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2012.04.018     OR     http://www.zjujournals.com/eng/Y2012/V46/I4/698



