J4  2014, Vol. 48 Issue (3): 423-429    DOI: 10.3785/j.issn.1008-973X.2014.03.007
    
Video stabilization based on empirical mode decomposition and several evaluation criterions
YU Jun1,2, WANG Zeng-fu1,2,3
1. National Laboratory of Speech and Language Information Processing, University of Science and Technology of China, Hefei 230027, China; 2. Department of Automation, University of Science and Technology of China, Hefei 230027, China; 3. Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China

Abstract  

To address the problem of video jitter in video shooting, a real-time and robust video stabilization system was proposed. It has the following characteristics: 1) feature points are matched between the reference frame and the current frame using SIFT feature extraction, and global motion parameters are obtained with the random sample consensus (RANSAC) algorithm; 2) motion compensation is applied to the current frame using jitter parameters obtained by empirical mode decomposition together with several evaluation criteria (the global motion parameters and the location errors of corresponding feature points); 3) a stable and complete video is obtained after each frame is repaired by texture synthesis. A comparison between the jittered video and the stabilized video shows that the system increases the average peak signal-to-noise ratio by about 7.2 dB while running in real time, and thus improves the recognizability of the video content and viewing comfort.
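As a minimal sketch of step 1 (not the authors' implementation), the global motion between the reference frame and the current frame can be estimated by matching SIFT features and fitting a similarity transform with RANSAC; the OpenCV functions and the 0.75 ratio-test threshold below are illustrative assumptions.

import cv2
import numpy as np

def estimate_global_motion(ref_gray, cur_gray):
    # Detect SIFT keypoints and descriptors in both frames.
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_cur, des_cur = sift.detectAndCompute(cur_gray, None)

    # Match descriptors and keep only matches passing Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_cur, des_ref, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp_cur[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects mismatches and moving foreground points while fitting
    # a 2x3 similarity transform (the global motion parameters).
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    return M, inliers

The location errors of the inlier feature-point pairs under M can then serve as one of the evaluation criteria mentioned above.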



Published: 10 June 2018
CLC:  TP 18  
Cite this article:

YU Jun, WANG Zeng-fu. Video stabilization based on empirical mode decomposition and several evaluation criterions. J4, 2014, 48(3): 423-429.

URL:

http://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2014.03.007     OR     http://www.zjujournals.com/eng/Y2014/V48/I3/423


Electronic image stabilization based on empirical mode decomposition and multiple evaluation criteria

To address the video jitter that easily arises during shooting, a real-time and robust video stabilization system is proposed. The system has the following characteristics: 1) SIFT feature points are extracted from the reference frame and the current frame and matched, and the global motion parameters are obtained with the random sample consensus (RANSAC) algorithm; 2) the jitter parameters are determined from empirical mode decomposition together with multiple evaluation criteria (the global motion parameters and the position errors of corresponding feature-point pairs), and motion compensation is applied to the current frame accordingly; 3) an image texture synthesis algorithm is used to repair the motion-compensated frames, yielding a stable and complete output video. A comparison between the jittered video and the stabilized video shows that the system raises the average peak signal-to-noise ratio by about 7.2 dB while remaining real-time, greatly improving the recognizability of the video content and viewing comfort.
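As a rough sketch of how steps 2 and 3 could be realized (again an assumption, not the authors' code), the per-frame motion trajectory can be decomposed with EMD so that the slowest components give the intentional camera path and the remainder gives the jitter to compensate; the PyEMD package and the choice of keeping only the last components are illustrative.

import numpy as np
from PyEMD import EMD  # pip install EMD-signal; stand-in for the paper's EMD step

def split_jitter(trajectory, keep_low=1):
    # trajectory: accumulated translation (or rotation) per frame, one value each.
    # Decompose into intrinsic mode functions; the last rows are the slowest
    # components, taken here as the intentional camera motion.
    traj = np.asarray(trajectory, dtype=float)
    imfs = EMD().emd(traj)
    smooth = imfs[-keep_low:].sum(axis=0)
    jitter = traj - smooth
    return smooth, jitter

Each frame would then be warped by the negative of its jitter (e.g. with cv2.warpAffine), with the uncovered borders filled by texture synthesis; comparing consecutive frames with cv2.PSNR before and after compensation gives the kind of average PSNR gain reported above.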

