Journal of Zhejiang University-SCIENCE A (Applied Physics & Engineering)  2007, Vol. 8 Issue (8): 1218-1226    DOI: 10.1631/jzus.2007.A1218
Information Science     
Stepwise approach for view synthesis
CHAI Deng-feng, PENG Qun-sheng
State Key Lab of CAD and CG, Zhejiang University, Hangzhou 310027, China; Institute of Spatial Information Technique, Zhejiang University, Hangzhou 310027, China

Abstract  This paper presents techniques for synthesizing a novel view at a virtual viewpoint from two views captured at different viewpoints, achieving both high quality and high efficiency. The whole process consists of three passes. The first pass recovers the depth map. We formulate depth recovery as a pixel labelling problem and propose a bisection approach to solve it: the labelling is accomplished in log2(n) steps (n is the number of depth levels), each involving a single graph cut computation. The second pass detects occluded pixels and reasons about their depths. It fits a foreground depth curve and a background depth curve to the depths of nearby foreground and background pixels, and then distinguishes foreground from background pixels by minimizing a global energy, which requires only one graph cut computation. The third pass finds, for each pixel in the novel view, the corresponding pixels in the input views and computes its color. The whole process involves only a small number of graph cut computations and is therefore efficient. Moreover, visual artifacts in the synthesized view are removed by correcting the depths of the occluded pixels. Experimental results demonstrate that the proposed techniques achieve both high quality and high efficiency.
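To make the first pass concrete, below is a minimal sketch of bisection-style depth labelling, assuming a rectified image pair in which depth levels correspond to integer disparities. The function names (cost_volume, bisection_labelling) and the SAD matching cost are illustrative assumptions, and the per-step binary decision is a simple per-pixel cost comparison standing in for the single binary graph cut the paper solves at each step; it is not the authors' implementation.

# Minimal sketch of the first pass (bisection depth labelling), assuming a
# rectified stereo pair where depth levels correspond to integer disparities.
# The per-step binary decision below is a plain per-pixel cost comparison;
# the paper instead makes each decision globally with one binary graph cut.
import numpy as np

def cost_volume(left, right, n_levels):
    """Per-pixel SAD matching cost for every candidate disparity."""
    left_f = left.astype(np.float32)
    vol = np.empty((n_levels,) + left.shape[:2], dtype=np.float32)
    for d in range(n_levels):
        shifted = np.roll(right, d, axis=1).astype(np.float32)
        vol[d] = np.abs(left_f - shifted).sum(axis=2)
    return vol

def bisection_labelling(vol):
    """Assign every pixel a depth label in about log2(n) binary steps.
    Each pixel keeps a candidate range [low, high] that is halved per step."""
    n, h, w = vol.shape
    low = np.zeros((h, w), dtype=np.int64)
    high = np.full((h, w), n - 1, dtype=np.int64)
    rows, cols = np.indices((h, w))
    while np.any(low < high):
        mid = (low + high) // 2
        # Crude stand-in for the binary graph cut: keep the half whose
        # representative disparity has the lower matching cost.
        keep_lower = vol[mid, rows, cols] <= vol[np.minimum(mid + 1, high), rows, cols]
        high = np.where(keep_lower, mid, high)
        low = np.where(keep_lower, low, mid + 1)
    return low  # recovered depth (disparity) label per pixel

# Example usage with random 8-bit color images:
# left = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
# right = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
# depth = bisection_labelling(cost_volume(left, right, n_levels=16))

With n depth levels, the loop converges after at most ceil(log2(n)) iterations, mirroring the paper's claim that the first pass needs only log2(n) graph cut computations rather than one per depth level.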

Key words: View synthesis; Occlusion; Graph cut
Received: 31 December 2006     
CLC:  TP391  
Cite this article:

CHAI Deng-feng, PENG Qun-sheng. Stepwise approach for view synthesis. Journal of Zhejiang University-SCIENCE A (Applied Physics & Engineering), 2007, 8(8): 1218-1226.

URL:

http://www.zjujournals.com/xueshu/zjus-a/10.1631/jzus.2007.A1218     OR     http://www.zjujournals.com/xueshu/zjus-a/Y2007/V8/I8/1218
