Journal of Zhejiang University (Science Edition)  2019, Vol. 46 Issue (3): 270-278    DOI: 10.3785/j.issn.1008-9497.2019.03.002
Cultural Computation
Synthesis of embroidery based on convolutional neural network
Rui ZHENG1, Wenhua QIAN1,2, Dan XU1, Yuanyuan PU1
1.Department of Computer Science and Engineering, Yunnan University, Kunming 650504, China
2.School of Automation, Southeast University, Nanjing 210096, China

Abstract  To remedy the deficiency of existing digital embroidery algorithms in reproducing the stitch direction and stereoscopic quality of real embroidery, this paper proposes a system based on deep learning and convolutional neural networks to synthesize the embroidery style. First, we take a content image and an embroidery style image as input, perform image semantic segmentation based on a conditional random field to separate the foreground from the background of both images, and construct masks by image binarization. Then, we convert the color space of both input images from RGB to YIQ, extract embroidery features with VGG19, and, guided by the masks, transfer the embroidery style onto the foreground of the content image while emphasizing the vivid colors and stereoscopic textures of embroidery. Experimental results show that the proposed method renders images in embroidery style effectively, laying a foundation for the digital inheritance of traditional embroidery.
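As a rough illustration of the color-space and mask steps mentioned in the abstract, the following sketch (not the authors' code; NumPy, the standard NTSC transform, and the 0.5 threshold are assumptions) converts an RGB image to YIQ and binarizes a foreground probability map into a mask:

```python
# Minimal sketch of two preprocessing steps described in the abstract:
# RGB -> YIQ conversion and binarization of a foreground map into a mask.
# The foreground probability map is assumed to come from a CRF-based
# semantic segmentation network, which is not reproduced here.
import numpy as np

# Standard NTSC RGB -> YIQ transform matrix.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3) -> YIQ image, same shape."""
    return rgb @ RGB2YIQ.T

def yiq_to_rgb(yiq):
    """Inverse transform, applied after processing in the YIQ space."""
    return np.clip(yiq @ np.linalg.inv(RGB2YIQ).T, 0.0, 1.0)

def binarize_mask(fg_prob, threshold=0.5):
    """Turn a foreground probability map (H, W) into a {0, 1} mask."""
    return (fg_prob >= threshold).astype(np.float32)
```

A common use of such a YIQ conversion in style transfer is to stylize only the luminance (Y) channel and recombine it with the original chrominance (I, Q) channels; whether the paper does exactly this is not stated in the abstract.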

Key words: embroidery; convolutional neural network; image semantic segmentation; mask; style transfer
Received: 06 January 2019      Published: 25 May 2019
CLC:  TP391.7  
Cite this article:

Rui ZHENG, Wenhua QIAN, Dan XU, Yuanyuan PU. Synthesis of embroidery based on convolutional neural network. Journal of Zhejiang University (Science Edition), 2019, 46(3): 270-278.

URL:

https://www.zjujournals.com/sci/EN/Y2019/V46/I3/270


Digital synthesis of embroidery style based on convolutional neural networks

To address the weak stereoscopic effect and missing stitch direction of existing digital embroidery-simulation methods, an algorithm based on deep learning and convolutional neural networks is proposed to transfer the artistic style of embroidery onto a target image. An image semantic segmentation network and a style transfer network are used to perform, respectively, object extraction and style transfer on the target content image and the embroidery style image. First, the target content image and the embroidery style image are taken as input; image semantic segmentation based on a conditional random field separates the foreground from the background of both images, and the results are binarized to form mask images. Second, the color space of both images is converted from RGB to YIQ. Finally, guided by the mask images, the VGG19 network extracts the content features of the content image and the style and texture features of the embroidery image, and style transfer is carried out within the target region, thereby simulating the art of embroidery digitally. The algorithm produces result images with the artistic effect of embroidery, reproduces the stitch direction of real embroidery more faithfully, and highlights the stereoscopic quality of the stitches. By combining semantic segmentation with style transfer, the method effectively simulates embroidery-style images with vivid colors and a strong stereoscopic effect; it complements non-photorealistic rendering and lays a foundation for the digital preservation of embroidery and the inheritance of intangible cultural heritage.
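To make the mask-guided style-transfer step more concrete, here is a minimal sketch (assumptions: PyTorch with torchvision's pretrained VGG19 and an illustrative choice of style layers; this is not the authors' implementation) of a Gram-matrix style loss restricted to the foreground by binary masks:

```python
# Sketch of a mask-restricted Gram-matrix style loss, in the spirit of the
# foreground-only style transfer described above. Layer indices and the
# normalization are illustrative choices, not taken from the paper.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

features = vgg19(pretrained=True).features.eval()
STYLE_LAYERS = [1, 6, 11, 20, 29]  # relu1_1 ... relu5_1 (illustrative)

def layer_activations(img):
    """Collect activations of the chosen VGG19 layers for a (1, 3, H, W) image."""
    acts, x = [], img
    for i, layer in enumerate(features):
        x = layer(x)
        if i in STYLE_LAYERS:
            acts.append(x)
    return acts

def masked_gram(feat, mask):
    """Gram matrix of features weighted by a (1, 1, H, W) {0,1} mask."""
    b, c, h, w = feat.shape
    m = F.interpolate(mask, size=(h, w), mode='nearest')
    f = (feat * m).reshape(c, h * w)
    return f @ f.t() / (m.sum() * c + 1e-8)

def style_loss(stylized, style_img, mask_content, mask_style):
    """Sum of masked Gram-matrix differences over the chosen layers."""
    loss = 0.0
    for fs, ft in zip(layer_activations(stylized), layer_activations(style_img)):
        loss = loss + F.mse_loss(masked_gram(fs, mask_content),
                                 masked_gram(ft, mask_style))
    return loss
```

In an optimization-based pipeline, this loss would be combined with a content loss on VGG19 features of the content image and minimized with respect to the stylized image's pixels.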

Key words: embroidery; convolutional neural network; image semantic segmentation; mask; style transfer