An image representation method based on the similarity of feature points
HE Jing1,2, LIU Renyi1,2, ZHANG Feng1,2, DU Zhenhong1,2, CHEN Yongpei1,2
1. Zhejiang Provincial Key Lab of GIS, Zhejiang University, Hangzhou 310028, China;
2. Department of Geographic Information Science, Zhejiang University, Hangzhou 310027, China
Abstract: To overcome the lack of invariance to translation, scaling and rotation of visual objects in the Spatial Pyramid Matching (SPM) approach, this paper proposes an image representation method based on the similarity of feature point groups. First, the coarse bag-of-words matching results are filtered using topological, directional and distance similarity measures. Then, the division of the image sub-regions is adjusted according to the center and rotation angle of the standard deviation ellipse of the feature points. Finally, a rotation-, translation- and scale-invariant representation of the image is obtained. Experiments applying the proposed method to a campus building dataset and an object image dataset show that it significantly improves classification accuracy and recall, especially for datasets containing images with pronounced rotation, translation and scaling transforms.
HE Jing, LIU Renyi, ZHANG Feng, DU Zhenhong, CHEN Yongpei. An image representation method based on the similarity of feature points[J]. Journal of Zhejiang University (Science Edition), 2017, 44(5): 599-605.
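As an illustration of the sub-region adjustment described in the abstract, the following Python sketch computes the standard deviation ellipse center and rotation angle of a feature point group and re-expresses the points in the ellipse-aligned frame before pyramid binning. This is a minimal sketch under assumed conventions; the function names, the random stand-in coordinates and the grid sizes mentioned in the comments are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def std_dev_ellipse(points):
    """Center and rotation angle of the standard deviation ellipse
    of a 2-D point set with shape (n, 2)."""
    center = points.mean(axis=0)
    dx, dy = (points - center).T
    # Orientation of the major axis, derived from the covariance terms.
    theta = 0.5 * np.arctan2(2.0 * np.sum(dx * dy),
                             np.sum(dx ** 2) - np.sum(dy ** 2))
    return center, theta

def align_points(points, center, theta):
    """Translate points to the ellipse center and rotate by -theta,
    so the sub-region division follows the point group rather than
    the original image frame."""
    rot = np.array([[np.cos(-theta), -np.sin(-theta)],
                    [np.sin(-theta),  np.cos(-theta)]])
    return (points - center) @ rot.T

# Hypothetical usage with matched feature point coordinates:
pts = np.random.rand(50, 2) * 100           # stand-in for feature point locations
center, theta = std_dev_ellipse(pts)
aligned = align_points(pts, center, theta)   # then assign to 2x2 / 4x4 sub-regions
```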