Vis Inf  2018, Vol. 2 Issue (3): 181-189    DOI: 10.1016/j.visinf.2018.09.004
Group optimization for multi-attribute visual embedding
Qiong Zenga, Wenzheng Chenb, Zhuo Hanc, Mingyi Shia, Yanir Kleimand, Daniel Cohen-Ore, Baoquan Chena, Yangyan Lia
aSchool of Computer Science & Technology, Shandong University, Qingdao, China; bDepartment of Computer Science, University of Toronto, Toronto, Canada; cViterbi School of Engineering, University of Southern California, CA, United States; dDNEG Visual Effects, London, United Kingdom; eSchool of Computer Science, Tel Aviv University, Tel Aviv, Israel
Abstract: Understanding semantic similarity among images is at the core of a wide range of computer graphics and computer vision applications. However, the visual context of images is often ambiguous, as images can be perceived with emphasis on different attributes. In this paper, we present a method for learning the semantic visual similarity among images, inferring their latent attributes and embedding them into multiple spaces, one per latent attribute. We cast the multi-embedding problem as an optimization that evaluates the embedded distances with respect to qualitative crowdsourced clusterings. The key idea of our approach is to collect and embed qualitative pairwise tuples that share the same attributes within clusters. To ensure that similarity attributes are shared across multiple measures, image clustering tasks are presented to, and solved by, users. The collected image clusters are then converted into groups of tuples, which are fed into our group optimization algorithm that jointly infers the attribute similarity and the multi-attribute embedding. Our multi-attribute embedding allows retrieving similar objects in different attribute spaces. Experimental results show that our approach outperforms state-of-the-art multi-embedding approaches on various datasets, and demonstrate the use of the multi-attribute embedding in an image retrieval application.
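The pipeline described in the abstract (collect user clusterings → convert clusters into tuples → optimize an embedding against those tuples) can be illustrated with a minimal, single-attribute sketch. All names (`clusters_to_tuples`, `embed`), the hinge triplet loss, and the hyperparameters below are illustrative assumptions for exposition, not the authors' actual group optimization algorithm, which jointly infers multiple attribute spaces:

```python
import numpy as np

def clusters_to_tuples(clusters):
    """Convert one user-provided clustering (lists of item ids) into
    (anchor, positive, negative) tuples: anchor and positive share a
    cluster, negative comes from a different cluster."""
    tuples = []
    for ci, cluster in enumerate(clusters):
        others = [x for cj, c in enumerate(clusters) if cj != ci for x in c]
        for a in cluster:
            for p in cluster:
                if a == p:
                    continue
                for n in others:
                    tuples.append((a, p, n))
    return tuples

def embed(tuples, n_items, dim=2, margin=1.0, lr=0.05, iters=500, seed=0):
    """Gradient descent on a hinge triplet loss,
    max(0, margin + ||x_a - x_p||^2 - ||x_a - x_n||^2),
    so that embedded distances respect the qualitative tuples."""
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n_items, dim))
    for _ in range(iters):
        grad = np.zeros_like(X)
        for a, p, n in tuples:
            d_ap = X[a] - X[p]
            d_an = X[a] - X[n]
            if margin + d_ap @ d_ap - d_an @ d_an > 0:  # hinge active
                grad[a] += 2 * (d_ap - d_an)  # pull toward p, push from n
                grad[p] -= 2 * d_ap           # pull p toward a
                grad[n] += 2 * d_an           # push n away from a
        X -= lr * grad
    return X

# Hypothetical usage: two user clusters over four images.
clusters = [[0, 1], [2, 3]]
X = embed(clusters_to_tuples(clusters), n_items=4)
```

In the paper's multi-attribute setting, each attribute would get its own embedding space, and tuples from different users' clusterings would be grouped by the latent attribute they reflect; this sketch collapses everything into one space to show only the cluster-to-tuple conversion and the distance-based objective.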
Keywords: Embedding; Semantic similarity; Visual retrieval
Published: 2018-11-05
Cite this article:

Qiong Zeng, Wenzheng Chen, Zhuo Han, Mingyi Shi, Yanir Kleiman, Daniel Cohen-Or, Baoquan Chen, Yangyan Li. Group optimization for multi-attribute visual embedding. Vis Inf, 2018, 2(3): 181-189.

Link to this article:

http://www.zjujournals.com/vi/CN/10.1016/j.visinf.2018.09.004        http://www.zjujournals.com/vi/CN/Y2018/V2/I3/181
