Journal of Zhejiang University (Engineering Science), 2022, 56(9): 1824-1832. doi: 10.3785/j.issn.1008-973X.2022.09.016

Computer and Control Engineering


UAV object tracking algorithm based on response and filter deviation-aware regularization

WANG Hai-jun, ZHANG Sheng-yan, DU Yu-jie

Key Laboratory of Aviation Information and Control in University of Shandong, Binzhou University, Binzhou 256603, China

Received: 2021-08-06

Fund supported: Natural Science Foundation of Shandong Province (ZR2020MF142, ZR2019PF021); Doctoral Research Start-up Fund of Binzhou University (2021Y04); Major Scientific Research Fund of Binzhou University (2019ZD03); Social Service Fund of Binzhou University (BZXYSFW201805)

About the authors

WANG Hai-jun (1980—), male, associate professor, working on object tracking algorithms. orcid.org/0000-0003-2481-9662. E-mail: whjlym@163.com


Abstract

A real-time unmanned aerial vehicle (UAV) object tracking algorithm based on response and filter deviation-aware regularization was proposed, aiming at the problem that targets in UAV sequences are subject to large appearance variations and frequent viewpoint changes. According to the consistency of the response and correlation filter differences between video frames, the variations of the correlation response and of the filter were modeled, and an objective function with this constraint scheme was constructed that learns the variation of the object appearance and of the filter. Meanwhile, an auxiliary variable based on the response and filter deviation-aware regularization was introduced to build an optimization function, and the alternating direction method of multipliers (ADMM) was used to solve for the correlation filter and the auxiliary variable. To validate the effectiveness of the proposed algorithm, comparison experiments with nine other algorithms were performed on three UAV tracking benchmarks (DTB70, UAV123@10 fps and UAVDT) in terms of precision and success rate. Experimental results show that the proposed algorithm is robust to occlusion, deformation and viewpoint variation and can effectively track the target at an average speed of 39.0 frames per second.

Keywords: unmanned aerial vehicle (UAV) ; correlation filter ; visual object tracking ; response deviation-aware regularization ; filter deviation


Citation format:

WANG Hai-jun, ZHANG Sheng-yan, DU Yu-jie. UAV object tracking algorithm based on response and filter deviation-aware regularization [J]. Journal of Zhejiang University (Engineering Science), 2022, 56(9): 1824-1832. doi: 10.3785/j.issn.1008-973X.2022.09.016

Object tracking is an important research topic in computer vision. With the widespread adoption of unmanned aerial vehicles (UAVs), tracking techniques have been widely deployed on UAV platforms [1-4]. Because UAVs fly at high altitude and their attitude and speed change frequently, ground targets occupy only a small area in the captured video, and conditions that degrade tracking performance, such as scale variation, occlusion and viewpoint change, occur frequently. Designing a robust, fast algorithm for tracking ground targets from a high-altitude UAV platform therefore remains a challenging task. Correlation filter based trackers [5-7] are fast and efficient, and have been widely used in industrial applications. Bolme et al. [8] introduced correlation filter theory into object tracking, reaching speeds of several hundred frames per second; since the algorithm extracts only grayscale features, its tracking quality is limited. Henriques et al. [9] proposed the kernelized correlation filter tracker and introduced HOG features to represent the target, improving tracking performance; however, because its negative samples are generated by cyclically shifting the positive sample, it suffers from boundary effects and performs poorly in complex scenes. To alleviate the boundary effect, Galoogahi et al. [10] generated negative samples by real shifts, trained the filter on real samples and enlarged the search region, which improved tracking, but the tracker easily loses the target under large appearance changes. Building on Galoogahi et al. [10], Li et al. [11] introduced temporal and spatial regularization, which copes well with partial occlusion and noticeably improves tracking performance, but the tracker tends to drift and lose the target under large deformation. To further improve correlation filter trackers, many algorithms extract deep features to represent the target [12-13]; although tracking quality improves, deep-feature correlation filter trackers are slow and ill-suited to the limited computing resources of UAV platforms.

This work proposes a real-time UAV object tracking algorithm based on response and filter deviation-aware regularization (RDAR). By modeling the response and filter deviations between consecutive frames as constraints, the algorithm effectively mitigates target drift caused by large inter-frame deformation and strengthens the discriminative power of the filter. The correlation filter and the response-deviation constraint variables are solved iteratively with the alternating direction method of multipliers (ADMM), which keeps the computational cost low. The proposed tracker is evaluated on several public UAV datasets against representative correlation filter trackers.

1. Background-aware correlation filter tracking

To eliminate the boundary effect caused by cyclic shifts in conventional correlation filter trackers, Galoogahi et al. [10] generate negative samples by real shifts and enlarge the search region; the objective function is

$E\left( {{\boldsymbol{f}}_t} \right) = \dfrac{1}{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{y}} - {\boldsymbol{x}}_t^d * \left( {{{\boldsymbol{P}}^{\text{T}}}{\boldsymbol{f}}_t^d} \right)} \right\|_2^2} +\frac{\lambda }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{f}}_t^d} \right\|_2^2} .$

where $ {\boldsymbol{f}}_t$ is the collection of the multi-channel filters $ {\boldsymbol{f}}_t^d $ at frame $t$, ${\boldsymbol{y}}$ is the ideal training label, $ {\boldsymbol{x}}_t^d $ and $ {\boldsymbol{f}}_t^d $ are the feature map and filter of channel $ d $ at frame $ t $, $ D $ is the number of channels, ${\boldsymbol{P}}$ is the cropping matrix, $ \lambda $ is the regularization coefficient, and $ * $ denotes time-domain convolution. For computational efficiency, Eq. (1) is transformed into the frequency domain:

$ \left. {\begin{array}{l}E\left({{\boldsymbol{f}}}_{t},{\widehat{{\boldsymbol{g}}}}_{t}\right)=\dfrac{1}{2}{\displaystyle \sum _{d=1}^{D}{\Vert \widehat{{\boldsymbol{y}}}-{\widehat{{\boldsymbol{g}}}}_{t}^{d}{\widehat{{\boldsymbol{x}}}}_{t}^{d}\Vert }_{2}^{2}+\dfrac{\lambda }{2}{\Vert {{\boldsymbol{f}}}_{t}^{d}\Vert }_{2}^{2}}\text{,}\\ \text{s}.\text{t}.\;{\widehat{{\boldsymbol{g}}}}_{t}^{d}=\sqrt{H}\left({\boldsymbol{F}}{{\boldsymbol{P}}}^{\text{T}}\otimes {\boldsymbol{I}}\right){{\boldsymbol{f}}}_{t}^{d}.\end{array}} \right\} $

where the matrix ${\boldsymbol{F}}$ maps an $H$-dimensional vector into the Fourier domain, $ \otimes $ is the Kronecker product, ${\boldsymbol{I}}$ is the identity matrix, and the superscript $^ \wedge $ denotes the discrete Fourier transform. When a new frame arrives, the response map $ {{\boldsymbol{R}}_{t+1}} $ is obtained by convolving the image patch $ {{\boldsymbol{x}}_{t+1}} $ cropped from frame $ t+1 $ with the filter learned at frame $ t $:

$ {{\boldsymbol{R}}_{t+1}} = \displaystyle \sum\limits_{d = 1}^D {{\boldsymbol{x}}_{t+1}^d * {\boldsymbol{f}}_t^d} . $

The target is located at the position of the maximum response value.
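As an illustration, the per-channel correlation of Eq. (3) can be sketched in NumPy via the FFT; the function name and feature shapes below are illustrative assumptions, not part of the original Matlab implementation:

```python
import numpy as np

def response_map(x, f):
    """Sketch of Eq. (3): correlate multi-channel features x with filters f.

    x, f: arrays of shape (H, W, D). Circular cross-correlation is computed
    per channel in the Fourier domain and summed over channels.
    """
    X = np.fft.fft2(x, axes=(0, 1))
    F = np.fft.fft2(f, axes=(0, 1))
    # cross-correlation = IFFT of X * conj(F), summed over the D channels
    R = np.real(np.fft.ifft2((X * np.conj(F)).sum(axis=2)))
    return R

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32, 4))
R = response_map(x, x)                 # correlating a patch with itself
peak = np.unravel_index(np.argmax(R), R.shape)
# the autocorrelation of a signal peaks at zero shift, i.e. position (0, 0)
```

In a tracker, `argmax` over `R` gives the displacement of the target between frames.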

Although the background-aware tracker largely removes the boundary effect of conventional correlation filters, it cannot model target appearance changes effectively in UAV tracking, so it easily loses the target under large appearance variation.

2. Algorithm description

2.1. Response deviation-aware regularization term

To overcome this drawback of the background-aware tracker, a response deviation-aware regularization term is introduced on top of Galoogahi et al. [10]; the response deviation is used to learn the appearance change of the target and achieve robust UAV-to-ground tracking. The inter-frame response deviation is defined as

$ {E_1} = \dfrac{\gamma }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {\Delta {\boldsymbol{R}}_t^d - \Delta {\boldsymbol{R}}_{t - 1}^d} \right\|_2^2} . $

where $ \gamma $ is the constraint parameter and $ \Delta {\boldsymbol{R}}_t^d $ is the difference between the response maps of frames $ t $ and $ t - 1 $ on channel $ d $, defined as

$ \Delta {\boldsymbol{R}}_t^d = M\left( {{\boldsymbol{R}}_t^d} \right) - M\left( {{\boldsymbol{R}}_{t - 1}^d} \right). $

Similarly, $ \Delta {\boldsymbol{R}}_{t - 1}^d = M\left( {{\boldsymbol{R}}_{t - 1}^d} \right) - M\left( {{\boldsymbol{R}}_{t - 2}^d} \right) $ is the response difference between frames $ t-1 $ and $ t-2 $ on channel $ d $. Eq. (4) can then be rewritten as

$ {E_1} = \dfrac{\gamma }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {M\left( {{\boldsymbol{R}}_t^d} \right) - 2M\left( {{\boldsymbol{R}}_{t - 1}^d} \right)+M\left( {{\boldsymbol{R}}_{t-2}^d} \right)} \right\|_2^2} . $

where $ M(\cdot) $ is the shift operator [14], which moves the maximum of a response map to the position of the maximum of the training label $ {\boldsymbol{y}} $.
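A minimal sketch of the shift operator $M(\cdot)$, assuming the label peak sits at index $(0, 0)$ (a common convention for circularly generated Gaussian labels; the function name is hypothetical):

```python
import numpy as np

def shift_to_label_peak(R, peak=(0, 0)):
    """Shift operator M(.): circularly shift response map R so that its
    maximum lands at `peak`, the location of the maximum of label y."""
    r, c = np.unravel_index(np.argmax(R), R.shape)
    return np.roll(R, shift=(peak[0] - r, peak[1] - c), axis=(0, 1))

R = np.zeros((8, 8))
R[5, 3] = 1.0
M_R = shift_to_label_peak(R)
# the maximum now sits at (0, 0), so consecutive-frame responses can be
# compared element-wise in Eq. (4) regardless of where each target was found
```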

2.2. Inter-frame filter deviation regularization term

Inspired by Ref. [11], and observing that the correlation filters computed from consecutive frames change little in most scenes, an inter-frame filter deviation term is introduced to impose a temporal constraint on the filter:

$ {E_2} = \dfrac{\alpha }{{\text{2}}}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{f}}_t^d - {\boldsymbol{f}}_{t- 1}^d} \right\|_2^2} . $

where $ \alpha $ is the constraint parameter, and $ {\boldsymbol{f}}_t^d $ and ${\boldsymbol{f}}_{t- 1}^d$ are the channel-$d$ filters at frames $ t $ and $t-1$.

2.3. Tracking with response deviation-aware and inter-frame filter deviation regularization

Introducing the response deviation-aware and inter-frame filter deviation terms, Eq. (1) becomes

$ \begin{split} E\left( {{{f}_t}} \right) =& \dfrac{1}{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{y}} - {\boldsymbol{x}}_t^d * \left( {{{\boldsymbol{P}}^{\text{T}}}{\boldsymbol{f}}_t^d} \right)} \right\|_2^2} +\dfrac{\lambda }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{f}}_t^d} \right\|_2^2} +\\ & \dfrac{\alpha }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{f}}_t^d - {\boldsymbol{f}}_{t - 1}^d} \right\|_2^2} +\frac{\gamma }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {\Delta {\boldsymbol{R}}_t^d - \Delta {\boldsymbol{R}}_{t - 1}^d} \right\|_2^2} . \end{split} $

Introducing the auxiliary variable $ {\boldsymbol{g}}_t^d = {{\boldsymbol{P}}^{\text{T}}}{\boldsymbol{f}}_t^d $, Eq. (8) becomes

$ \begin{split} E\left( {{{f}_t},{{\boldsymbol{g}}_t}} \right)=& \frac{1}{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{y}} - {\boldsymbol{x}}_t^d * {\boldsymbol{g}}_t^d} \right\|_2^2} +\frac{\lambda }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{f}}_t^d} \right\|_2^2} {\text+} \\ & \frac{\alpha }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{g}}_t^d - {\boldsymbol{g}}_{t - 1}^d} \right\|_2^2} +\frac{\gamma }{2}\displaystyle \sum\limits_{d = 1}^D {\left\| {\Delta {\boldsymbol{R}}_t^d - \Delta {\boldsymbol{R}}_{t - 1}^d} \right\|_2^2} . \end{split} $

Substituting Eq. (6) into Eq. (9) and applying Parseval's theorem to transform the objective into the frequency domain gives

$ \left.\begin{split} E\left( {{{\boldsymbol{f}}_t},{{{\boldsymbol{\hat g}}}_t}} \right) =& \dfrac{1}{2}\sum\limits_{d = 1}^D {\left( {\frac{1}{H}\left\| {{\boldsymbol{\hat y}} - {\boldsymbol{\hat x}}_t^d \odot {\boldsymbol{\hat g}}_t^d} \right\|_2^2+\lambda \left\| {{\boldsymbol{f}}_t^d} \right\|_2^2} \right.} {\text+} \\ & \alpha \left\| {{\boldsymbol{\hat g}}_t^d - {\boldsymbol{\hat g}}_{t - 1}^d} \right\|_{\text{2}}^{\text{2}}{\text+}\dfrac{\gamma }{H} \left\| {{\boldsymbol{\hat x}}_t^d \odot {\boldsymbol{\hat g}}_t^d - }\right.\\ &\left.2\widehat {M\left( {{\boldsymbol{R}}_{t - 1}^d} \right)}+{\left.\widehat {M\left( {{\boldsymbol{R}}_{t-2}^d} \right)} \right\|_2^2} \right),\\ {\text{s}}.{\text{t}}.{\text{ }}{\boldsymbol{\hat g}}_t^d =& \sqrt H {\boldsymbol{F}}{{\boldsymbol{P}}^{\text{T}}}{\boldsymbol{f}}_t^d. \end{split} \right\}$

Using the augmented Lagrangian method, Eq. (10) is rewritten as

$ \begin{split} &L\left( {{{f}_t},{{{\boldsymbol{\hat g}}}_t},{\boldsymbol{\hat \zeta }}} \right)= E\left( {{{f}_t},{{{\boldsymbol{\hat g}}}_t}} \right)+\frac{\mu }{2}\sum\limits_{d = 1}^D {\left\| {{\boldsymbol{\hat g}}_t^d - \sqrt H {\boldsymbol{F}}{{\boldsymbol{P}}^{\text{T}}}{\boldsymbol{f}}_t^d+\frac{{{\boldsymbol{\hat \zeta }}_t^d}}{\mu }} \right\|} _2^2. \end{split} $

where $ \mu $ is the penalty factor and $ {\boldsymbol{\hat \zeta }} $ is the Lagrange multiplier. The variables $ {\boldsymbol{f}}_t^* $ and $ {\boldsymbol{g}}_t^* $ are obtained with the alternating direction method of multipliers (ADMM), which splits the problem into two subproblems.

Subproblem 1: solve for $ {\boldsymbol{f}}_t^* $.

$ \begin{split} {\boldsymbol{f}}_t^{d*} = &\mathop {\arg \min }\limits_{{{\boldsymbol{f}}_t}} \frac{1}{2}\sum\limits_{d = 1}^D {\left( {\lambda \left\| {{\boldsymbol{f}}_t^d} \right\|_2^2 + \mu \left\| {\hat {\boldsymbol{g}}_t^d - \sqrt {H} {\boldsymbol{F}}{{\boldsymbol{P}}^{\rm{T}}}{\boldsymbol{f}}_t^d + \frac{{\hat {{\boldsymbol{\zeta}} }_t^d}}{\mu }} \right\|_2^2} \right)} \\ &{\rm{ = }}\frac{{H\left( {\mu {\boldsymbol{g}}_t^d +{\boldsymbol{ \zeta}} _t^d} \right)}}{{\lambda + \mu H}}. \end{split}$

$ \left. {\begin{split}&{{\boldsymbol{g}}}_{t}^{d}=\dfrac{1}{\sqrt{H}}{\boldsymbol{P}}{{\boldsymbol{F}}}^{\text{T}}{{\hat {\boldsymbol{g}}}}_{t}^{d}\text{,}\\& {\boldsymbol{\zeta }} _{t}^{d}=\dfrac{1}{\sqrt{H}}{\boldsymbol{P}}{{\boldsymbol{F}}}^{\text{T}}{\boldsymbol{\hat \zeta }} _{t}^{d}.\end{split}} \right\} $
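The closed-form update of subproblem 1 (Eqs. (12)-(13)) might be sketched as follows; the function name, the binary mask standing in for the crop $\boldsymbol{P}$, and the DFT scaling conventions are implementation assumptions, not the authors' code:

```python
import numpy as np

def update_f(g_hat, zeta_hat, P_mask, lam=0.01, mu=1.0):
    """Closed-form solution of subproblem 1 (Eqs. (12)-(13)) — a sketch.

    g_hat, zeta_hat: frequency-domain auxiliary variable and multiplier,
    shape (H, W, D).  P_mask: binary crop mask playing the role of P.
    """
    H = g_hat.shape[0] * g_hat.shape[1]          # number of samples
    # back to the time domain (Eq. (13)); scaling constants are folded
    # into the inverse FFT here, an assumption about the DFT convention
    g = np.real(np.fft.ifft2(g_hat, axes=(0, 1)))
    zeta = np.real(np.fft.ifft2(zeta_hat, axes=(0, 1)))
    # closed-form solution of Eq. (12)
    f = H * (mu * g + zeta) / (lam + mu * H)
    # the cropping operator keeps only the target-sized support
    return f * P_mask[..., None]
```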

Subproblem 2: solve for $ {\boldsymbol{g}}_t^* $.

$ \begin{split} {\boldsymbol{\hat g}}_t^* = & \mathop {\arg \min }\limits_{{{{\hat {\boldsymbol{g}}}}_t}} \left( {\dfrac{1}{2}\left( {\dfrac{1}{H}\left\| {{\boldsymbol{\hat y}} - {\boldsymbol{\hat x}}_t^d \odot {\boldsymbol{\hat g}}_t^d} \right\|_2^2} \right.} \right.{\text+} \alpha \left\| {{\boldsymbol{\hat g}}_t^d - {\boldsymbol{\hat g}}_{t - 1}^d} \right\|_2^2{\text+} \\ & \dfrac{\gamma }{H}\left\| {{\boldsymbol{\hat x}}_t^d \odot {\boldsymbol{\hat g}}_t^d - 2\widehat {M\left( {{\boldsymbol{R}}_{t - 1}^d} \right)}+\widehat {M\left( {{\boldsymbol{R}}_{t - 2}^d} \right)}} \right\|_2^2{\text+} \\ & \left. {\left. {\mu \displaystyle \sum\limits_{d = 1}^D {\left\| {{\boldsymbol{\hat g}}_t^d - \sqrt H {\boldsymbol{F}}{{\boldsymbol{P}}^{\text{T}}}{\boldsymbol{f}}_t^d+\frac{{{\boldsymbol{\hat \zeta }}_t^d}}{\mu }} \right\|} _2^2} \right)} \right). \end{split} $

Since $ {\boldsymbol{\hat y}}(n) $ depends only on ${\boldsymbol{\hat x}}_t\left( n \right) = \left[ {{\boldsymbol{\hat x}}_t^1\left( n \right),\;{\boldsymbol{\hat x}}_t^2\left( n \right),\; \cdots ,\;{\boldsymbol{\hat x}}_t^D\left( n \right)} \right]^{\text{T}}$ and ${{\boldsymbol{\hat g}}_t}\left( n \right) = \left[ {{\rm{conj}}\left( {{\boldsymbol{\hat g}}_t^1\left( n \right)} \right),{\rm{conj}}\left( {{\boldsymbol{\hat g}}_t^2\left( n \right)} \right), \cdots ,{\rm{conj}}\left( {{\boldsymbol{\hat g}}_t^D\left( n \right)} \right)} \right]^{\text{T}}$, where $ {\rm{conj}}(\cdot) $ denotes the complex conjugate, Eq. (14) can be decomposed into $ H $ subproblems:

$ \begin{split} {\boldsymbol{\hat g}}_t^*\left( n \right) =& \mathop {\arg \min }\limits_{{{{\hat {\boldsymbol{g}}}}_t}} \left( {\dfrac{1}{2}} \right.\left( {\frac{1}{H}\left\| {{\boldsymbol{\hat y}}\left( n \right) - {\boldsymbol{\hat x}}_t^{\text{T}}\left( n \right) \odot {{{\boldsymbol{\hat g}}}_t}\left( n \right)} \right\|_2^2} \right.{\text+} \\ & \dfrac{\gamma }{H} \left\| {{\boldsymbol{\hat x}}_t^T\left( n \right) \odot {{{\boldsymbol{\hat g}}}_t}\left( n \right) - 2\widehat {M\left( {{\boldsymbol{R}}_{t - 1}^{}} \right)\left( n \right)}} \right. {\text+} \\ & \left. {\widehat {M \left( {{\boldsymbol{R}} _{t - 2}^{}} \right)\left( n \right)}} \right\|_2^2 {\text+} \alpha \left\| {{{{\boldsymbol{\hat g}}}_t}\left( n \right) - {{{\boldsymbol{\hat g}}}_{t - 1}}\left( n \right)} \right\|_2^2 + \\&\mu\left. {\left. {\left\| {{{{\boldsymbol{\hat g}}}_t}\left( n \right) - \sqrt H {\boldsymbol{F}}{{\boldsymbol{P}}^{\rm{T}}}{{{f}}_t}\left( n \right) + \frac{{{{{\boldsymbol{\hat \zeta }}}_t}\left( n \right)}}{\mu }} \right\|_2^2} \right)} \right) = \\ & c {\left( {{{{\boldsymbol{\hat x}}}_t}\left( n \right){\boldsymbol{\hat x}}_t^{\rm{T}}\left( n \right) + \left( {\mu + \alpha } \right)cH{\boldsymbol{I}}} \right)^{ - 1}}\times \left( {{{{\boldsymbol{\hat x}}}_t}\left( n \right){\boldsymbol{\hat y}}\left( n \right)} \right. {\text+} \\ & \alpha H{{{\boldsymbol{\hat g}}}_{t - 1}}\left( n \right)+2\gamma {{\boldsymbol{\hat x}}_t}\left( n \right)\widehat {M\left( {{\boldsymbol{R}}_{t - 1}^{}} \right)\left( n \right)} -\\ & \gamma {{\boldsymbol{\hat x}}_t}\left( n \right)\widehat {M\left( {{\boldsymbol{R}}_{t - 2}^{}} \right)\left( n \right)} - H{\boldsymbol{\hat \zeta }}\left( n \right) +\mu H{{{\hat f}}_t}\left( n \right). \end{split} $

where $ c = 1/\left( {1+\gamma } \right) $. According to the Sherman-Morrison formula

${\left( {{\boldsymbol{B}}+{\boldsymbol{u}}{{\boldsymbol{v}}^{\text{T}}}} \right)^{ - 1}} = {{\boldsymbol{B}}^{ - 1}} - {{\boldsymbol{B}}^{ - 1}}{\boldsymbol{u}}{\left( {{\boldsymbol{I}}+{{\boldsymbol{v}}^{\text{T}}}{{\boldsymbol{B}}^{ - 1}}{\boldsymbol{u}}} \right)^{ - 1}}{{\boldsymbol{v}}^{\text{T}}}{{\boldsymbol{B}}^{ - 1}} ,$ Eq. (15) simplifies to

$ \begin{split} &{\boldsymbol{\hat g}}_t^*(n)= \dfrac{1}{{\left( {\mu +\alpha } \right)H}}\left( {{\boldsymbol{I}} - \dfrac{{{{{\boldsymbol{\hat x}}}_t}\left( n \right){\boldsymbol{\hat x}}_t^{\text{T}}\left( n \right)}}{{\left( {\mu +\alpha } \right)cH+{\boldsymbol{\hat x}}_t^{\text{T}}\left( n \right){{{\boldsymbol{\hat x}}}_t}\left( n \right)}}} \right) \times \\ & \left( {{{{\boldsymbol{\hat x}}}_t}\left( n \right){{{\boldsymbol{\hat y}}}_t}\left( n \right) + 2\gamma {{{\boldsymbol{\hat x}}}_t}\left( n \right)\widehat {M\left( {{{\boldsymbol{R}}_{t - 1}}} \right)\left( n \right)}} \right. - \gamma {{{\boldsymbol{\hat x}}}_t}\left( n \right)\widehat {M\left( {{{\boldsymbol{R}}_{t - 2}}} \right)\left( n \right)}+\\ &\alpha H{{{\boldsymbol{\hat g}}}_{t - 1}}\left( n \right) - \left. {H{\boldsymbol{\hat \zeta }}\left( n \right)+\mu H{{{{\hat f}}}_t}\left( n \right)} \right). \end{split} $
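The per-pixel rank-one structure exploited by Eq. (16) can be sketched as follows; the function name is hypothetical, the right-hand-side vector `q` is assumed to be assembled beforehand from $\hat{\boldsymbol{y}}$, $\widehat{M(\boldsymbol{R})}$, $\hat{\boldsymbol{g}}_{t-1}$, $\hat{\boldsymbol{\zeta}}$ and $\hat{\boldsymbol{f}}_t$ as in Eq. (15), and real arithmetic with $H=1$ is used for simplicity:

```python
import numpy as np

def solve_g_pixel(x, q, mu, alpha, gamma, H=1.0):
    """Solve Eq. (16) at one pixel n via the Sherman-Morrison identity.

    x : D-dim feature vector x_hat_t(n).
    q : D-dim right-hand side of Eq. (15), assumed precomputed.
    Avoids forming and inverting the D x D matrix explicitly.
    """
    c = 1.0 / (1.0 + gamma)
    s = (mu + alpha) * c * H + x @ x          # scalar (mu+alpha)cH + x^T x
    # (I - x x^T / s) q, computed with vector operations only
    return (q - x * (x @ q) / s) / ((mu + alpha) * H)
```

The result equals $c\left(\hat{\boldsymbol{x}}_t(n)\hat{\boldsymbol{x}}_t^{\rm T}(n) + (\mu+\alpha)cH\,\boldsymbol{I}\right)^{-1}\boldsymbol{q}$ from Eq. (15), but costs $O(D)$ per pixel instead of $O(D^3)$.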

The Lagrange multiplier $ {{\boldsymbol{\hat \zeta }}_t} $ is updated by

$ {\boldsymbol{\hat \zeta }}_t^{\left( {i+1} \right)} = {\boldsymbol{\hat \zeta }}_t^{\left( i \right)}+\mu \left( {{\boldsymbol{\hat g}}_t^{*\left( {i+1} \right)} - {\boldsymbol{\hat f}}_t^{*\left( {i+1} \right)}} \right), $

$ {\mu ^{\left( {i+1} \right)}} = \min \left( {{\mu _{\max }},\delta {\mu ^{\left( i \right)}}} \right). $

where ${{\boldsymbol{\hat f}}}_t^{\left( {i+1} \right)}$ and $ {\boldsymbol{\hat g}}_t^{\left( {i+1} \right)} $ are the results of the $ \left( {i+1} \right) $-th ADMM iteration for ${{\boldsymbol{\hat f}}}_t$ and $ {\boldsymbol{\hat g}}_t $, and $ \delta $ is the update step.
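The multiplier and penalty updates of Eqs. (17)-(18) are a standard ADMM step; a minimal sketch (the cap `mu_max` and step `delta` values are illustrative, not the paper's settings):

```python
import numpy as np

def admm_update(zeta, g_star, f_star, mu, mu_max=1e3, delta=10.0):
    """Eqs. (17)-(18): dual ascent on the multiplier and a monotone
    increase of the penalty factor, capped at mu_max."""
    zeta_next = zeta + mu * (g_star - f_star)
    mu_next = min(mu_max, delta * mu)
    return zeta_next, mu_next

zeta, mu = np.zeros(3), 1.0
g_star = np.array([1.0, 2.0, 3.0])
f_star = np.array([0.5, 2.0, 2.0])
zeta, mu = admm_update(zeta, g_star, f_star, mu)
# zeta is now [0.5, 0.0, 1.0] and mu grows to 10.0
```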

2.4. Model update

The target appearance model is updated by

$ {\boldsymbol{\hat x}}_t^d = \left( {1 - \eta } \right){\boldsymbol{\hat x}}_{t - 1}^d+\eta {\boldsymbol{\hat x}}_t^d. $

where $ \eta $ is the update (learning) rate.
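Eq. (19) is a running linear interpolation; a one-line sketch (function name hypothetical):

```python
import numpy as np

def update_appearance(x_prev, x_new, eta=0.02):
    """Eq. (19): exponential moving average of the appearance model."""
    return (1.0 - eta) * x_prev + eta * x_new

model = update_appearance(np.ones((4, 4)), np.zeros((4, 4)))
# every entry of model is 0.98 = (1 - 0.02) * 1 + 0.02 * 0
```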

3. Experimental results

The experiments were implemented in Matlab on Windows, on a platform with an Intel(R) Xeon(R) E5-2680 V4 CPU at 2.4 GHz and 128 GB of memory. The parameters were set to $ \lambda = 0.01 $, $ \gamma = 1.1 $, $ \alpha = 1.1 $ and $ \eta = 0.02 $. Simulations were conducted on three popular UAV tracking benchmarks (UAVDT, DTB70, UAV123@10 fps), and the proposed algorithm (RDAR) was compared with nine mainstream correlation filter trackers: ECO-HC [15], MCCT_H [16], BACF [10], fDSST [17], SAMF [18], SAMF_CA [19], SRDCF [20], SRDCFdecon [21] and Staple [22].

3.1. Quantitative comparison

The trackers are compared in terms of precision and success rate. Precision is measured by the center location error, i.e., the Euclidean distance between the target center estimated by the tracker and the manually annotated ground-truth center; the precision P is the fraction of frames whose location error is below 20 pixels. Success is measured by the overlap ratio, i.e., the intersection over union of the ground-truth bounding box and the box output by the tracker; the success rate Rs is the fraction of frames whose overlap exceeds 0.5.
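These two metrics are standard in UAV tracking benchmarks and can be sketched directly from their definitions (function names and box convention `(x, y, w, h)` are assumptions):

```python
import numpy as np

def precision(pred_centers, gt_centers, thresh=20.0):
    """Fraction of frames whose center location error is below thresh px."""
    err = np.linalg.norm(pred_centers - gt_centers, axis=1)
    return np.mean(err < thresh)

def success_rate(pred_boxes, gt_boxes, thresh=0.5):
    """Fraction of frames whose IoU (overlap) exceeds thresh.
    Boxes are rows of (x, y, w, h)."""
    x1 = np.maximum(pred_boxes[:, 0], gt_boxes[:, 0])
    y1 = np.maximum(pred_boxes[:, 1], gt_boxes[:, 1])
    x2 = np.minimum(pred_boxes[:, 0] + pred_boxes[:, 2],
                    gt_boxes[:, 0] + gt_boxes[:, 2])
    y2 = np.minimum(pred_boxes[:, 1] + pred_boxes[:, 3],
                    gt_boxes[:, 1] + gt_boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = (pred_boxes[:, 2] * pred_boxes[:, 3]
             + gt_boxes[:, 2] * gt_boxes[:, 3] - inter)
    return np.mean(inter / union > thresh)
```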

Table 1 compares the precision and success rates of the ten trackers on the three datasets. Although ECO-HC ranks second in success rate on UAV123@10 fps and DTB70, it performs only moderately on UAVDT, ranking fourth. RDAR ranks first on both metrics on all three datasets, indicating good generalization: it adapts to different UAV datasets and solves tracking problems in diverse scenes.

Tab.1  Comparison in terms of precision and success rates for ten tracking algorithms on three UAV datasets

Algorithm P Rs
DTB70 UAV123@10 fps UAVDT DTB70 UAV123@10 fps UAVDT
RDAR 0.671 0.672 0.724 0.462 0.488 0.458
ECO-HC 0.643 0.634 0.681 0.453 0.462 0.410
MCCT_H 0.604 0.596 0.667 0.405 0.433 0.402
BACF 0.590 0.572 0.686 0.402 0.413 0.433
fDSST 0.534 0.516 0.666 0.357 0.379 0.383
SAMF_CA 0.532 0.523 0.564 0.346 0.365 0.304
SAMF 0.519 0.466 0.579 0.340 0.326 0.312
SRDCF 0.512 0.575 0.658 0.363 0.423 0.419
SRDCFdecon 0.504 0.584 0.643 0.351 0.429 0.410
Staple 0.365 0.456 0.665 0.265 0.342 0.383



Tables 2-7 compare the precision and success rates of the five best trackers on different attributes of the three datasets. UAV videos typically exhibit the following challenging attributes: aspect ratio variation (ARV), background clutter (BC), deformation (DEF), fast camera motion (FCM), in-plane rotation (IPR), motion blur (MB), occlusion (OCC), out-of-plane rotation (OPR), out-of-view (OV), scale variation (SV), similar objects around (SOA), full occlusion (FOC), partial occlusion (POC), fast motion (FM), illumination variation (IV), viewpoint change (VC) and long-term tracking (LTT). DTB70 covers 11 of these attributes, UAV123@10 fps covers 12, and UAVDT covers 9. On almost all attributes of the three datasets, RDAR performs well in both precision and success rate.

Tab.2  Comparison in terms of precision of five tracking algorithms for different attributes on DTB70 dataset

Algorithm P
ARV BC DEF FCM IPR MB OCC OPR OV SV SOA
RDAR 0.600 0.589 0.656 0.705 0.613 0.675 0.582 0.434 0.628 0.661 0.700
BACF 0.392 0.545 0.448 0.636 0.547 0.639 0.515 0.266 0.650 0.533 0.624
MCCT_H 0.495 0.484 0.550 0.621 0.552 0.502 0.571 0.383 0.573 0.643 0.606
ECO-HC 0.506 0.567 0.584 0.680 0.568 0.640 0.641 0.430 0.557 0.530 0.667
SRDCFdecon 0.343 0.449 0.283 0.574 0.430 0.500 0.456 0.193 0.570 0.473 0.564



Tab.3  Comparison in terms of precision of five tracking algorithms for different attributes on UAV123@10 fps dataset

Algorithm P
ARC BC CM FM FOC IV LR OV POC SV SOB VC
RDAR 0.587 0.469 0.635 0.531 0.464 0.568 0.568 0.547 0.613 0.630 0.678 0.591
BACF 0.478 0.425 0.532 0.407 0.336 0.430 0.431 0.421 0.467 0.525 0.605 0.491
MCCT_H 0.493 0.469 0.544 0.361 0.421 0.477 0.455 0.493 0.542 0.547 0.627 0.484
ECO-HC 0.558 0.511 0.609 0.487 0.454 0.507 0.527 0.522 0.556 0.587 0.637 0.548
SRDCFdecon 0.476 0.427 0.536 0.403 0.427 0.423 0.436 0.483 0.514 0.535 0.621 0.478



Tab.4  Comparison in terms of precision of five tracking algorithms for different attributes on UAVDT dataset

Algorithm P
BC CM IV LO LTT OB OM SV SO
RDAR 0.632 0.687 0.761 0.549 0.877 0.720 0.641 0.647 0.808
BACF 0.599 0.614 0.739 0.488 0.886 0.699 0.604 0.604 0.770
MCCT_H 0.571 0.622 0.703 0.482 0.925 0.667 0.561 0.594 0.796
ECO-HC 0.607 0.647 0.723 0.504 0.924 0.669 0.596 0.607 0.767
SRDCFdecon 0.533 0.588 0.690 0.433 0.812 0.650 0.560 0.565 0.716



Tab.5  Comparison in terms of success rate of five tracking algorithms for different attributes on DTB70 dataset

Algorithm Rs
ARV BC DEF FCM IPR MB OCC OPR OV SV SOA
RDAR 0.396 0.379 0.434 0.483 0.420 0.458 0.413 0.318 0.407 0.465 0.467
BACF 0.273 0.337 0.302 0.436 0.371 0.412 0.348 0.203 0.419 0.392 0.411
MCCT_H 0.334 0.296 0.354 0.410 0.376 0.334 0.377 0.243 0.349 0.439 0.399
ECO-HC 0.376 0.349 0.404 0.469 0.410 0.434 0.431 0.319 0.416 0.430 0.446
SRDCFdecon 0.250 0.285 0.196 0.398 0.311 0.331 0.308 0.150 0.384 0.351 0.369



Tab.6  Comparison in terms of success rate of five tracking algorithms for different attributes on UAV123@10 fps dataset

Algorithm Rs
ARC BC CM FM FOC IV LR OV POC SV SOB VC
RDAR 0.412 0.302 0.464 0.355 0.257 0.391 0.342 0.404 0.427 0.453 0.478 0.422
BACF 0.334 0.275 0.397 0.275 0.173 0.310 0.248 0.321 0.327 0.374 0.424 0.353
MCCT_H 0.357 0.305 0.407 0.260 0.236 0.342 0.257 0.365 0.376 0.396 0.451 0.361
ECO-HC 0.392 0.339 0.449 0.332 0.247 0.362 0.299 0.387 0.391 0.424 0.464 0.400
SRDCFdecon 0.344 0.293 0.399 0.276 0.231 0.314 0.247 0.356 0.361 0.390 0.440 0.360



3.2. Qualitative comparison

RDAR was compared with MCCT_H, ECO-HC, BACF and SRDCFdecon on the three datasets; representative results are shown in Fig. 1. Sequences S0601 and S0801 are from the UAVDT dataset, gull1 and basketball from the DTB70 dataset, and truck2 and group2_2 from the UAV123@10 fps dataset.

Fig.1  Comparison of representative tracking results by different algorithms on DTB70, UAV123@10 fps and UAVDT


Tab.7  Comparison in terms of success rate of five tracking algorithms for different attributes on UAVDT dataset

Algorithm Rs
BC CM IV LO LTT OB OM SV SO
RDAR 0.399 0.435 0.465 0.386 0.577 0.442 0.397 0.435 0.464
BACF 0.367 0.387 0.460 0.340 0.582 0.443 0.371 0.408 0.428
MCCT_H 0.343 0.367 0.415 0.348 0.565 0.390 0.343 0.384 0.389
ECO-HC 0.364 0.379 0.434 0.348 0.573 0.391 0.358 0.390 0.375
SRDCFdecon 0.339 0.374 0.430 0.322 0.515 0.395 0.351 0.389 0.410



In sequence S0601, shot by a UAV at a low-altitude viewpoint, a car waits at a red light at an intersection. As the UAV climbs, the target car is fully occluded by a traffic-light board from frame 25 to frame 40, with similar cars nearby. Thanks to the inter-frame filter deviation constraint, RDAR handles the occlusion and tracks accurately, while all the compared trackers drift and lose the target.

In sequence S0801, shot at a low-altitude viewpoint, a car turns at an intersection under severe background clutter, deformation and partial occlusion. Background interference nearly freezes the bounding boxes of ECO-HC, BACF and SRDCFdecon, which lose the car. After the car completes the turn and the visible part changes from the side to the rear, ECO-HC cannot adapt to the appearance change and loses the target. Through the response deviation constraint, RDAR perceives the appearance change and keeps locating the car accurately.

In sequence gull1, shot at a low-altitude viewpoint, a gull flies quickly along the coast; its color is close to that of the sea, which makes it hard for a tracker to separate target from background. None of the compared trackers can keep locating the gull and all lose it early; only RDAR tracks accurately, handling the similar background stably.

In sequence basketball, a person exercises on a playground, filmed from a long distance. As the UAV climbs, the viewpoint changes considerably and other athletes on the playground interfere. RDAR and MCCT_H adapt to the camera viewpoint change and the interference of similar small objects, tracking relatively stably.

In sequence truck2, a moving truck is filmed from a distant, high-altitude viewpoint. The truck's color is close to the surrounding ground and hard to distinguish, and the truck is also occluded by roadside buildings. The compared trackers cannot cope with the combined interference of background clutter, occlusion and small target size, and lose the target.

In sequence group2_2, a person runs on a beach, filmed from a high-altitude viewpoint while the UAV changes attitude to follow the target. The person's clothing is close in color to the sand and the running posture changes repeatedly, making tracking difficult. The compared trackers cannot adapt to the appearance changes and lose the target, whereas RDAR keeps locating the person accurately and maintains high tracking performance.

3.3. Tracking speed comparison

Table 8 compares the tracking speed of the algorithms on the DTB70 dataset. All algorithms were run on the same platform. RDAR runs at v = 39.0 frames/s; although this is not the fastest among the compared trackers, combined with its precision and success rate, RDAR is competitive.

Tab.8  Comparison of tracking speed for different tracking algorithms on DTB70 dataset

Algorithm v/(frame·s−1) Algorithm v/(frame·s−1)
RDAR 39.0 SAMF_CA 9.1
ECO-HC 62.2 SAMF 10.0
MCCT_H 59.0 SRDCF 10.7
BACF 46.5 SRDCFdecon 6.0
fDSST 132.0 Staple 62.5



3.4. Selection of key parameters

To examine the influence of the constraint parameters $ \gamma $ and $ \alpha $ on tracking performance, Fig. 2 shows the precision and success rate on the UAVDT dataset as $ \gamma $ and $ \alpha $ vary over [0.7, 1.3] in steps of 0.1. Precision peaks at $ \gamma $ = 1.1, and the success rate is identical for $ \gamma $ = 1.1 and 1.2; combining the two metrics, RDAR uses $ \gamma $ = 1.1. Precision peaks at $ \alpha $ = 1.0, and the success rate is identical and maximal for $ \alpha $ = 0.8, 0.9 and 1.0; combining the two metrics, RDAR uses $ \alpha $ = 1.0.
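A sweep of this kind can be sketched as a simple grid search; `evaluate` is a hypothetical callback returning (precision, success rate) for one $(\gamma, \alpha)$ setting, and the combined criterion below is an illustrative assumption, not the paper's selection rule:

```python
import numpy as np

def grid_search(evaluate):
    """Sweep gamma and alpha over [0.7, 1.3] in 0.1 steps, as in Fig. 2."""
    best, best_score = None, -np.inf
    for gamma in np.arange(0.7, 1.31, 0.1):
        for alpha in np.arange(0.7, 1.31, 0.1):
            p, s = evaluate(gamma, alpha)
            score = p + s          # simple combined criterion (assumption)
            if score > best_score:
                best = (round(gamma, 1), round(alpha, 1))
                best_score = score
    return best
```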

Fig.2  Comparison of tracking performance on UAVDT with different parameters


3.5. Ablation study

Table 9 reports the precision and success rates on the three datasets for the baseline BACF augmented with the response deviation (RD) module and the filter deviation (FD) module. BACF alone has the lowest precision and success rate; adding RD or FD individually improves performance over BACF to some extent; RDAR achieves the highest precision and success rate, showing that using RD and FD together substantially improves UAV-to-ground tracking.

Tab.9  Comparison of tracking performance with different modules on three UAV datasets

Module P Rs
DTB70 UAV123@10 fps UAVDT DTB70 UAV123@10 fps UAVDT
BACF 0.590 0.572 0.686 0.402 0.413 0.433
BACF+RD 0.648 0.663 0.713 0.451 0.484 0.457
BACF+FD 0.667 0.649 0.671 0.458 0.470 0.453
RDAR (BACF+RD+FD) 0.671 0.672 0.724 0.462 0.488 0.458



4. Conclusions

(1) Since the response deviation reflects the appearance change of the tracked target, an inter-frame response deviation model is built, which effectively learns appearance changes and improves tracking performance.

(2) Based on the consistency of the inter-frame filter deviation, an inter-frame deviation constraint is introduced into the objective function to learn a robust correlation filter and track the target accurately.

(3) Experiments on the UAVDT, DTB70 and UAV123@10 fps datasets show that the proposed algorithm learns target appearance changes well and achieves high precision and robustness in many challenging scenes.

(4) The proposed algorithm tracks well in short UAV videos, but its performance in long-term UAV scenarios needs further improvement. Future work will combine deep features and develop a correlation filter tracker with a lightweight network structure to meet the practical engineering demands of long-term UAV tracking.

References

[1] LIU Fang, YANG An-zhe, WU Zhi-wei. Adaptive Siamese network based UAV target tracking algorithm [J]. Acta Aeronautica et Astronautica Sinica, 2020, 41(1): 323423.

[2] SUN Rui, FANG Lin-feng, LIANG Qi-li, et al. Siamese network combined learning saliency and online learning interference for aerial object tracking algorithm [J]. Journal of Electronics and Information Technology, 2021, 43(5): 1414-1423. DOI: 10.11999/JEIT200140.

[3] ZHAO Yan-wei, ZHANG Jian, ZHOU Xian-ming, et al. Dynamic tracking and precise landing of UAV based on visual magnetic guidance [J]. Journal of Zhejiang University: Engineering Science, 2021, 55(1): 96-108.

[4] LIU Fang, WANG Hong-juan, HUANG Guang-wei, et al. UAV target tracking algorithm based on adaptive depth network [J]. Acta Aeronautica et Astronautica Sinica, 2019, 40(3): 322332.

[5] FU C H, HE Y J, LIN F L, et al. Robust multi-kernelized correlators for UAV tracking with adaptive context analysis and dynamic weighted filters [J]. Neural Computing and Applications, 2020, 32: 12591-12607. DOI: 10.1007/s00521-020-04716-x.

[6] LI Y M, FU C H, HUANG Z Y, et al. Intermittent contextual learning for keyfilter-aware UAV object tracking using deep convolutional feature [J]. IEEE Transactions on Multimedia, 2021, 23: 810-822. DOI: 10.1109/TMM.2020.2990064.

[7] HE Y J, FU C H, LIN F L, et al. Towards robust visual tracking for unmanned aerial vehicle with tri-attentional correlation filters [C]// 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas: IEEE, 2020: 1575-1582.

[8] BOLME D S, BEVERIDGE J R, DRAPER B A, et al. Visual object tracking using adaptive correlation filters [C]// 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco: IEEE, 2010: 2544-2550.

[9] HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583-596. DOI: 10.1109/TPAMI.2014.2345390.

[10] GALOOGAHI H K, FAGG A, LUCEY S. Learning background-aware correlation filters for visual tracking [C]// 2017 IEEE International Conference on Computer Vision (ICCV). Venice: IEEE, 2017: 1144-1152.

[11] LI F, TIAN C, ZUO W M, et al. Learning spatial-temporal regularized correlation filters for visual tracking [C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City: IEEE, 2018: 4904-4913.

[12] DAI K, WANG D, LU H C, et al. Visual tracking via adaptive spatially-regularized correlation filters [C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 4665-4674.

[13] MA C, HUANG J B, YANG X K, et al. Robust visual tracking via hierarchical convolutional features [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(11): 2709-2723. DOI: 10.1109/TPAMI.2018.2865311.

[14] FU C H, YE J J, XU J T, et al. Disruptor-aware interval-based response inconsistency for correlation filters in real-time aerial tracking [J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(8): 6301-6313. DOI: 10.1109/TGRS.2020.3030265.

[15] DANELLJAN M, BHAT G, KHAN F S, et al. ECO: efficient convolution operators for tracking [C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE, 2017: 6931-6939.

[16] WANG N, ZHOU W G, TIAN Q, et al. Multi-cue correlation filters for robust visual tracking [C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 4844-4853.

[17] DANELLJAN M, HÄGER G, KHAN F S, et al. Discriminative scale space tracking [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(8): 1561-1575.

[18] LI Y, ZHU J. A scale adaptive kernel correlation filter tracker with feature integration [C]// Computer Vision: ECCV 2014 Workshops. [S. l.]: Springer, 2015: 254-265.

[19] MUELLER M, SMITH N, GHANEM B. Context-aware correlation filter tracking [C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu: IEEE, 2017: 1387-1395.

[20] DANELLJAN M, HÄGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking [C]// 2015 IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE, 2015: 4310-4318.

[21] DANELLJAN M, HÄGER G, KHAN F S, et al. Adaptive decontamination of the training set: a unified formulation for discriminative visual tracking [C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE, 2016: 1430-1438.

[22] BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: complementary learners for real-time tracking [C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE, 2016: 1401-1409.