
Object Scale Adaptation Tracking Based on Full-Convolutional Siamese Networks



Abstract

To address tracking failures caused by fast motion and scale variation of the target, an object scale adaptation tracking algorithm based on full-convolutional Siamese networks is proposed. First, a full-convolutional Siamese network is built with the MatConvNet framework, and the trained network extracts multidimensional feature maps from the template image and the test image. A cross-correlation of the two feature maps yields a confidence map, and the point with the highest confidence score is taken as the center of the tracked target. Then, multi-scale samples are drawn around this center, and erroneous samples whose variance is less than half of the template variance are filtered out. Probability histograms of the target template and of the remaining samples are built, the Hellinger distance between the template and each sample is computed, and the best-matching scale is selected as the scale of the target tracking window. Experiments on the OTB-13 dataset show that the proposed method achieves a success rate of 0.832 and a precision of 0.899, higher than those of comparable deep-learning trackers, with an average tracking speed of 42.3 frame/s, which satisfies real-time requirements. On sequences selected for their fast-motion or scale-variation attributes, the proposed method still outperforms the other tracking algorithms.
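The scale-adaptation step summarized above (multi-scale sampling, variance filtering, and histogram comparison via the Hellinger distance) can be illustrated with a short sketch. The following Python snippet is only a minimal illustration under stated assumptions, not the authors' MatConvNet implementation: it assumes grayscale patches with values normalized to [0, 1], a 32-bin intensity histogram, and a hypothetical `candidates` mapping from scale factors to patches cropped around the estimated target center.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two normalized histograms p and q (0 = identical)."""
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

def intensity_histogram(patch, bins=32):
    """Normalized probability histogram of a grayscale patch with values in [0, 1]."""
    counts, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return counts / max(counts.sum(), 1)

def select_scale(template, candidates):
    """Choose the sampling scale whose patch best matches the template histogram.

    `candidates` maps a scale factor to the patch cropped at that scale around
    the center located by the Siamese-network confidence map. Samples whose
    variance is below half of the template variance are discarded as erroneous.
    """
    template_hist = intensity_histogram(template)
    template_var = template.var()
    best_scale, best_dist = 1.0, np.inf
    for scale, patch in candidates.items():
        if patch.var() < 0.5 * template_var:   # variance-based sample filter
            continue
        dist = hellinger(template_hist, intensity_histogram(patch))
        if dist < best_dist:
            best_scale, best_dist = scale, dist
    return best_scale
```

Because the Hellinger distance is symmetric and bounded in [0, 1], the candidate with the smallest distance gives the most plausible window scale; if every candidate is filtered out, the previous scale (here the default 1.0) is kept.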

Supplementary Information

CLC Number: TP391.4

DOI: 10.3788/lop56.011502

Section: Machine Vision

Received: 2018-06-07

Revised: 2018-07-06

Published Online: 2018-07-18

Author Affiliations

Sun Xiaoxia: Department of Computer Science, North China Electric Power University, Baoding 071000, Hebei, China
Pang Chunjiang: Department of Computer Science, North China Electric Power University, Baoding 071000, Hebei, China

Corresponding author: Sun Xiaoxia (949625607@qq.com)


Cite This Paper

Sun Xiaoxia, Pang Chunjiang. Object Scale Adaptation Tracking Based on Full-Convolutional Siamese Networks[J]. Laser & Optoelectronics Progress, 2019, 56(1): 011502

