Laser Technology, 2022, 46(2): 239. Online publication: 2022-03-08

Improvement of ECO target tracking algorithm based on GhostNet convolution feature
Author affiliation
College of Computer Science and Information Engineering, Harbin Normal University, Harbin 150025, China
Abstract
To reduce the number of parameters and the computational cost of the feature-extraction network in the efficient convolution operators (ECO) tracking algorithm, an improved ECO target tracking algorithm based on the lightweight GhostNet network was adopted. First, GhostNet was used as the backbone network to extract shallow and deep convolutional features of the image, and global average pooling was applied to downsample the convolutional features and strengthen their representation of the image. Second, after the convolutional features were interpolated together with the hand-crafted features, they were convolved with the current filter in the Fourier domain to localize the target. Finally, a conjugate gradient algorithm was used to optimize a loss function consisting of the response error plus a penalty term, thereby updating the filter. Theoretical analysis and experimental verification of the proposed algorithm were carried out on the OTB2015 and VOT2018 datasets, and comparative target tracking data were obtained. The results show that, compared with the ECO algorithm built on a ResNet feature-extraction network, the proposed algorithm maintains high-precision tracking while reducing the computation of the convolutional feature-extraction stage by 95.75%, reducing its parameters by 79.69%, and increasing tracking speed by 160%. These results provide a reference for research on lightweight target tracking algorithms.
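
As background for the feature-extraction step described above: the GhostNet backbone (reference [20]) obtains most of its feature maps from cheap depthwise operations instead of full convolutions, which is where the reported savings in parameters and computation come from. The following is a minimal PyTorch-style sketch of a Ghost module followed by an average-pooling downsampling step of the kind the abstract mentions; the class name GhostModule, the ratio and dw_size parameters, and the 2×2 pooling window are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Minimal Ghost module sketch (after Han et al., GhostNet, CVPR 2020):
    a thin primary convolution produces a few 'intrinsic' feature maps, and
    cheap depthwise convolutions generate the remaining 'ghost' maps."""

    def __init__(self, in_ch, out_ch, ratio=2, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio        # intrinsic maps from the costly conv
        cheap_ch = out_ch - init_ch      # ghost maps from cheap depthwise ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),  # depthwise: one filter per channel
            nn.BatchNorm2d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)  # intrinsic + ghost features


# The abstract mentions average pooling to downsample the convolutional
# features; the exact pooling granularity is not given, so a 2x2 average
# pool is used here purely as a stand-in.
pool = nn.AvgPool2d(kernel_size=2, stride=2)

feat = GhostModule(in_ch=64, out_ch=128)(torch.randn(1, 64, 56, 56))
feat_ds = pool(feat)   # shape (1, 128, 28, 28)
```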
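The localization step can be read as standard correlation-filter detection: the interpolated sample features are correlated with the current filter via an element-wise product in the Fourier domain, and the target shift is the peak of the inverse-transformed response. A single-channel numpy sketch of that idea is shown below; ECO itself sums over many feature channels and works in a continuous interpolated domain, which is omitted here.

```python
import numpy as np

def localize(features, filt):
    """Correlate a feature map with a filter in the Fourier domain and
    return the displacement of the response peak (single-channel sketch)."""
    F = np.fft.fft2(features)
    H = np.fft.fft2(filt, s=features.shape)
    # Cross-correlation <-> conjugate product in the frequency domain.
    response = np.real(np.fft.ifft2(np.conj(H) * F))
    peak = np.unravel_index(np.argmax(response), response.shape)
    py, px = int(peak[0]), int(peak[1])
    h, w = features.shape
    # Wrap the peak index into a signed displacement about the window centre.
    dy = py if py <= h // 2 else py - h
    dx = px if px <= w // 2 else px - w
    return dy, dx, response

# Usage: a synthetic feature map whose bright blob is shifted by (+5, -3).
feat = np.zeros((64, 64)); feat[37, 29] = 1.0   # target at (32+5, 32-3)
filt = np.zeros((64, 64)); filt[32, 32] = 1.0   # filter "expects" the centre
print(localize(feat, filt)[:2])                  # displacement (5, -3)
```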
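The filter update minimizes a loss made up of the response error plus a penalty term, and ECO solves the resulting linear system with conjugate gradient rather than an explicit matrix inversion. The sketch below illustrates a conjugate-gradient update for a vectorized filter with a plain ridge penalty; ECO's actual penalty is a spatial regularization weight and the system is assembled per Fourier coefficient, so this is only an illustration of the optimization step, not the authors' solver.

```python
import numpy as np

def cg_filter_update(A, y, lam, w0=None, iters=20):
    """Solve (A^T A + lam*I) w = A^T y by conjugate gradient, i.e. minimize
    ||A w - y||^2 + lam * ||w||^2 without forming an explicit inverse."""
    n = A.shape[1]
    w = np.zeros(n) if w0 is None else w0.copy()   # warm start from previous filter
    normal = lambda v: A.T @ (A @ v) + lam * v     # only matrix-vector products needed
    r = A.T @ y - normal(w)                        # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = normal(p)
        alpha = rs / (p @ Ap)
        w += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-8:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return w

# Usage: fit a small filter to synthetic samples/responses.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))   # rows: vectorized training samples
y = rng.standard_normal(200)         # desired correlation responses
w = cg_filter_update(A, y, lam=0.1)
```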
References

[1] BRUHN A, WEICKERT J, SCHNÖRR C. Combining local and global optic flow methods[J]. International Journal of Computer Vision (IJCV), 2005, 61(3): 211-231.

[2] CRUZ-MOTA J, BOGDANOVA I, PAQUIER B, et al. Scale invariant feature transform on the sphere: Theory and applications[J]. International Journal of Computer Vision (IJCV), 2012, 98(2): 217-241.

[3] MEI X, LING H, WU Y, et al. Efficient minimum error bounded particle resampling L1 tracker with occlusion detection[J]. IEEE Transactions on Image Processing, 2013, 22(7): 2661-2675.

[4] COLLINS J, ROGERS T. Understanding the large-distance behavior of transverse-momentum-dependent parton densities and the Collins-Soper evolution kernel[J]. Physical Review D, 2015, 91(7): 074020.

[5] HARE S, GOLODETZ S, SAFFARI A, et al. Struck: Structured output tracking with kernels[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016, 38(10): 2096-2109.

[6] HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2015, 37(3): 583-596.

[7] HENRIQUES J F, CASEIRO R, MARTINS P, et al. Exploiting the circulant structure of tracking-by-detection with kernels[C]//European Conference on Computer Vision (ECCV). Berlin, Heidelberg: Springer, 2012: 702-715.

[8] BOLME D S, BEVERIDGE J R, DRAPER B A, et al. Visual object tracking using adaptive correlation filters[C]//International Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2010: 2544-2550.

[9] LI F, TIAN C, ZUO W, et al. Learning spatial-temporal regularized correlation filters for visual tracking[C]//International Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2018: 4904-4913.

[10] KALAL Z, MIKOLAJCZYK K, MATAS J. Tracking-learning-detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2012, 34(7): 1409-1422.

[11] DANELLJAN M, HAGER G, KHAN F S, et al. Discriminative scale space tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017, 39(8): 1561-1575.

[12] DANELLJAN M, HAGER G, SHAHBAZ KHAN F, et al. Convolutional features for correlation filter based visual tracking[C]//Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV). New York, USA: IEEE, 2015: 58-66.

[13] DANELLJAN M, BHAT G, KHAN F S, et al. ECO: Efficient convolution operators for tracking[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2017: 6638-6646.

[14] BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional siamese networks for object tracking[C]//European Conference on Computer Vision (ECCV). Cham, Switzerland: Springer, 2016: 850-865.

[15] LI B, YAN J, WU W, et al. High performance visual tracking with siamese region proposal network[C]// International Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2018: 8971-8980.

[16] PARK E, BERG A C. Meta-tracker: Fast and robust online adaptation for visual object trackers[C]//European Conference on Computer Vision (ECCV). Cham, Switzerland: Springer, 2018: 569-585.

[17] WANG G, LUO C, SUN X, et al. Tracking by instance detection: A meta-learning approach[C]// International Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2020: 6288-6297.

[18] RAZIYE E, ASKAR H. Infrared dim point target tracking algorithm based on meta learning[J]. Laser Technology, 2021, 45(3): 396-404 (in Chinese).

[19] VOIGTLAENDER P, LUITEN J, TORR P H S, et al. Siam R-CNN: Visual tracking by re-detection[C]//International Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2020: 6578-6588.

[20] HAN K, WANG Y, TIAN Q, et al. GhostNet: More features from cheap operations[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2020: 1580-1589.

[21] KUAN K, MANEK G, LIN J, et al. Region average pooling for context-aware object detection[C]//2017 IEEE International Conference on Image Processing (ICIP). New York, USA: IEEE, 2017: 1347-1351.

[22] DANELLJAN M, HAGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). New York, USA: IEEE, 2015: 4310-4318.

[23] BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: Complementary learners for real-time tracking[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2016: 1401-1409.

[24] GALOOGAHI H K, FAGG A, LUCEY S. Learning background-aware correlation filters for visual tracking[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV). New York, USA: IEEE, 2017: 1135-1143.

[25] LI F, TIAN C, ZUO W, et al. Learning spatial-temporal regularized correlation filters for visual tracking[C]//International Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2018: 4904-4913.

[26] QU Z, YI W, ZHOU R, et al. Scale self-adaption tracking method of defog-PSA-Kcf defogging and dimensionality reduction of foreign matter intrusion along railway lines[J]. IEEE Access, 2019, 7: 126720-126733.

[27] YAN J, ZHONG L, YAO Y, et al. Dual-template adaptive correlation filter for real-time object tracking[J]. Multimedia Tools and Applications, 2021, 80(2): 2355-2376.

[28] DANELLJAN M, HAGER G, KHAN F S, et al. Discriminative scale space tracking[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017, 39(8): 1561-1575.

LIU Chaojun, DUAN Xiping, XIE Baowen. Improvement of ECO target tracking algorithm based on GhostNet convolution feature[J]. Laser Technology, 2022, 46(2): 239.
