Laser & Optoelectronics Progress, 2020, 57(16): 161505; published online: 2020-08-05
Convolutional Channel Pruning and Weighting for Accurate Location Visual Tracking
Keywords: machine vision; visual tracking; correlation filter; channel pruning; channel weighting; object accurate location; sparse update
Abstract
In this study, a tracking algorithm based on convolutional channel pruning and weighted fusion is proposed to improve the speed and accuracy of convolutional correlation filter algorithms. The algorithm selects a single convolutional layer whose features are suitable for object tracking, uses the feature mean ratio to prune ineffective convolutional channels, and then fuses one-dimensional gray features to strengthen the feature representation. Next, a weighted correlation filter is constructed with the feature mean ratios serving as the convolutional channel weights to predict the target position, and an accurate-location method based on frame-difference mean minimization is used to reduce the prediction error. Finally, the tracking model is sparsely updated to further improve the tracking speed. The algorithm is evaluated on the standard OTB-100 dataset. Results show that the average distance precision and the average speed of the proposed algorithm are 91.3% and 31.8 frames/s, respectively. The proposed algorithm effectively improves the speed and accuracy of object tracking and can continue to track the target under occlusion, scale variation, fast motion, and deformation.
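The channel pruning and weighting step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: here the "feature mean ratio" is assumed to be each channel's mean activation divided by the mean over all channels, and the pruning threshold `keep_ratio_threshold` is a hypothetical parameter; the paper's exact definitions may differ.

```python
import numpy as np

def prune_and_weight_channels(features, keep_ratio_threshold=0.5):
    """Prune low-contribution convolutional channels by feature mean ratio
    and return per-channel weights for the channels that remain.

    features: (H, W, C) activation map from a single convolutional layer.
    Returns (pruned_features, channel_weights, keep_mask).
    """
    # Mean activation per channel, shape (C,).
    channel_means = features.mean(axis=(0, 1))
    # Assumed definition: ratio of each channel mean to the overall mean.
    overall_mean = channel_means.mean()
    mean_ratios = channel_means / (overall_mean + 1e-12)
    # Prune channels whose ratio falls below the (hypothetical) threshold.
    keep = mean_ratios >= keep_ratio_threshold
    pruned = features[:, :, keep]
    # Surviving ratios double as the channel weights for the weighted
    # correlation filter response.
    weights = mean_ratios[keep]
    return pruned, weights, keep
```

In a tracker, the per-channel correlation responses of `pruned` would then be combined as a weighted sum using `weights` before locating the response peak.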
Manqiang Che, Shubin Li, Jinpeng Ge. Convolutional Channel Pruning and Weighting for Accurate Location Visual Tracking[J]. Laser & Optoelectronics Progress, 2020, 57(16): 161505.