Opto-Electronic Advances, 2020, 3 (9): 09190018, Published Online: Jan. 8, 2021  

Visual tracking based on transfer learning of deep salience information

Haorui Zuo 1,2,3,*, Zhiyong Xu 1,2,3, Jianlin Zhang 1,2, Ge Jia 1,2
Author Affiliations
1 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China
Abstract
In this paper, we propose a new visual tracking method based on salience information and deep learning. Salience detection is used to exploit image features that carry salient information. Complex representations of image features can be obtained through the successive layers of a convolutional neural network (CNN). The attention-based salience mechanism of biological vision is similar to the hierarchical feature structure of a CNN, which motivates us to improve the representation ability of the CNN with salience detection. We adopt fully convolutional networks (FCNs) to perform salience detection and transfer part of the network structure to perform salience extraction, which improves the classification ability of the model. The proposed network achieves strong tracking performance by exploiting the salient information. Compared with other state-of-the-art algorithms, our algorithm tracks the target better on open tracking datasets. We achieve an accuracy of 0.5592 on the Visual Object Tracking 2015 (VOT15) dataset. On the Unmanned Aerial Vehicle 123 (UAV123) dataset, the precision and success rate of our tracker are 0.710 and 0.429, respectively.
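The idea described in the abstract, transferring the convolutional layers of a salience-detection FCN into a tracking-by-classification network, can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes, the frozen-backbone transfer, the salience-weighted pooling, and the two-class (target vs. background) head are assumptions made purely for illustration, written in PyTorch.

# Minimal sketch (illustrative, not the authors' code): reuse the convolutional
# layers of a salience-detection FCN as the feature extractor of a
# tracking-by-classification network.
import torch
import torch.nn as nn

class SalienceBackbone(nn.Module):
    """Convolutional part of an FCN assumed to be pre-trained for salience detection."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolution producing a dense salience map (FCN-style prediction).
        self.salience_head = nn.Conv2d(128, 1, 1)

    def forward(self, x):
        feats = self.features(x)
        return feats, torch.sigmoid(self.salience_head(feats))

class SalienceTracker(nn.Module):
    """Tracking-by-classification head on top of transferred salience features."""
    def __init__(self, backbone: SalienceBackbone):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # transfer: keep salience weights fixed
            p.requires_grad = False
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 2),                  # target vs. background score
        )

    def forward(self, patches):
        feats, salience = self.backbone(patches)
        weighted = feats * salience            # emphasise salient regions before classification
        return self.classifier(weighted)

if __name__ == "__main__":
    backbone = SalienceBackbone()              # in practice, loaded from salience pre-training
    tracker = SalienceTracker(backbone)
    candidates = torch.randn(8, 3, 64, 64)     # candidate patches around the previous target position
    scores = tracker(candidates)               # the highest-scoring candidate becomes the new target
    print(scores.shape)                        # torch.Size([8, 2])

In such a tracking-by-classification setup, only the small classifier would be updated online, while the transferred salience backbone supplies fixed, attention-like features.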

Haorui Zuo, Zhiyong Xu, Jianlin Zhang, Ge Jia. Visual tracking based on transfer learning of deep salience information[J]. Opto-Electronic Advances, 2020, 3(9): 09190018.
