Heel-Strike Event Detection Algorithm Based on Convolutional Neural Networks

Abstract

Gait-event detection based on wearable sensors depends heavily on the cooperation of participants, consumes considerable energy, and imposes demanding application conditions. To address these problems, we propose a machine-vision algorithm that accurately detects heel-strike events with an ordinary camera and without requiring any cooperation from the participant. We introduce a novel feature, the consecutive-silhouette difference map (CSD-map), to represent gait patterns. A CSD-map encodes several consecutive binary pedestrian silhouettes extracted from video frames into a single feature map, so that the map carries rich spatial and temporal gait information; differencing different numbers of consecutive silhouettes produces different types of CSD-map. A convolutional neural network is then used to extract features from the CSD-maps and to classify heel-strike events. Video data of 124 subjects from a public database, recorded from five viewing angles under different clothing conditions, are used for training and testing. The experimental results show that the proposed method achieves good detection performance, with a recognition accuracy above 93%.
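
The abstract does not give the exact construction of a CSD-map or the network architecture, so the following is only a minimal sketch of the idea, assuming that pedestrian silhouettes are already available as equally sized binary masks and that consecutive masks are combined by accumulating absolute frame differences; the names build_csd_map and CSDNet, and the small network itself, are illustrative assumptions rather than the authors' implementation.

# Minimal, hedged sketch: build one CSD-map from N consecutive binary
# silhouettes and classify it with a small CNN (heel strike vs. no heel strike).
import numpy as np
import torch
import torch.nn as nn


def build_csd_map(silhouettes):
    """Encode consecutive binary silhouettes (H x W arrays with values 0/1)
    into a single consecutive-silhouette difference map (CSD-map)."""
    sils = np.asarray(silhouettes, dtype=np.float32)
    # Absolute differences between neighbouring frames highlight where the
    # body, and in particular the feet, moved from one frame to the next.
    diffs = np.abs(np.diff(sils, axis=0))
    # Accumulating the differences into one map encodes spatio-temporal gait
    # information: regions that move in many frames appear brighter.
    csd = diffs.sum(axis=0)
    return csd / max(float(csd.max()), 1e-6)


class CSDNet(nn.Module):
    """Illustrative two-class CNN (heel strike / no heel strike) that takes a
    single-channel CSD-map as input; the architecture is an assumption."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, x):
        return self.classifier(self.features(x))


# Usage example: five consecutive 64 x 64 silhouettes -> one CSD-map -> scores.
frames = [np.random.randint(0, 2, (64, 64)) for _ in range(5)]
csd_map = build_csd_map(frames)
scores = CSDNet()(torch.from_numpy(csd_map).view(1, 1, 64, 64))

Passing a different number of consecutive silhouettes to build_csd_map corresponds to the different types of CSD-map mentioned in the abstract.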

Article Information

CLC Number: TP29

DOI: 10.3788/LOP56.211503

Section: Machine Vision

Funding: National Key Research and Development Program of China; National Natural Science Foundation of China; Open Project Fund of the Shanghai Key Laboratory of Crime Scene Evidence

Received: 2019-04-01

Revised: 2019-05-06

Published online: 2019-11-01

Author Affiliations

Li Zhuorong: School of Forensic Science and Technology, People's Public Security University of China, Beijing 100038, China
Wang Kaixuan: School of Forensic Science and Technology, People's Public Security University of China, Beijing 100038, China
He Xinlong: School of Forensic Science and Technology, People's Public Security University of China, Beijing 100038, China
Mi Zhongliang: Shanghai Key Laboratory of Crime Scene Evidence, Shanghai 200083, China
Tang Yunqi: School of Forensic Science and Technology, People's Public Security University of China, Beijing 100038, China

Corresponding author: Tang Yunqi (tangyunqi@ppsuc.edu.cn)

Cite This Paper

Li Zhuorong, Wang Kaixuan, He Xinlong, Mi Zhongliang, Tang Yunqi. Heel-Strike Event Detection Algorithm Based on Convolutional Neural Networks[J]. Laser & Optoelectronics Progress, 2019, 56(21): 211503.

Cited By

【1】Dong Jifu, Liu Chang, Cao Fangwei, Ling Yuan, Gao Xiang. Online adaptive Siamese network tracking algorithm based on attention mechanism. Laser & Optoelectronics Progress, 2020, 57(2): 21510.

【2】Wang Kaixuan, Li Zhuorong, Wang Xiaobin, Yan Shengdong, Tang Yunqi. Automatic classification algorithm for crime scene images of criminal cases. Laser & Optoelectronics Progress, 2020, 57(4): 41009.
