
Gait recognition method of coal mine personnel based on Two-Stream neural network

Liu Xiaoyang, Liu Jinqiang, Zheng Haolin

Citation: Liu Xiaoyang, Liu Jinqiang, Zheng Haolin. Gait recognition method of coal mine personnel based on Two-Stream neural network[J]. Journal of Mining Science and Technology, 2021, 6(2): 218-227. doi: 10.19606/j.cnki.jmst.2021.02.010

doi: 10.19606/j.cnki.jmst.2021.02.010
Funding:

National Key Research and Development Program of China 2016YFC0801800

National Natural Science Foundation of China 51674269

Fundamental Research Funds for the Central Universities 2020YJSJD11

Details
    About the authors:

    Liu Xiaoyang (1968-), female, born in Linfen, Shanxi; Ph.D., associate professor and master's supervisor. Her research focuses on mine communication and monitoring. Tel: 13651333459, E-mail: liuxy1225@163.com

    Corresponding author:

    Liu Jinqiang (1995-), male, born in Ordos, Inner Mongolia; master's student. His research focuses on mine communication and monitoring. Tel: 18811139961, E-mail: SQT1800407112@student.cumtb.edu.cn

  • CLC number: TP393

Gait recognition method of coal mine personnel based on Two-Stream neural network

  • Abstract: Biometric cues such as the face, fingerprints and the iris are often blurred under the constraints of the complex underground environment, so coal mine personnel identification based on these traits achieves low recognition rates. Building on the residual neural network and the stacked convolutional autoencoder, this paper proposes a gait recognition method based on a two-stream neural network (TS-GAIT). The residual neural network extracts dynamic features that carry spatiotemporal information from gait patterns, the stacked convolutional autoencoder extracts static features that carry physiological information, and a novel feature-fusion scheme produces a joint representation of the dynamic and static features. The extracted features are robust to view angle, clothing and carrying conditions. The method is evaluated on the CASIA-B gait dataset and on a gait dataset of coal mine workers collected by the authors (CM-GAIT). The results show that the method is effective and feasible for gait recognition of underground coal mine personnel, and its accuracy is significantly higher than that of other gait recognition methods.
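The network input throughout the paper is the gait energy image (GEI) of Figure 2. For reference, the standard GEI construction averages the aligned binary silhouettes B_t(x, y) over the N frames of one gait cycle: G(x, y) = (1/N) Σ_t B_t(x, y). A minimal sketch of that computation (our illustration, not the authors' code):

```python
def gait_energy_image(silhouettes):
    """Average a list of H x W binary (0/1) silhouette frames from one
    gait cycle into a single grayscale GEI with values in [0, 1]."""
    n = len(silhouettes)
    h, w = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(frame[y][x] for frame in silhouettes) / n for x in range(w)]
            for y in range(h)]

# Toy example: three 2x2 "silhouettes"; pixels covered in every frame
# stay at 1.0, pixels that swing with the limbs fall between 0 and 1.
gei = gait_energy_image([[[1, 0], [1, 1]],
                         [[1, 0], [0, 1]],
                         [[1, 0], [1, 1]]])
print(gei)
```

Static body regions (head, torso) thus appear bright and stable in the GEI, while swinging limbs produce intermediate intensities — which is what lets a single image carry both static and dynamic gait information.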
  • Figure 1.  Gait recognition key areas

    Figure 2.  Gait energy image (GEI)

    Figure 3.  Architecture of the proposed Two-Stream neural network

    Figure 4.  Residual unit

    Figure 5.  Mainstream network framework

    Figure 6.  Auxiliary stream network framework

    Figure 7.  Auxiliary stream network reconstruction visualization process

    Figure 8.  CASIA-B dataset

    Figure 9.  Gait energy images at different view angles in the CM-GAIT dataset

    Figure 10.  Gait images of underground coal mine personnel

    Figure 11.  Cross-view recognition rate

    Table 1.   Training parameters

    Parameter      Value
    Batch size     64
    Epochs         40
    Learning rate  0.002
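The Table 1 settings fix the size of each optimization step but not how many steps a run takes; that also depends on the dataset size, which the table does not give. A hypothetical illustration of the bookkeeping (the sample count `n_samples` is our assumption, not a figure from the paper):

```python
import math

# Values taken from Table 1
BATCH_SIZE = 64
EPOCHS = 40
LEARNING_RATE = 0.002

def updates_per_training(n_samples, batch_size=BATCH_SIZE, epochs=EPOCHS):
    """Total gradient updates in a run: ceil(n/batch) steps per epoch,
    where the final partial batch of each epoch still counts as a step."""
    return math.ceil(n_samples / batch_size) * epochs

# e.g. with a hypothetical 10 000 training GEIs:
print(updates_per_training(10_000))  # 157 steps/epoch * 40 epochs = 6280
```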

    Table 2.   Mainstream network parameters

    Layer                       Output size/(pixel×pixel)  Feature maps   Filter
    Convolution layer           128×128   2(1+1)→32     7×7 conv, stride 1
    Pooling layer               64×64     32→32         3×3 max pooling, stride 2
    Residual units (1) and (2)  64×64     32→32         [3×3 conv; 3×3 conv] × 2
    Compression layer (1)       32×32     48(16+32)→64  3×3 conv, stride 2
    Residual units (3) and (4)  32×32     64→64         [3×3 conv; 3×3 conv] × 2
    Compression layer (2)       16×16     96(32+64)→128 3×3 conv, stride 2
    Residual units (5) and (6)  16×16     128→128       [3×3 conv; 3×3 conv] × 2
    Compression layer (3)       8×8       128→128       3×3 conv, stride 2
    Output layer                1×1       128→128       8×8 global average pooling
                                -         128→62        62-dim fully connected layer
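The output-size column of Table 2 can be reproduced by tracking how each stride changes the spatial resolution, assuming "same" padding so that only the stride downsamples. A sketch of that bookkeeping (our reconstruction, not the authors' code):

```python
import math

def out_size(size, stride):
    """Spatial size after a same-padded layer with the given stride."""
    return math.ceil(size / stride)

def mainstream_sizes(input_size=128):
    """Walk the 128x128 GEI through the strides of Table 2."""
    sizes = [input_size]            # input GEI: 128x128
    for stride in (1,   # 7x7 conv, stride 1
                   2,   # 3x3 max pool, stride 2
                   1,   # residual units (1) and (2), stride 1
                   2,   # compression layer (1), stride 2
                   1,   # residual units (3) and (4), stride 1
                   2,   # compression layer (2), stride 2
                   1,   # residual units (5) and (6), stride 1
                   2):  # compression layer (3), stride 2
        sizes.append(out_size(sizes[-1], stride))
    sizes.append(1)                 # 8x8 global average pooling -> 1x1
    return sizes

print(mainstream_sizes())  # matches the output-size column of Table 2
```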

    Table 3.   Auxiliary stream network parameters

    Layer     Feature maps  Filter size/(pixel×pixel×maps)  Stride  Batch normalization  Activation
    Conv.1    16    2×2×1    2     Y    ReLU
    Conv.2    32    2×2×16   2     Y    ReLU
    Conv.3    64    2×2×32   2     Y    ReLU
    F-Conv.1  64    2×2×32   1/2   Y    ReLU
    F-Conv.2  32    2×2×16   1/2   Y    ReLU
    F-Conv.3  16    2×2×1    1/2   Y    ReLU
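In Table 3, the encoder (Conv.1-3, stride 2) halves the resolution three times and the decoder (F-Conv.1-3, fractional stride 1/2, i.e. 2× upsampling) doubles it back, so the reconstruction visualized in Figure 7 recovers the input resolution. A sketch of the symmetric size schedule, assuming a 128×128 GEI input (the input size is our assumption):

```python
def scae_sizes(input_size=128):
    """Spatial sizes through the stacked convolutional autoencoder of
    Table 3: three stride-2 convolutions, then three fractional-stride
    (1/2) convolutions that upsample by 2 each."""
    sizes = [input_size]
    for _ in range(3):                 # Conv.1-3: 2x2 kernel, stride 2
        sizes.append(sizes[-1] // 2)
    for _ in range(3):                 # F-Conv.1-3: fractional stride 1/2
        sizes.append(sizes[-1] * 2)
    return sizes

print(scae_sizes(128))  # symmetric: encoder shrinks, decoder restores
```

The symmetry is the point of the design: the 16-map bottleneck forces the auxiliary stream to keep only the static, physiological structure of the GEI, which is what gets fused with the mainstream's dynamic features.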

    Table 4.   Rank-1 gait recognition rates for CASIA-B test sets

    Walking condition   Recognition rate/%
    NM   97.35
    BG   80.21
    CL   45.74

    Table 5.   Multi-view recognition rate under normal walking condition (NM05, NM06)  %

    Gallery angle/(°)   Probe angle/(°)
    0 18 36 54 72 90 108 126 144 162 180
    0 99.19 78.22 52.41 28.22 19.35 21.77 25.80 29.03 42.74 59.67 79.03
    18 72.32 100.0 92.74 68.54 31.45 38.71 35.48 29.88 52.41 69.35 58.06
    36 57.26 87.90 96.77 88.70 59.67 48.38 56.83 65.32 57.25 56.45 32.25
    54 35.48 52.41 83.06 96.77 87.09 70.96 73.39 71.77 59.67 42.74 20.16
    72 25.00 45.97 68.87 81.45 96.77 90.32 83.06 77.42 58.06 41.93 24.19
    90 22.58 35.48 54.84 70.96 94.35 97.58 96.77 70.16 56.45 32.25 20.16
    108 25.80 33.87 56.45 61.29 86.29 91.93 95.96 87.09 66.94 28.22 19.35
    126 28.23 37.90 63.71 65.32 72.58 73.38 91.93 96.77 90.32 58.06 29.03
    144 31.45 44.35 58.87 54.35 58.06 55.64 71.77 88.70 98.38 83.87 73.54
    162 54.03 65.32 53.22 44.35 27.42 27.42 44.35 59.67 89.51 99.19 76.66
    180 70.16 53.22 46.77 24.19 15.32 12.09 18.54 28.22 47.58 78.22 100.0

    Table 6.   Multi-view recognition rate under walking with a bag condition (BG01, BG02)  %

    Gallery angle/(°)   Probe angle/(°)
    0 18 36 54 72 90 108 126 144 162 180
    0 87.09 59.67 26.61 17.07 14.51 14.52 13.90 21.17 37.09 43.08 60.48
    18 55.64 81.45 64.51 44.71 20.96 18.54 18.54 25.80 40.32 46.34 39.51
    36 45.90 72.58 87.90 61.78 40.32 32.26 29.03 37.09 47.58 38.64 32.25
    54 22.58 42.74 58.06 83.73 68.54 50.80 50.00 50.00 45.16 23.57 13.70
    72 20.96 27.41 37.09 57.72 87.09 71.77 66.12 58.87 33.87 23.13 16.12
    90 15.51 23.38 25.00 47.15 76.61 84.67 68.54 58.06 32.25 17.07 13.70
    108 17.74 16.93 25.00 45.52 66.93 75.80 84.67 75.80 45.16 26.51 13.70
    126 19.35 16.12 37.90 47.96 50.00 50.00 63.70 81.45 68.54 36.58 20.16
    144 31.45 38.70 39.51 33.33 25.80 36.29 40.00 63.70 83.06 64.22 33.87
    162 43.54 46.77 34.67 26.01 16.93 12.90 16.12 43.54 62.09 86.99 61.29
    180 59.67 42.74 24.19 16.26 11.29 12.90 12.29 16.93 34.67 55.13 86.29

    Table 7.   Multi-view recognition rate under walking in a coat condition (CL01, CL02)  %

    Gallery angle/(°)   Probe angle/(°)
    0 18 36 54 72 90 108 126 144 162 180
    0 44.35 22.58 18.54 12.09 8.06 8.87 9.67 8.06 14.51 20.96 25.80
    18 29.03 42.74 34.67 23.38 9.67 9.68 11.29 16.93 21.77 28.22 26.61
    36 17.74 37.90 51.61 42.74 25.00 22.58 16.93 26.61 29.03 29.03 27.41
    54 12.09 25.80 37.90 55.64 38.70 26.61 32.25 33.87 31.45 23.38 25.00
    72 9.67 25.00 37.90 43.54 56.45 43.54 41.12 39.51 30.64 21.77 17.74
    90 13.70 21.77 28.22 40.32 48.38 58.87 45.96 49.19 29.83 20.16 15.32
    108 9.67 22.58 29.03 34.67 39.51 45.97 48.38 51.61 34.67 18.54 12.90
    126 11.29 18.54 29.83 25.80 33.87 25.00 33.87 61.29 53.22 34.67 21.74
    144 20.16 25.00 30.64 23.38 17.74 19.35 23.38 37.09 52.41 47.58 22.58
    162 25.00 29.83 23.38 16.12 12.90 14.51 18.54 23.38 39.51 55.64 35.48
    180 34.67 21.77 19.35 8.87 6.45 5.65 8.06 12.09 19.35 28.22 45.96

    Table 8.   Rank-1 gait recognition rates for CM-GAIT test sets  %

    Job type                    18°     54°     90°     Average recognition rate
    Coal miner                  90.00   90.00   100.0   93.33
    Hydraulic support operator  80.00   100.0   100.0   93.33
    Shearer operator            80.00   90.00   90.00   86.67
    All                         83.33   93.33   96.67   91.11

    Table 9.   Identical-view recognition rate  %

    Method    Probe NM   Probe BG   Probe CL   Average recognition rate
    PCA+GEI 98.45 56.23 16.93 57.20
    CNNs 97.56 68.32 35.61 67.16
    GaitGAN 98.75 72.73 41.50 70.99
    ResNet 97.72 78.56 46.98 74.42
    SCAE 96.18 67.11 35.04 66.11
    TS-GAIT 97.94 85.85 52.12 78.64
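The average column of Table 9 is the plain mean of the three probe conditions, rounded to two decimals; a quick arithmetic check (ours, not from the paper's code):

```python
# Table 9 rows as (Probe NM, Probe BG, Probe CL, reported average)
table9 = {
    "PCA+GEI": (98.45, 56.23, 16.93, 57.20),
    "CNNs":    (97.56, 68.32, 35.61, 67.16),
    "GaitGAN": (98.75, 72.73, 41.50, 70.99),
    "ResNet":  (97.72, 78.56, 46.98, 74.42),
    "SCAE":    (96.18, 67.11, 35.04, 66.11),
    "TS-GAIT": (97.94, 85.85, 52.12, 78.64),
}

def row_average(nm, bg, cl):
    """Mean of the three probe rates, rounded to two decimals."""
    return round((nm + bg + cl) / 3, 2)

# Every reported average reproduces from its three probe rates.
for method, (nm, bg, cl, avg) in table9.items():
    assert row_average(nm, bg, cl) == avg, method
print("Table 9 averages verified")
```

The check also makes the headline result concrete: TS-GAIT's lead over the runner-up ResNet comes almost entirely from the harder BG and CL probes, while all methods are near-saturated on NM.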
Figures (11) / Tables (9)
Publication history
  • Received: 2020-05-21
  • Revised: 2020-10-07
  • Published: 2021-04-07
