119 / 2019-06-15 08:26:25
Multi-information Complementarity Neural Network for Multi-modal Action Recognition
action recognition, complementary information, two-stream network, weighted fusion
Full paper under review
Chuan Ding / Zhengzhou University
Yun Tie / Zhengzhou University
Lin Qi / Zhengzhou University
Multi-modal methods play an important role in action recognition: each modality extracts different features for analyzing the same motion. However, many existing approaches treat each modality in isolation, which leads to underuse of the complementary information in multi-modal data. Skeleton data are robust to variations in illumination, background, and viewpoint, while RGB performs better when other objects strongly influence the action being recognized, such as drinking water or eating snacks. In this paper, we propose a novel Multi-information Complementarity Neural Network (MiCNN) for human action recognition to address this problem. The proposed MiCNN learns features from both skeleton and RGB data to ensure a rich representation. In addition, we design a weighted fusion block that distributes the weights reasonably, allowing each modality to contribute its respective strengths. Experiments on the NTU RGB+D dataset demonstrate the excellent performance of our scheme, which outperforms the other methods we are aware of.
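The weighted-fusion idea from the abstract can be illustrated with a minimal sketch: per-modality class scores are combined with softmax-normalized weights so that each stream contributes in proportion to a learned weight. The function name, the two-modality setup, and the softmax normalization are assumptions for illustration; the paper's actual fusion block is not specified here.

```python
import numpy as np

def weighted_fusion(skeleton_scores, rgb_scores, fusion_logits):
    """Fuse two modality score vectors (hypothetical form of the fusion block).

    fusion_logits: length-2 array of learnable weights, one per modality;
    softmax-normalized so the modality weights sum to 1.
    """
    w = np.exp(fusion_logits) / np.exp(fusion_logits).sum()
    return w[0] * skeleton_scores + w[1] * rgb_scores

# Example: class scores from each stream for three action classes.
skel = np.array([0.7, 0.2, 0.1])  # skeleton stream favors class 0
rgb = np.array([0.3, 0.5, 0.2])   # RGB stream favors class 1
fused = weighted_fusion(skel, rgb, np.array([0.0, 0.0]))
# Equal logits -> equal weights (0.5 each), so fusion is a plain average.
print(fused)  # [0.5  0.35 0.15]
```

With unequal logits the fusion shifts toward the more reliable modality, which is how a learned weighting can let each stream "draw on its respective strengths."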
Important Dates
  • Conference dates

    October 09-10, 2019

  • July 20, 2019

    Draft submission deadline

  • October 10, 2019

    Registration deadline

Organizer
Xi’an Jiaotong University