Modal Frequencies Based Human Action Recognition Using Silhouettes And Simplicial Elements

Document Type: Original Article

Authors

1 Delhi Technological University, Department of Electronics & Communication, Bawana Road, Delhi, India

2 IGDTUW, Department of Information Technology, Kashmere Gate, Delhi, India

Abstract

Human action recognition has long been a prominent research problem. Feature descriptors for it fall into two categories: global and local. Global descriptors capture only the structural information of an action, while local descriptors capture only its motion information; either limitation degrades the recognition rate. Hybrid descriptors have been used to improve recognition, but they increase descriptor complexity because global and local features must be fused. To overcome both issues, we propose a new local feature descriptor based on modal frequencies, computed from a silhouette and its simplicial elements with the help of Finite Element Analysis (FEA). This descriptor represents distinctive human poses as modal frequencies, which depend on the stiffness matrix of the body associated with its displacement. Silhouettes of the human body are decomposed into simplicial elements, and the modal frequencies of each silhouette are computed over these elements. These modal frequencies serve as the feature vectors given to a Radial Basis Function-Support Vector Machine (RBF-SVM) classifier. The challenging Weizmann, KTH and IXMAS datasets are used to validate the proposed methodology.

Keywords

Human Action Recognition; Modal Frequency; Silhouette; Simplicial Elements; Finite Element Analysis (FEA); RBF-SVM

References

  1. Bobick A.F., Davis J.W. “The recognition of human movement using temporal templates.” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 23, No. 3, (2001), 257-267, doi: 10.1109/34.910878.
  2. Souvenir R., Babbs J. “Learning the viewpoint manifold for action recognition.” IEEE International Conference on Computer Vision and Pattern Recognition (CVPR’08), (2008), 1-7, doi: 10.1109/CVPR.2008.4587552.
  3. Rahman S.A., Song I., Leung M.H.K., Lee I., Lee K. “Fast action recognition using negative space features.” Expert Systems with Applications, 41, No. 2, (2014), 574-587, https://doi.org/10.1016/j.eswa.2013.07.082.
  4. Gorelick L., Blank M., Shechtman E., Irani M., Basri R. “Actions as space-time shapes.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 29, No. 12, (2007), 2247-2253, doi: 10.1109/TPAMI.2007.70711.
  5. Grundmann M., Meier F., Essa I. “3D shape context and distance transform for action recognition.” 19th International Conference on Pattern Recognition (ICPR’08), Tampa, FL, (2008), 1-4, doi: 10.1109/ICPR.2008.4761435.
  6. Laptev I., Marszalek M., Schmid C., Rozenfeld B. “Learning realistic human actions from movies.” IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, (2008), 1-8, doi: 10.1109/CVPR.2008.4587756.
  7. Wang Y., Mori G. “Human action recognition using semi-latent topic models.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, No. 10, (2009), 1762-1764, doi: 10.1109/TPAMI.2009.43.
  8. Wu X., Xu D., Duan L., Luo J. “Action recognition using context and appearance distribution features.” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, (2011), 489-496, doi: 10.1109/CVPR.2011.5995624.
  9. Iosifidis A., Tefas A., Pitas I. “Discriminant bag of words based representation for human action recognition.” Pattern Recognition Letters, 49, No. 1, (2014), 185-192, https://doi.org/10.1016/j.patrec.2014.07.011.
  10. Liu L., Shao L., Li X., Lu K. “Learning spatio-temporal representations for action recognition: A genetic programming approach.” IEEE Transactions on Cybernetics, 46, No. 1, (2016), 158-170, doi: 10.1109/TCYB.2015.2399172.
  11. Wang J., Zheng H., Gao J., Cen J. “Cross-view action recognition based on a statistical translation framework.” IEEE Transactions on Circuits and Systems for Video Technology, 26, No. 8, (2016), 1461-1475, doi: 10.1109/TCSVT.2014.2382984.
  12. Fu Y., Zhang T., Wang W. “Sparse coding-based space-time video representation for action recognition.” Multimedia Tools and Applications, 76, No. 10, (2017), 12645-12658, https://doi.org/10.1007/s11042-016-3630-9.
  13. Gomez-Conde I., Olivieri D.N. “A KPCA spatio-temporal differential geometric trajectory cloud classifier for recognizing human actions in a CBVR system.” Expert Systems with Applications, 42, No. 13, (2015), 5472-5490, https://doi.org/10.1016/j.eswa.2015.03.010.
  14. Mishra O., Kapoor R., Tripathi M.M. “Human Action Recognition Using Modified Bag of Visual Word based on Spectral Perception.” International Journal of Image, Graphics and Signal Processing, 11, No. 9, (2019), 34-43, https://doi.org/10.5815/ijigsp.2019.09.04.
  15. Kapoor R., Mishra O., Tripathi M.M. “Anomaly detection in group activities based on fuzzy lattices using Schrödinger equation.” Iran Journal of Computer Science, 3, No. 2, (2020), 103-114, https://doi.org/10.1007/s42044-019-00045-y.
  16. Mishra O., Kavimandan P.S., Tripathi M.M., Kapoor R., Yadav K. “Human Action Recognition Using a New Hybrid Descriptor.” In: Harvey D., Kar H., Verma S., Bhadauria V. (eds) Advances in VLSI, Communication, and Signal Processing, Lecture Notes in Electrical Engineering, Vol. 683, (2021), Springer, Singapore, https://doi.org/10.1007/978-981-15-6840-4_43.
  17. Wu D., Shao L. “Silhouette analysis-based action recognition via exploiting human poses.” IEEE Transactions on Circuits and Systems for Video Technology, 23, No. 2, (2013), 236-243, doi: 10.1109/TCSVT.2012.2203731.
  18. Touati R., Mignotte M. “MDS-based multi-axial dimensionality reduction model for human action recognition.” Canadian Conference on Computer and Robot Vision, (2014), 262-267, doi: 10.1109/CRV.2014.42.
  19. Weinland D., Özuysal M., Fua P. “Making action recognition robust to occlusions and viewpoint changes.” In: Daniilidis K., Maragos P., Paragios N. (eds) Computer Vision - ECCV 2010, Lecture Notes in Computer Science, 6313, (2010), 635-648, https://doi.org/10.1007/978-3-642-15558-1_46.
  20. Xia L.M., Huang J.X., Tan L.Z. “Human action recognition based on chaotic invariants.” Journal of Central South University, 20, No. 11, (2013), 3171-3179, https://doi.org/10.1007/s11771-013-1841-z.
  21. Kapoor R., Mishra O., Tripathi M.M. “Human action recognition using descriptor based on selective finite element analysis.” Journal of Electrical Engineering, 70, No. 6, (2019), 443-453, https://doi.org/10.2478/jee-2019-0077.
  22. Kavimandan P.S., Kapoor R., Yadav K. “Human Action Recognition using Prominent Camera.” International Journal of Engineering, Transactions B: Applications, 34, No. 2, (2021), 427-432, doi: 10.5829/ije.2021.34.02b.14.
  23. Chaaraoui A.A., Climent-Pérez P., Flórez-Revuelta F. “Silhouette-based human action recognition using sequences of key poses.” Pattern Recognition Letters, 34, No. 15, (2013), 1799-1807, https://doi.org/10.1016/j.patrec.2013.01.021.
  24. Goudelis G., Karpouzis K., Kollias S. “Exploring trace transform for robust human action recognition.” Pattern Recognition, 46, No. 12, (2013), 3238-3248, https://doi.org/10.1016/j.patcog.2013.06.006.
  25. Lei J., Li G., Zhang J., Guo Q., Tu D. “Continuous action segmentation and recognition using hybrid convolutional neural network-hidden Markov model.” IET Computer Vision, 10, No. 6, (2016), 537-544, https://doi.org/10.1049/iet-cvi.2015.0408.
  26. Liu H., Shu N., Tang Q., Zhang W. “Computational model based on the neural network of visual cortex for human action recognition.” IEEE Transactions on Neural Networks and Learning Systems, 29, No. 5, (2018), 1427-1440, doi: 10.1109/TNNLS.2017.2669522.
  27. Shi Y., Tian Y., Wang Y., Huang T. “Sequential deep trajectory descriptor for action recognition with three-stream CNN.” IEEE Transactions on Multimedia, 19, No. 7, (2017), 1510-1520, doi: 10.1109/TMM.2017.2666540.
  28. Dou J., Li J. “Robust human action recognition based on spatiotemporal descriptors and motion temporal templates.” Optik, 125, No. 7, (2014), 1891-1896, https://doi.org/10.1016/j.ijleo.2013.10.022.
  29. Laptev I., Lindeberg T. “Space-time interest points.” Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, (2003), 432-439, doi: 10.1109/ICCV.2003.1238378.
  30. Chittora A., Mishra O. “Face Recognition Using RBF Kernel Based Support Vector Machine.” International Journal of Future Computer and Communication, 1, No. 3, (2012), 280-283, doi: 10.7763/IJFCC.2012.V1.75.
  31. Han H., Li X.J. “Human action recognition with sparse geometric features.” The Imaging Science Journal, 63, No. 1, (2015), 45-53, doi: 10.1179/1743131X14Y.0000000091.
  32. Mosabbeb E.A., Raahemifar K., Fathy M. “Multi-view human activity recognition in distributed camera sensor networks.” Sensors, 13, No. 7, (2013), 8750-8770, https://doi.org/10.3390/s130708750.
  33. Hosseini M.S., Ghaderi F. “A Hybrid Deep Learning Architecture Using 3D CNNs and GRUs for Human Action Recognition.” International Journal of Engineering, Transactions B: Applications, 33, No. 5, (2020), 959-965, doi: 10.5829/IJE.2020.33.05B.29.