A Framework for Personalized Human Activity Recognition


Eriş H. A., ERTÜRK M. A., Aydın M. A.

International Journal of Pattern Recognition and Artificial Intelligence, vol. 37, no. 10, 2023 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 37 Issue: 10
  • Publication Date: 2023
  • DOI: 10.1142/s0218001423560165
  • Journal Name: International Journal of Pattern Recognition and Artificial Intelligence
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, Metadex, Civil Engineering Abstracts
  • Keywords: CNN, Human activity recognition (HAR), LSTM, personalized activity recognition, RNN
  • İstanbul University-Cerrahpaşa Affiliated: Yes

Abstract

In today's world, Human Activity Recognition (HAR) over video streams is used in many aspects of daily life; for example, automated surveillance systems rely on it, and sports statistics are computed from video footage with its help. Activity detection is not a new subject, and several methods are available; however, the most recent and most promising techniques rely on Convolutional Neural Networks (CNNs). CNNs are primarily applied to single image frames to identify an object, scene, or activity categorically. We exploit this property to adapt CNNs to video streams for HAR. In this study, we present a Personalized HAR (PHAR) framework that improves activity recognition accuracy by incorporating Object Detection (OD). First, we review the state-of-the-art HAR and OD methods in the literature. Then we illustrate our framework with two new Single Person Human Activity Recognition models. Finally, the performance of the new framework is evaluated against well-known activity detection methods. Results show that our new PHAR model, with 95% accuracy, outperforms the CNN-LSTM-based reference model (90%). Moreover, this study introduces a new metric, the Average Accuracy Score (AAS); the PHAR models achieve approximately 94% AAS, better than the reference model's 89%.
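
As a rough illustration of the frame-wise approach the abstract describes (a CNN classifying single frames, adapted to video by aggregating frame features over time), the sketch below shows a generic CNN-LSTM video classifier. This is a hypothetical example, not the authors' PHAR implementation: it assumes PyTorch with a ResNet-18 backbone, and all class names, sizes, and parameters are illustrative.

```python
# Hypothetical CNN-LSTM sketch for video-based HAR (not the paper's code).
# A CNN extracts features from each frame; an LSTM summarizes the frame
# sequence; a linear head predicts the activity for the whole clip.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CNNLSTMActivityClassifier(nn.Module):
    def __init__(self, num_activities: int, hidden_size: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)   # per-frame CNN feature extractor
        backbone.fc = nn.Identity()         # keep the 512-d pooled features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_activities)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, 3, H, W) -- a short video segment
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)      # final hidden state summarizes the clip
        return self.head(h_n[-1])           # activity logits per clip

model = CNNLSTMActivityClassifier(num_activities=5)
logits = model(torch.randn(2, 16, 3, 224, 224))  # 2 clips of 16 frames each
```

In such a pipeline, the personalization step described in the paper would plausibly sit in front of this classifier, with an object detector localizing the person of interest so the CNN sees person-centered crops rather than full frames; the exact mechanism is detailed in the paper itself.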