Feasibility of Human Activity Recognition Using PPG and 1D CNN

In an article recently published in the journal Sensors, researchers investigated the feasibility of photoplethysmography (PPG)-based human activity recognition (HAR) using convolutional neural networks (CNNs).

Study: Feasibility of Human Activity Recognition Using PPG and 1D CNN. Image credit: NicoElNino/Shutterstock

PPG for HAR

HAR involves the automatic detection of everyday physical activities performed by individuals using devices such as cameras or sensors. HAR approaches can be categorized as internal or external sensor-based, depending on the sensing method used. Of these, internal sensors, specifically wearable technologies, have received significant attention due to their simplicity and lightweight nature.

Recently, HAR systems based on wearable biometric signals, such as PPG and electrocardiography (ECG), have been introduced owing to the rising focus on healthcare devices. Although wearable devices capable of recording ECG are suitable for HAR applications, they require disposable electrodes, leading to additional cost and inconvenience.

PPG is a suitable alternative for measuring cardiac rhythm and heart rate: it detects the slight changes in light absorption by vascular tissue as blood flow varies over the cardiac cycle. Pulse oximeter sensors embedded in several off-the-shelf wearable devices, such as smartwatches, can be used to measure PPG. However, PPG signals have not been used extensively for HAR systems; until now, they have served mainly as a supplement to ECG or inertial measurement unit (IMU) signals.

The proposed approach

In this study, researchers proposed and evaluated a HAR system leveraging PPG signals collected from 40 participants while they performed five common daily activities, with the aim of facilitating the practical application of PPG. The gathered data were pre-processed and then classified using a one-dimensional convolutional neural network (1D CNN)-based end-to-end model to assess the feasibility of classifying these activities with a deep learning (DL) architecture.

Forty healthy participants, 20 females and 20 males, with an average age of 23.95 years and an average weight of 63.7 kg, were recruited for this study. All participants were instructed to perform five daily activities, namely running, walking, descending and ascending stairs, sitting (working), and sleeping, while wearing the PPG module on their index fingers.

Raw PPG measurements obtained from all participants were downsampled, segmented, and finally rescaled. The pre-processed signal was then used as the input representation for the 1D CNN model. The model classified the input data into the five daily activities by learning features inherent in the PPG signals.
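The article does not give implementation details for this pipeline, so the following is only a minimal Python sketch of the three reported steps (downsampling, segmentation, and rescaling). The sampling rates, the non-overlapping segmentation, and the min-max rescaling are illustrative assumptions rather than values reported by the researchers; the 10-second window length matches the optimum reported later in the article.

```python
import numpy as np
from scipy.signal import decimate

def preprocess_ppg(raw_signal, raw_fs=100, target_fs=25, window_s=10):
    """Downsample, segment, and rescale one raw PPG recording.

    The sampling rates and the min-max rescaling are illustrative
    assumptions; only the three processing steps come from the article.
    """
    # 1) Downsample by an integer factor (decimate applies an anti-aliasing filter).
    signal = decimate(np.asarray(raw_signal, dtype=float), raw_fs // target_fs)

    # 2) Segment into fixed-length, non-overlapping windows
    #    (a 10-second window is the optimum the article reports).
    win_len = int(window_s * target_fs)
    n_windows = len(signal) // win_len
    windows = signal[: n_windows * win_len].reshape(n_windows, win_len)

    # 3) Rescale each window to [0, 1] (min-max normalization, assumed).
    mins = windows.min(axis=1, keepdims=True)
    maxs = windows.max(axis=1, keepdims=True)
    windows = (windows - mins) / (maxs - mins + 1e-8)

    # Shape (n_windows, win_len, 1): one channel per window for the 1D CNN.
    return windows[..., np.newaxis]
```

Each returned window then has the shape a 1D CNN input layer expects: samples per window along the time axis and a single channel.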

Researchers selected a 1D CNN-based DL architecture for this HAR study because CNNs can learn both global and local features from time-series data. The proposed model consisted of 10 convolutional layers and four max-pooling layers with two different pooling sizes. The numbers of filters in the convolutional layers were 64, 64, 128, 128, 256, 256, 512, 512, 1024, and 1024.

Additionally, the kernel size was five for the first two convolutional layers and three for the rest, while the stride was one for every convolutional layer. A leaky rectified linear unit (Leaky ReLU) was employed as the activation function, and softmax activation was used at the output node.

Moreover, a global average pooling layer converted the feature map obtained from the convolutional layers into a 1D vector, which was passed through five fully connected layers with 5, 64, 128, 256, and 512 nodes, with softmax activation generating the final prediction. To prevent over-fitting, dropout was applied after the pooling layers. Researchers strove to minimize the performance drop while simplifying the model so that the proposed system could be implemented in an embedded environment.
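Based on the layer counts and sizes described above, a hedged Keras sketch of such an architecture might look as follows. The filter counts, kernel sizes, stride, Leaky ReLU activations, global average pooling, fully connected node counts, and softmax output follow the description above; the positions of the four max-pooling layers, their two pool sizes, the dropout rate, and the ordering of the fully connected layers are assumptions, since the article does not specify them.

```python
from tensorflow.keras import layers, models

def build_har_cnn(window_len=250, n_classes=5):
    """Sketch of a 1D CNN with the reported layer counts (details assumed)."""
    filters = [64, 64, 128, 128, 256, 256, 512, 512, 1024, 1024]
    kernels = [5, 5] + [3] * 8          # kernel size 5 for the first two layers, 3 for the rest
    # Assumption: one max-pooling layer after every second convolutional layer,
    # alternating between the two (unspecified) pool sizes, here 3 and 2.
    pool_after = {1: 3, 3: 2, 5: 3, 7: 2}

    inputs = layers.Input(shape=(window_len, 1))
    x = inputs
    for i, (f, k) in enumerate(zip(filters, kernels)):
        x = layers.Conv1D(f, k, strides=1, padding="same")(x)
        x = layers.LeakyReLU()(x)
        if i in pool_after:
            x = layers.MaxPooling1D(pool_size=pool_after[i])(x)
            x = layers.Dropout(0.3)(x)   # dropout after pooling; rate assumed

    # Global average pooling flattens the feature map to a 1D vector.
    x = layers.GlobalAveragePooling1D()(x)
    # Fully connected stack; node counts from the article, ordered large-to-small (assumed).
    for units in [512, 256, 128, 64]:
        x = layers.Dense(units)(x)
        x = layers.LeakyReLU()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)
```

Calling build_har_cnn().summary() shows how the pooling layers shrink the temporal dimension before global average pooling, which keeps the fully connected stack compact, in line with the stated goal of an embedded-friendly model.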

The model's performance was evaluated using cross-subject cross-validation (CV) to mitigate inflated results and ensure generalizability, with accuracy, precision, recall, and F1 measure as evaluation metrics. Researchers also determined the optimal window size by investigating model performance as a function of the input PPG signal length.
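A cross-subject CV loop of this kind can be sketched with scikit-learn's GroupKFold, which guarantees that windows from a given participant never appear in both the training and test sets of a fold. The fold count, optimizer, and training schedule below are assumptions, and build_har_cnn refers to the architecture sketch above.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def cross_subject_cv(X, y, subject_ids, n_folds=5, epochs=30):
    """Hypothetical cross-subject CV loop; X has shape (n_windows, win_len, 1)."""
    results = []
    gkf = GroupKFold(n_splits=n_folds)
    for train_idx, test_idx in gkf.split(X, y, groups=subject_ids):
        model = build_har_cnn(window_len=X.shape[1], n_classes=len(np.unique(y)))
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)

        # Per-fold metrics: accuracy plus macro-averaged precision, recall, and F1.
        y_pred = model.predict(X[test_idx], verbose=0).argmax(axis=1)
        prec, rec, f1, _ = precision_recall_fscore_support(
            y[test_idx], y_pred, average="macro", zero_division=0)
        results.append({"accuracy": accuracy_score(y[test_idx], y_pred),
                        "precision": prec, "recall": rec, "f1": f1})
    return results
```

Averaging the per-fold accuracies and reporting their standard deviation yields figures in the same form as the 95.1 ± 1.6% result quoted below.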

Significance of the work

Experimental results demonstrated the feasibility of the proposed 1D CNN-based approach, which distinguished the five daily activities with an average test accuracy of 95.14% in cross-subject CV. Fold-wise test accuracies were 95.1 ± 1.6%; this minimal variance, with every test fold exceeding 92% accuracy, indicated the model's consistent effectiveness.

Additionally, after comprehensively evaluating model performance across input signal lengths, researchers determined the optimal window size to be 10 seconds. The model also displayed high precision, recall, and F1 measure values.

However, a major limitation of this study was that all participants were healthy university students, which could bias the results: motor abilities differ across age groups, and individuals with underlying health issues exhibit behavioral patterns different from those of healthy people. To summarize, this study's findings effectively validated the potential of PPG-based HAR for practical applications in domains such as fitness and healthcare.
