AI-Based Elderly and Visually Impaired Human Activity Monitoring

In a paper published in the journal Scientific Reports, researchers introduced the Elderly and Visually Impaired Human Activity Monitoring (EV-HAM) system, designed to aid elderly and visually impaired people by monitoring their routines and intervening in crises. The system builds on advances in artificial intelligence (AI), digital twins, and Wi-Sense, an environment-independent human activity identification approach based on Wi-Fi Channel State Information (Wi-Fi CSI) data.

Study: Transforming Human-Centric Care: AI-Based Elderly and Visually Impaired Human Activity Monitoring. Image credit: metamorworks/Shutterstock

Wi-Sense employs a Deep Hybrid Convolutional Neural Network (DHCNN), CSI ratio adjustments, and t-distributed Stochastic Neighbor Embedding (t-SNE) to detect the micro-Doppler fingerprints of activities, identifying actions with 99% accuracy.

Related Work

Previous research in human activity recognition (HAR) has focused on analyzing sensor and camera data to anticipate risky situations, uncovering user patterns and employing deep learning methods to extract features for accurate activity classification. Wearable sensors and smartphones have effectively detected diverse activities such as walking, running, and climbing, with deep learning models surpassing traditional classifiers.

Additionally, advancements in wheelchair sensors and Internet of Things (IoT)-based security responses have highlighted the significance of HAR in healthcare and safety monitoring. These studies emphasize the potential of HAR systems to enhance healthcare and safety measures through predictive analysis and efficient tracking of human activities.

Advanced Methodology for Activity Monitoring

The proposed EV-HAM methodology is an intricate system in which diverse sensors capture raw inputs such as acoustic data from smartphones, Wi-Fi, watches, Bluetooth, and other sources. Researchers derived attributes from these readings, such as average, range, and intensity, and fed them into a Pattern Recognition (PR) model central to HAR. Wearable devices play a pivotal role in capturing physical activities like rest, movement, lying down, climbing, running, and jumping, providing valuable insight into daily routines and health status. Such sensors, like those found in Fitbit devices, supply data on energy expenditure, aiding chronic illness prevention and healthcare monitoring.
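As a rough illustration of this first step, the sketch below collapses one window of raw sensor samples into the average, range, and intensity attributes mentioned above. The window length, sampling rate, and use of root-mean-square as the intensity measure are illustrative assumptions, not details from the paper.

```python
import numpy as np

def extract_window_features(window: np.ndarray) -> np.ndarray:
    """Summarize one window of raw sensor samples into simple attributes."""
    mean = window.mean()                        # average signal level
    value_range = window.max() - window.min()   # spread between extremes
    intensity = np.sqrt(np.mean(window ** 2))   # RMS as a simple intensity proxy
    return np.array([mean, value_range, intensity])

# Example: a two-second accelerometer-magnitude window sampled at 50 Hz.
rng = np.random.default_rng(0)
window = rng.normal(loc=9.8, scale=0.5, size=100)  # hovers around gravity
print(extract_window_features(window))
```

In practice, a feature vector like this would be computed for every sliding window and stacked into the dataset consumed by the PR model.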

Wearable technology now ranges from smartphones to e-tattoos that can cover an individual from head to toe, made possible by advances in micro-electro-mechanical systems. HAR systems aim to enhance user-computer interaction by comprehending human behavior within defined activity sets. A crucial aspect is processing sensor readings through deep learning models to identify and categorize activities accurately.

The Wi-Fi sensing module focuses on gleaning time-dependent channel characteristics affected by human actions, analyzing radio-frequency (RF) signals to understand how they change over time under environmental and human influences. Researchers employed CSI, Channel Impulse Response (CIR), and spectrogram approaches to extract micro-Doppler signals, categorizing user behavior through a Hybrid CNN model.
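The spectrogram step can be approximated with standard signal-processing tools. The toy example below applies a short-time Fourier transform to a synthetic amplitude stream so that motion-induced frequency shifts appear as ridges in a time-frequency map; the sampling rate, window parameters, and the synthetic signal itself are assumptions for illustration, not values from the study.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000                       # assumed CSI sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
# Synthetic stand-in for a CSI amplitude stream: a steady component plus a
# frequency-modulated term mimicking limb-motion micro-Doppler.
signal = 1.0 + 0.3 * np.sin(2 * np.pi * (20 + 10 * np.sin(2 * np.pi * 0.5 * t)) * t)

# Short-time Fourier analysis converts the 1-D stream into a time-frequency
# map in which activity-specific micro-Doppler fingerprints show up as ridges.
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
print(Sxx.shape)                # (frequency bins, time frames) fed to the classifier
```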

The DHCNN architecture combines convolutional neural network (CNN), Long Short-Term Memory (LSTM), and attention components. It provides a comprehensive framework by employing an attention mechanism for sequence predictions, CNN layers for feature extraction, and LSTM layers to capture temporal characteristics. A detailed algorithm delineates the steps: dataset gathering, preprocessing, model creation, training, evaluation, and deployment. The ethical considerations note that no human subjects or animals were involved in the experimentation process.
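A minimal Keras sketch of such a hybrid stack might look like the following. The layer sizes, depth, and use of dot-product self-attention are illustrative guesses, since the article does not spell out the exact DHCNN configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dhcnn(window_len=128, channels=30, num_classes=6):
    """Hybrid CNN + LSTM + attention classifier in the spirit of DHCNN."""
    inputs = layers.Input(shape=(window_len, channels))   # one CSI window
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(2)(x)                         # CNN: local feature extraction
    x = layers.LSTM(64, return_sequences=True)(x)         # LSTM: temporal dependencies
    x = layers.Attention()([x, x])                        # attention over the sequence
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_dhcnn()
model.summary()
```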

This comprehensive methodology showcases the integration of diverse sensor inputs, data processing through deep learning models, and the deployment of a sophisticated neural network architecture for accurate HAR, crucial for aiding the elderly and visually impaired in monitoring and maintaining their daily activities and health.

Enhanced HAR Methodology: Evaluation Insights

The experimental setup for HAR used the scikit-learn and Keras libraries with a TensorFlow backend to train the classification models, and an 80-20 split divided the data into training and testing sets. The datasets used to construct and train the model leveraged changes in wireless signal reflections caused by body movement, which alter the CSI. Two publicly available datasets were disregarded because their hardware was outdated and they failed to capture a realistic range of human behavior and real-world conditions.
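A minimal version of that split, using placeholder arrays in place of the CSI-derived windows, could look like this; the array shapes and the use of stratification are assumptions added for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for CSI-derived feature windows and labels.
X = np.random.rand(1000, 128, 30)    # (samples, time steps, subcarriers): assumed shape
y = np.random.randint(0, 6, size=1000)

# The 80-20 split described above; stratifying keeps class proportions intact.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
print(X_train.shape, X_test.shape)   # (800, 128, 30) (200, 128, 30)
```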

After data cleaning, which involved eliminating duplicates and balancing the distribution of activities, researchers applied dimensionality reduction techniques such as Principal Component Analysis (PCA) and t-SNE. PCA reduced feature dimensionality by generating linear combinations of attributes, while t-SNE, a non-linear method, handled the high-dimensional feature datasets. Researchers then conducted a hyperparameter search across frameworks to optimize the proposed Hybrid CNN model using the Adam optimizer.
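The two reduction steps chain together naturally in scikit-learn, as in the sketch below; the component counts and the choice to run t-SNE on the PCA output are common practice rather than settings reported in the article.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X = np.random.rand(500, 64)          # stand-in for cleaned, high-dimensional features

# PCA: linear combinations of attributes that retain most of the variance.
X_pca = PCA(n_components=20).fit_transform(X)

# t-SNE: non-linear embedding, typically applied to the PCA output for speed.
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_pca)
print(X_pca.shape, X_tsne.shape)     # (500, 20) (500, 2)
```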

The evaluation of the improved DHCNN model used precision, accuracy, recall, and F-measure metrics. Precision and recall captured the system's ability to identify specific classes, while the F-measure, the harmonic mean of the two, summarized their balance. Researchers tabulated the EV-HAM method's performance measures and presented graphical model evaluations. The confusion matrix showed the DHCNN classifier's overall accuracy of 99% along with per-activity identification rates.
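These metrics and the confusion matrix are one-liners in scikit-learn; the sketch below uses synthetic labels for six hypothetical activity classes rather than the study's actual predictions.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 6, size=200)      # six hypothetical activity classes
y_pred = y_true.copy()
flip = rng.random(200) < 0.05              # corrupt ~5% of predictions
y_pred[flip] = rng.integers(0, 6, size=int(flip.sum()))

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F-measure:", f1_score(y_true, y_pred, average="macro"))
print(confusion_matrix(y_true, y_pred))    # rows: true class, columns: predicted
```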

Receiver operating characteristic (ROC) curves aided in assessing the model's classification performance, plotting its true positive rate against its false positive rate. The curves confirmed the classifier's effectiveness, with exceptional individual classification accuracy for some activities.
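For a multi-class HAR problem, per-activity ROC curves are usually computed one-vs-rest, as sketched below with synthetic scores in place of the model's real softmax outputs.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(1)
n_classes = 6
y_true = rng.integers(0, n_classes, size=200)
scores = rng.random((200, n_classes))       # stand-in for softmax probabilities
scores[np.arange(200), y_true] += 1.0       # bias scores toward the correct class
scores /= scores.sum(axis=1, keepdims=True)

y_bin = label_binarize(y_true, classes=list(range(n_classes)))
for k in range(n_classes):                  # one-vs-rest curve per activity
    fpr, tpr, _ = roc_curve(y_bin[:, k], scores[:, k])
    print(f"class {k}: AUC = {auc(fpr, tpr):.3f}")
```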

The proposed model's ability to learn both spatial and temporal information contributed to its superior performance, and it learned at an accelerated rate despite its complex structure. However, such models demand labeled data and are best suited to complex HAR scenarios requiring sensor fusion. Deep learning approaches such as LSTM offer high accuracy but require considerable computing resources; recurrent neural networks (RNNs) run faster but less reliably than LSTMs.

Conclusion

To sum up, the EV-HAM system is a groundbreaking approach in healthcare, emphasizing HAR through IoT and deep learning methods. Wi-Fi wearable sensors enable continuous monitoring, offering insight into daily activities and irregular patterns. The system optimizes data acquisition, achieving 99% accuracy in activity identification and outperforming existing methods. Its adaptable architecture points to future integrations that could further refine elderly monitoring.

Journal reference:
Transforming Human-Centric Care: AI-Based Elderly and Visually Impaired Human Activity Monitoring. Scientific Reports.

Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

