Comparative Analysis of Deep Learning Methods for Radar-Based Human Activity Recognition

In a paper published in the journal Applied Sciences, researchers underscored the growing importance of radar-based human activity recognition (HAR) in safety and surveillance. They highlighted its superiority over vision-based sensing in challenging conditions and noted increased acceptance due to privacy awareness and cost-effective manufacturing.

Study: Comparative Analysis of Deep Learning Methods for Radar-Based Human Activity Recognition. Image credit: Generated using DALL·E 3

The study reviewed classical Machine Learning (ML) and Deep Learning (DL) approaches, reporting DL's advantage in avoiding manual feature extraction while acknowledging ML's robust empirical basis with lower computational demands. It presented state-of-the-art methods in each category, conducting a comparative study on benchmark datasets to evaluate performance and computational efficiency, aiming to establish a standardized assessment framework for these techniques.

Advances in Radar-Based HAR Technology

Over the past two decades, radar-based HAR has advanced significantly, driven by innovations in semiconductors and radar hardware. Its applications span domains from security and healthcare to automotive safety and smart home solutions. Radar offers distinct advantages over vision-based systems in challenging conditions such as poor illumination, occlusion, and adverse weather. It also raises fewer privacy concerns than vision-based methods because it captures microscale movements rather than explicit target shapes or imagery. Research has explored both classical ML and DL approaches for HAR, highlighting classical ML's reliance on shallow, handcrafted features and DL's capacity for more generalized, long-term solutions.

Foundations of Radar-Based HAR

Radar-based human sensing relies on continuous-wave (CW) radar, which emits a constant-frequency wave, and frequency-modulated CW (FMCW) radar, which transmits chirp signals; both are central to micro-Doppler signature analysis. Pulse radar, distinct from CW, captures data on non-moving targets via short pulses, while ultra-wideband (UWB) radar offers precise ranging but a lower signal-to-noise ratio (SNR). CW and pulse radar pipelines apply preprocessing such as clutter removal and denoising, along with techniques like principal component analysis (PCA), to optimize the data.
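To make these preprocessing steps concrete, below is a minimal Python sketch on synthetic complex returns standing in for real radar data; the array shapes, the mean-subtraction clutter filter, and the ten-component PCA are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic radar data cube (slow time x range bins); a stand-in for real returns.
rng = np.random.default_rng(0)
returns = rng.standard_normal((256, 128)) + 1j * rng.standard_normal((256, 128))

# Simple clutter removal: subtract the mean over slow time, suppressing
# static (zero-Doppler) reflections such as walls and furniture.
clutter_free = returns - returns.mean(axis=0, keepdims=True)

# Doppler view: FFT along slow time per range bin, then magnitude in dB.
doppler = np.fft.fftshift(np.fft.fft(clutter_free, axis=0), axes=0)
doppler_db = 20 * np.log10(np.abs(doppler) + 1e-12)

# PCA compresses each range bin's Doppler profile to a few components.
pca = PCA(n_components=10)
compressed = pca.fit_transform(doppler_db.T)  # one row per range bin
print(compressed.shape)  # (128, 10)
```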

In ML for HAR, feature engineering is pivotal: classical methods require expert-crafted features, while DL automates feature selection, albeit with less interpretability. Challenges in HAR include diverse data scales, confusion between similar activities, and transitions between them, alongside individual differences and class imbalances, all of which amplify the complexity of pattern identification in classical ML.
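As an illustration of expert-crafted features, the hypothetical sketch below computes a few Doppler statistics (centroid and bandwidth) from a micro-Doppler spectrogram; this particular feature set is an assumption for demonstration, not one drawn from the reviewed study.

```python
import numpy as np

def handcrafted_features(spectrogram: np.ndarray) -> np.ndarray:
    """Simple statistics an expert might craft from a micro-Doppler
    spectrogram (frequency bins x time frames); purely illustrative."""
    power = spectrogram - spectrogram.min()           # non-negative weights
    freqs = np.arange(spectrogram.shape[0])[:, None]  # frequency-bin indices
    total = power.sum(axis=0) + 1e-12
    centroid = (freqs * power).sum(axis=0) / total    # per-frame Doppler centroid
    bandwidth = np.sqrt(((freqs - centroid) ** 2 * power).sum(axis=0) / total)
    return np.array([centroid.mean(), centroid.std(),
                     bandwidth.mean(), spectrogram.max(), spectrogram.mean()])

features = handcrafted_features(np.random.rand(128, 100))
print(features.shape)  # (5,)
```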

Diverse Approaches to Radar Classification

The review of methods for radar-based HAR covers various approaches, from Support Vector Machines (SVMs) to Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Stacked Autoencoders (SAEs), Convolutional Autoencoders (CAEs), and Transformers. Researchers extensively explored each method for its suitability in classifying human activities from radar data. SVMs, developed in the 1990s, classify data by finding the hyperplane that maximizes the margin between classes, with the kernel trick enabling nonlinear decision boundaries.
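A minimal scikit-learn sketch of this idea, where an RBF kernel supplies the nonlinear decision boundary; the synthetic features, class count, and hyperparameters are assumptions standing in for radar-derived data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: 200 samples x 5 handcrafted features,
# labeled with 4 activity classes.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = rng.integers(0, 4, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# The RBF kernel applies the kernel trick: a nonlinear margin without
# explicitly mapping features into a high-dimensional space.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```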

Despite their robustness, SVMs faced challenges distinguishing activities with similar micro-Doppler signatures, limiting their accuracy. CNNs, introduced in the 1980s, emerged as robust architectures for HAR, leveraging convolutional and pooling layers for feature extraction from radar data. They proved effective, achieving high accuracies, even when dealing with complex datasets and multiple activities. RNNs, specifically Long Short-Term Memory (LSTM) networks, excelled in memorizing temporal sequences but faced constraints in handling longer sequences and required higher memory bandwidth, impacting their practical application.
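The sketch below contrasts the two paradigms on a micro-Doppler spectrogram: a small CNN that treats it as a one-channel image and an LSTM that treats its time frames as a sequence of frequency profiles. The layer sizes and six-class output are illustrative assumptions, not the architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Treats the spectrogram as a 1-channel image."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, freq_bins, time_frames)
        return self.net(x)

class SpectrogramLSTM(nn.Module):
    """Treats time frames as a sequence of frequency-profile vectors."""
    def __init__(self, n_freq: int = 128, n_classes: int = 6):
        super().__init__()
        self.lstm = nn.LSTM(n_freq, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, time_frames, freq_bins)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])  # classify from the final hidden state

spec = torch.randn(8, 1, 128, 100)
print(SpectrogramCNN()(spec).shape)                              # (8, 6)
print(SpectrogramLSTM()(spec.squeeze(1).transpose(1, 2)).shape)  # (8, 6)
```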

SAEs and CAEs played vital roles in feature extraction and dimensionality reduction. With multiple hidden layers, SAEs compressed input data into latent representations, while CAEs, built from convolutional layers, retained spatial structure and proved advantageous in specific radar-based classification tasks.
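A minimal convolutional autoencoder sketch in this spirit, where the encoder halves the spatial resolution twice and the decoder mirrors it; the depth, channel counts, and input size are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # H/2 x W/2
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # H/4 x W/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)           # spatially compressed latent code
        return self.decoder(z), z

x = torch.randn(4, 1, 128, 96)        # batch of spectrograms
recon, latent = ConvAutoencoder()(x)
print(recon.shape, latent.shape)      # (4, 1, 128, 96) (4, 16, 32, 24)
```

Training such a model to reconstruct its input (for example, with a mean-squared-error loss) yields the compressed latent code, which can then feed a downstream classifier.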

Lastly, Transformers, introduced in 2017, offered parallel processing capabilities, making them adept at capturing long-term relationships within sequences. Their application in radar-based HAR showcased impressive accuracy in classifying diverse activities, with efforts made to create more lightweight Transformer models for practical use. These varied methodologies have demonstrated their strengths and limitations in organizing human activities using radar data, each offering unique advantages in handling different complexities and types of datasets.
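A lightweight Transformer-encoder sketch over spectrogram time frames, with a learned positional embedding and mean pooling over time; the model dimension, head count, and depth are illustrative assumptions rather than a configuration from the paper.

```python
import torch
import torch.nn as nn

class SpectrogramTransformer(nn.Module):
    def __init__(self, n_freq: int = 128, n_frames: int = 100,
                 d_model: int = 64, n_classes: int = 6):
        super().__init__()
        self.embed = nn.Linear(n_freq, d_model)            # per-frame embedding
        self.pos = nn.Parameter(torch.zeros(1, n_frames, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.fc = nn.Linear(d_model, n_classes)

    def forward(self, x):  # x: (batch, time_frames, freq_bins)
        h = self.encoder(self.embed(x) + self.pos)
        return self.fc(h.mean(dim=1))  # average-pool over time

x = torch.randn(8, 100, 128)
print(SpectrogramTransformer()(x).shape)  # torch.Size([8, 6])
```

Self-attention lets every frame attend to every other frame in parallel, which is what makes long-range temporal relationships easier to capture than in a step-by-step RNN.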

Evaluating Radar-Based Activity Recognition Models

The radar-based HAR model evaluation used standard machine learning metrics such as accuracy, recall, precision, and F1 score, and additionally incorporated the macro-averaged Matthews Correlation Coefficient (MCC) and Cohen's Kappa for a more comprehensive assessment of model performance. The CNN-based classifier demonstrated moderate learning, while the RNN-based models showed varied performance: LSTM and Bi-LSTM exhibited different convergence rates and confusion-matrix results, and the GRU network showed signs of overfitting.

The CAE and CNN were robust to color variations in the inputs, whereas the RNN-based methods were more affected. Compression ratios impacted the GRU significantly, while the other models maintained performance at reduced data sizes. Overall, CNNs outperformed RNNs thanks to their spatial feature extraction ability, underscoring CNNs' suitability for this task. Coarse-grained activities were classified more reliably, indicating distinct radar signal properties in larger-scale movements.
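These metrics are straightforward to compute with scikit-learn, as in the sketch below on synthetic labels; note that `matthews_corrcoef` implements the standard multiclass generalization of MCC, used here as a stand-in for the paper's macro-averaged variant.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, cohen_kappa_score)

# Hypothetical predictions for a 4-class activity task: 80% of labels
# are predicted correctly, the rest are random.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 4, size=500)
y_pred = np.where(rng.random(500) < 0.8, y_true, rng.integers(0, 4, size=500))

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1       :", f1_score(y_true, y_pred, average="macro"))
print("MCC      :", matthews_corrcoef(y_true, y_pred))
print("kappa    :", cohen_kappa_score(y_true, y_pred))
```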

All models yielded acceptable results, with varying levels of performance. CNNs proved more suitable owing to their ability to capture spatial features, in contrast to RNNs' limitations in handling image-based inputs. Misclassifications were highest for drinking and picking up objects across all models. While the CNN and LSTM learning curves differed, their final metrics were similar. RNNs exhibited greater variance and a tendency to overfit, which reduced their overall performance relative to previously reported results. Further refinements in hyperparameter tuning or in preprocessing techniques tailored to specific activity patterns could enhance model performance in radar-based HAR.

Summary

To summarize, the paper compared DL methods for radar-based HAR using a shared dataset to assess performance and computational efficiency. Beyond analyzing various metrics and execution times, it proposed enhancing models and refining samples. Additionally, exploring alternative DL methods like autoencoders, Generative Adversarial Networks (GANs), or their combinations might introduce new criteria for selecting the most suitable approach.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.


