Enhancing MRI Safety: Deep Learning for Motion Artifact Detection

In a paper published in the journal Medical Physics, researchers introduced a deep learning method to detect and measure motion artifacts in undersampled brain magnetic resonance imaging (MRI) scans. The approach utilizes synthetic motion-corrupted data to train a convolutional neural network (CNN), enabling accurate evaluation of real-world scans.

Study: Enhancing MRI Safety: Deep Learning for Motion Artifact Detection. Image credit: nimon/Shutterstock

The researchers used this motion artifact estimator to select a specialized motion-robust model when significant motion is detected and a high-data-consistency model otherwise. The method offers a potential safety mechanism for artificial intelligence (AI)-based approaches, informing the choice of reconstruction method and supporting timely decision-making during scanning.

Related Work

MRI is essential for detailed anatomical imaging without ionizing radiation, but its lengthy acquisition requires the subject to remain still, so even slight movements introduce motion artifacts. Detecting motion in real time, without complex tracking hardware, is therefore crucial.

Techniques such as parallel imaging and compressed sensing accelerate MRI by capturing only partial k-space data and relying on reconstruction methods to recover the image. Yet, despite advances such as AI-driven reconstruction, motion remains a challenge and can lead to misleading reconstructions known as "hallucinations."

Robust MRI Reconstruction Amid Motion

In the framework, a motion corruptor generates training pairs of motion-corrupted images and their motion-free ground truth from motion-free datasets. During training and inference, the networks receive either an undersampled motion-corrupted or an undersampled motion-free image. The framework comprises a motion artifact regressor and two reconstruction networks: a motion-robust network and a high-data-consistency network. These networks reconstruct motion-corrupted or motion-free data, guided by the severity of motion artifacts estimated by the regressor.
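The construction of training pairs can be illustrated with a minimal sketch. The helpers corrupt_fn, sample_mask_fn, and undersample_fn, the roughly 50/50 corruption split, and the exact severity definition below are illustrative assumptions rather than the authors' implementation; the severity label follows the paper's idea of a pixel-wise difference between undersampled motion-corrupted and motion-free images.

```python
import numpy as np

def make_training_pair(clean_image, corrupt_fn, sample_mask_fn, undersample_fn, rng):
    """Return (network input, motion-free target, artifact-severity label)."""
    mask = sample_mask_fn(clean_image.shape, rng)            # one mask shared by both versions
    # Assumption: about half of the training samples are motion-corrupted, the rest are still.
    corrupted = corrupt_fn(clean_image, rng) if rng.random() < 0.5 else clean_image
    corrupted_us = undersample_fn(corrupted, mask)           # what the networks actually see
    clean_us = undersample_fn(clean_image, mask)
    # Severity label: pixel-wise difference between undersampled corrupted and clean images.
    severity = float(np.mean(np.abs(corrupted_us - clean_us)))
    return corrupted_us, clean_image, severity
```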

The motion artifact regression network employs a CNN architecture to estimate motion artifact severity by quantifying pixel-wise differences between undersampled motion-corrupted and motion-free images. The network consists of seven blocks, each combining convolutional layers with rectified linear unit (ReLU) activation, for a total of 9M parameters. The researchers compared it against a Visual Geometry Group 19-layer (VGG19) model with pre-trained weights (IMAGENET1K_V1) and 144M parameters. An ensemble of 21 networks, trained with different seeds, is used to improve prediction consistency.
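A minimal PyTorch sketch of such a seven-block regressor and its ensemble averaging is shown below; the channel width, pooling, and output head are assumptions, so the parameter count will not match the 9M reported in the study.

```python
import torch
import torch.nn as nn

class MotionArtifactRegressor(nn.Module):
    """Seven conv + ReLU blocks followed by a scalar regression head (widths are assumed)."""
    def __init__(self, in_channels: int = 1, width: int = 64):
        super().__init__()
        blocks, ch_in = [], in_channels
        for _ in range(7):  # seven blocks, as described in the study
            blocks += [nn.Conv2d(ch_in, width, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
            ch_in = width
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x)).squeeze(-1)  # one artifact-severity value per image

def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    """Average predictions over independently seeded regressors (21 in the study)."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)
```

Instantiating, for example, models = [MotionArtifactRegressor() for _ in range(21)] and training each with a different random seed would mirror the ensemble strategy described above.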

For reconstruction, a selector framework uses the regressor to estimate the level of motion artifacts in the data. When the system detects substantial motion, it applies the motion-robust approach; otherwise, it employs the regular reconstruction method. Both reconstruction models use a modified adaptive compressed sensing (CS) network architecture that enforces data consistency between the measured and reconstructed k-space.
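The selection logic amounts to thresholding the predicted artifact severity. The sketch below is an assumption about how that routing could be wired; the threshold value of 0.1 and the reconstruction-network interfaces are hypothetical.

```python
import torch

def select_and_reconstruct(undersampled_image, regressor_ensemble,
                           recon_motion_robust, recon_high_dc,
                           motion_threshold: float = 0.1):
    """Route one scan to the motion-robust or the high-data-consistency network."""
    with torch.no_grad():
        severity = torch.stack([m(undersampled_image) for m in regressor_ensemble]).mean().item()
    if severity > motion_threshold:              # substantial motion detected
        return recon_motion_robust(undersampled_image), severity
    return recon_high_dc(undersampled_image), severity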

To simulate motion-corrupted MRI acquisitions, 3D rigid-body motion patterns are applied to still images over multiple timesteps, mimicking subject motion. The researchers generate motion-corrupted k-space datasets by applying 524 rigid-body head motion patterns, each scaled by a random factor between zero and two, which yields diverse training images with varying degrees of motion. Cartesian and Poisson-disk masks are used for retrospective k-space undersampling, producing the undersampled inputs fed to the regressor and reconstruction networks.
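A translation-only 2D toy version of such a corruptor can be written with the Fourier shift theorem: each block of k-space lines (a "shot") is taken from a shifted copy of the image, and a global factor drawn from [0, 2] scales the severity. The shot count, maximum shift, and translation-only motion are assumptions; the study applies full 3D rigid-body head motion patterns.

```python
import numpy as np

def corrupt_kspace_with_translation(image, n_shots=8, max_shift_px=4.0,
                                    scale_range=(0.0, 2.0), rng=None):
    """Simulate inter-shot translational motion by phase-ramping blocks of k-space lines."""
    rng = rng if rng is not None else np.random.default_rng()
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]   # cycles per pixel, phase-encode axis
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]   # cycles per pixel, readout axis
    scale = rng.uniform(*scale_range)                   # global severity factor in [0, 2]
    corrupted = np.zeros_like(kspace)
    shot_edges = np.linspace(0, ny, n_shots + 1, dtype=int)
    for lo, hi in zip(shot_edges[:-1], shot_edges[1:]):
        dy, dx = scale * rng.uniform(-max_shift_px, max_shift_px, size=2)
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))   # Fourier shift theorem
        corrupted[lo:hi, :] = (kspace * phase)[lo:hi, :]    # this shot sees a shifted subject
    return np.abs(np.fft.ifft2(np.fft.ifftshift(corrupted)))
```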

The approach offers a comprehensive framework encompassing motion artifact estimation, reconstruction selection, and motion synthesis, leveraging diverse datasets and undersampling techniques for robust MRI reconstruction in the presence of motion artifacts.

MRI Motion Artifact Study

The experiments covered several facets of the motion artifact regression models and reconstruction techniques. The researchers trained the regression models for 9,000 iterations on an NVIDIA Quadro RTX 6000 graphics processing unit (GPU). Training involved selecting retrospective undersampling factors and central fractions, which is essential for the models to learn motion artifact estimation across varied acceleration levels.
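Retrospective Cartesian undersampling with a randomly drawn acceleration factor and central fraction might look like the sketch below; the candidate values (4x and 8x acceleration with 8% and 4% center fractions) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def random_cartesian_mask(ny, nx, rng=None,
                          accelerations=(4, 8), center_fractions=(0.08, 0.04)):
    """Keep a fully sampled central band plus randomly chosen outer phase-encode lines."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.integers(len(accelerations))
    accel, cf = accelerations[idx], center_fractions[idx]
    mask = np.zeros((ny, 1), dtype=bool)
    n_center = max(1, int(round(ny * cf)))
    mask[(ny - n_center) // 2:(ny + n_center) // 2] = True        # fully sampled center
    mask |= rng.random((ny, 1)) < (1.0 / accel)                   # outer lines at ~1/accel rate
    return np.broadcast_to(mask, (ny, nx))

def undersample(image, mask):
    """Apply the mask in k-space and return the zero-filled reconstruction magnitude."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
```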

Data from the New York University (NYU) FastMRI brain dataset, including T1, T2, and fluid-attenuated inversion recovery (FLAIR) scans, were used for training and validation, amounting to 4267 uncorrupted scans for training and 1304 for validation.

Additionally, prospectively motion-corrupted data from the MR-ART dataset were employed. This dataset featured 148 volunteers performing different motion tasks, enabling an evaluation of the model's performance on real motion-corrupted scans. The model achieved 93.1% accuracy in differentiating between still and motion-affected scans based on the predicted amount of motion, highlighting its efficacy in identifying motion artifacts.
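Operationally, this classification amounts to thresholding the predicted motion score and comparing the result with ground-truth labels; the sketch below illustrates the idea, with the threshold value being an assumption.

```python
import numpy as np

def classification_accuracy(predicted_motion, is_motion_corrupted, threshold=0.1):
    """Label a scan as motion-affected when its predicted score exceeds the threshold."""
    predicted_motion = np.asarray(predicted_motion, dtype=float)
    is_motion_corrupted = np.asarray(is_motion_corrupted, dtype=bool)
    predicted_labels = predicted_motion > threshold
    return float(np.mean(predicted_labels == is_motion_corrupted))

# Example: four scans with predicted scores and ground-truth motion labels.
print(classification_accuracy([0.02, 0.35, 0.08, 0.60], [False, True, False, True]))  # -> 1.0
```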

Furthermore, the reconstruction models underwent training and fine-tuning on the FastMRI dataset. Evaluating the models on motion-corrupted and still data revealed a trade-off: the motion-robust reconstruction performed better on motion-corrupted data but yielded slightly lower image quality on still images than the conventional reconstruction method.

The study also introduced a motion-adaptive reconstruction framework that leveraged the regression model's predictions to select between motion-robust and conventional reconstruction models based on the severity of motion artifacts detected. This framework closely approached optimal performance, nearly matching the individual models' performances on still and motion-corrupted data.

Finally, an investigation into different sampling schemes revealed insights into the predictability of undersampling artifacts and their impact on motion artifact identification. This comprehensive analysis covered training specifics, data sources, model evaluations, and the development of a motion-adaptive reconstruction framework, all geared towards enhancing MRI reconstruction quality in the presence of motion artifacts.

Conclusion

In summary, this study pioneers a deep learning-based motion artifact estimator for MRI scans, achieving over 93% accuracy in differentiating motion from undersampling artifacts. Notably, it identifies motion early during acquisition, offering real-time alerts. Robust evaluations on diverse datasets demonstrate its adaptability to simulated and real-world motion-corrupted scans, showcasing superior performance compared to existing models.

Insights into sampling strategies and a motion-adaptive reconstruction framework highlight potential improvements in artifact identification and reconstruction quality. Challenges in data availability pose future research opportunities, while this innovation promises enhanced safety mechanisms and improved MRI imaging quality in the presence of motion artifacts.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.
