Deep Learning for Sheep Loin CT Image Segmentation

An article published in the journal PLOS ONE discussed the challenges of segmenting sheep Loin Computed Tomography (CT) images due to the lack of clear boundaries between internal tissues.

Study: Deep Learning for Sheep Loin CT Image Segmentation. Image credit: Generated using DALL.E.3

Traditional image segmentation methods fail to meet the requirements in this context. To address this, the researchers explored deep learning models. They applied a Fully Convolutional Network (FCN) and five different UNet models to a dataset of 1,471 CT images of the Loin region from 25 Australian White and Dorper rams, using a 5-fold cross-validation approach. After 10 independent runs, the researchers assessed model performance with various evaluation metrics, on which all models demonstrated excellent results.
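The paper's own code is not reproduced here, but the 5-fold cross-validation over the 1,471 images can be sketched in plain Python. The fold count, seed, and round-robin partitioning below are illustrative assumptions, not the authors' exact procedure.

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and partition them into k disjoint folds,
    yielding (train, validation) index lists for each fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # round-robin partition
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val

# With 1,471 images and k=5, each fold holds roughly 294 validation images.
for fold, (train, val) in enumerate(kfold_indices(1471, k=5)):
    print(f"fold {fold}: {len(train)} train / {len(val)} validation images")
```

Repeating this split-train-evaluate loop 10 times with different seeds mirrors the "10 independent runs" the study reports.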

Background

CT is a non-invasive medical imaging technique that employs X-ray beams to produce cross-sectional images of objects. In CT images, Hounsfield Unit (HU) values quantify how strongly different animal tissues attenuate the X-ray beam, which makes these images essential for diagnosing diseases accurately. Image segmentation, the process of isolating areas of interest within CT images, plays a pivotal role in disease diagnosis and phenotype measurement.

However, the complex and overlapping nature of internal organs often makes traditional segmentation methods ineffective. Researchers have actively harnessed the power of deep learning algorithms in image analysis, employing them extensively in medical imaging, agriculture, and animal health for critical tasks like disease diagnosis and carcass analysis. Recent advances in deep learning have significantly improved image segmentation, particularly in the medical field.

Data Preprocessing

This study scanned 25 rams, producing 4,508 CT images, each 512×512 pixels. After careful selection, the researchers extracted 1,471 images containing the Loin region to form the experimental dataset. They used the MicroDicom (Micro Digital Imaging and Communications in Medicine) software to convert the DICOM-sequence CT image data into JSON files and then into Joint Photographic Experts Group (JPG) image files.

The authors utilized SimpleITK (the Simple Insight Segmentation and Registration Toolkit) to convert the Hounsfield Unit (HU) values of the images for describing and analyzing image features. They also applied windowing operations and histogram equalization to enhance the contrast between the segmented area and the image background.
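The windowing and histogram-equalization steps described above can be sketched with NumPy. The soft-tissue window center and width below are common illustrative values, not parameters reported by the study.

```python
import numpy as np

def window_hu(hu, center=40, width=400):
    """Clip HU values to [center - width/2, center + width/2]
    and rescale the window linearly to 8-bit grayscale."""
    lo, hi = center - width / 2, center + width / 2
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

def equalize(img):
    """Global histogram equalization of an 8-bit image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    return cdf[img].astype(np.uint8)

# Stand-in for one 512x512 HU slice (real data would come from the scanner).
hu_slice = np.random.default_rng(0).integers(-1000, 1000, (512, 512))
enhanced = equalize(window_hu(hu_slice))
```

Windowing discards irrelevant densities (air, bone) before equalization stretches the remaining soft-tissue contrast, which is what makes the Loin boundary easier to distinguish from the background.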

The primary challenges in this dataset were variations in the region's size and the absence of a clear boundary between the Loin region and the background in CT images. Experts annotated the images manually using semi-automated segmentation tools in LabelMe, a Python-based open-source annotation tool. These annotations included binary masks of the CT bed and all internal organ tissues, distinguished by their characteristics and semantic information. In the resulting ground truth, black pixels denote the background and white pixels the Loin region.

Data Collection

Researchers collected the dataset from Tianjin Aoqun Animal Husbandry Co., Ltd., China's premier meat sheep breeding farm. The CT imaging took place at the Production Performance Testing Center of Meat Sheep, located within the same facility. The study, conducted in September 2021, involved 25 healthy rams, comprising Australian White and Dorper rams aged between four and six months.

The scanning was performed using the NeuViz 16 Classic scanner system from Neusoft Group Co., Ltd. (China), configured at 120 kV/100 mA with a 512×512 matrix and a 5.00 mm axial thickness. To ensure the animals' safety and comfort, they received a 1.0-milliliter intravenous injection of chlorpromazine hydrochloride for full-body anesthesia before being placed prone on the CT scanner platform. Protective measures, such as hoof positioning and eye coverings, prevented any harm during scanning. After the procedure, the research team moved the animals to a designated recovery area.

Experimental Results

The study employed deep learning models to segment CT images of the sheep Loin, a region crucial to livestock and poultry breeding, despite the challenge of ambiguous boundaries between organs and tissues in CT scans. Six state-of-the-art deep learning models, namely FCN8s, UNet, Attention-UNet, Channel-UNet, ResNet34-UNet, and UNet++, were evaluated for their performance in sheep Loin CT image segmentation. The research used a comprehensive set of evaluation metrics, including the loss (LOSS), Average Hausdorff Distance (AVER_HD), Mean Intersection over Union (MIOU), and Sørensen–Dice coefficient (DICE), to assess the effectiveness of these models.
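Two of the reported overlap metrics, DICE and the IoU underlying MIOU, have compact definitions on binary masks. The sketch below (with a tiny hypothetical 8×8 mask, 1 = Loin, 0 = background) shows how they are typically computed; it is illustrative, not the authors' implementation.

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Soerensen-Dice coefficient: 2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-7):
    """Intersection over Union: |A intersect B| / |A union B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

gt = np.zeros((8, 8), dtype=np.uint8)
gt[2:6, 2:6] = 1                    # ground truth: 16 Loin pixels
pred = np.zeros_like(gt)
pred[3:6, 2:6] = 1                  # prediction: 12 pixels, all inside gt

print(round(dice(pred, gt), 3), round(iou(pred, gt), 3))  # → 0.857 0.75
```

A Dice score of 0.95, as reported for Attention-UNet, therefore means the predicted and ground-truth Loin masks overlap almost completely.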

The study results revealed that Attention-UNet outperformed the other models regarding pixel accuracy, achieving an accuracy of 0.999±0.009. UNet++ also exhibited strong performance, with an accuracy of 0.998±0.015. Regarding running time, ResNet34-UNet proved the most efficient, with a running time of 22.078±0.368 hours. The evaluation metrics, including AVER_HD, MIOU, DICE, and LOSS, further demonstrated the effectiveness of the models, with Attention-UNet consistently leading in these measurements. Overall, the research highlighted the potential of deep learning models for accurate sheep Loin CT image segmentation, a vital aspect of livestock trait measurement.

Conclusion

To summarize, applying six deep learning models for Loin CT image segmentation in meat sheep has emerged as a pivotal step in estimating Loin volume and aiding in breeding program selection. In a comprehensive evaluation using various metrics, Attention-UNet consistently outperformed other methods, achieving exceptional results in Pixel Accuracy, AVER_HD, MIOU, and DICE, with scores of 0.999±0.009, 4.591±0.338, 0.90±0.012, and 0.95±0.007, respectively.

These findings highlight the remarkable capabilities of Attention-UNet in precisely segmenting Loin areas within CT images. This advancement holds significant promise for enhancing livestock breeding programs and refining the accuracy of phenotypic trait measurement in living sheep.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Chandrasekar, Silpaja. (2023, November 05). Deep Learning for Sheep Loin CT Image Segmentation. AZoAi. Retrieved on December 22, 2024 from https://www.azoai.com/news/20231105/Deep-Learning-for-Sheep-Loin-CT-Image-Segmentation.aspx.


