Automated Lameness Detection in Sows

In a paper published in the journal Scientific Reports, researchers addressed the challenge of detecting early-stage lameness in sows by developing an automated, non-invasive system using computer vision. They created a repository of videos showing sows with varying locomotion scores, assessed by experts using Zinpro Corporation's locomotion scoring system.

Study: Automated Lameness Detection in Sows. Image Credit: ccpixx photography/Shutterstock.com

Using stereo cameras to record 2D videos, the researchers trained computational models with the Social LEAP Estimates Animal Poses (SLEAP) framework to track key points on the sows' bodies. The top-performing models accurately tracked key points from both lateral and dorsal views. This approach offers a precision livestock farming (PLF) tool for objectively assessing sow locomotion, with the potential to enhance animal welfare.
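
For readers unfamiliar with SLEAP, the sketch below shows how a trained model can be applied to a recorded video through SLEAP's high-level Python API. The paths, model name, and skeleton size are hypothetical placeholders, not artifacts released with the study.

```python
# Minimal sketch: run a trained SLEAP model on one sow video.
# Paths and model name are hypothetical; the study's checkpoints
# are not part of the public video repository.
import sleap

video = sleap.load_video("sow_lateral_042.mp4")    # one lateral-view recording
predictor = sleap.load_model("models/unet_7kp")    # e.g., a U-Net, 7-keypoint model
labels = predictor.predict(video)                  # per-frame pose predictions

# Each labeled frame holds one instance (a single sow) with (x, y) key points.
for lf in labels:
    for instance in lf.instances:
        print(lf.frame_idx, instance.numpy())      # (n_keypoints, 2) pixel coords
```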

Background

Past work has highlighted the significant impact of lameness on animal welfare and the limitations of traditional observational methods, which are subjective and time-consuming, in detecting early-stage lameness in sows. PLF technologies, including sensors and cameras, offer objective and efficient alternatives for monitoring animal health.

Study Overview

Data were collected from a commercial pig farm in Jaguariaíva, Paraná, Brazil, with ethical approval from the Ethics Committee on Animal Use (CEUA) of the Faculty of Veterinary Medicine and Animal Science at the University of São Paulo (USP). The Robotics and Automation Group for Biosystems Engineering (RAEB) at USP supported the creation of the computer vision models.

A sample of 500 sows was individually recorded in 2D videos to create a video image repository. The filming setup was constructed within a vacant pen, using a solid floor corridor and white-painted walls to enhance contrast.

The team recorded 1,207 videos but discarded roughly 40% due to filming issues, converting the remaining 364 lateral and 336 dorsal videos to MP4 format. Thirteen experts evaluated the lateral videos using the Zinpro swine locomotion scoring system, assigning scores from 0 to 3.

Scores with more than 50% agreement among experts were considered final (a majority-vote rule, sketched below), and the team removed outlier experts based on statistical analysis. Only videos with corresponding lateral and dorsal views were used in the SLEAP software, with 106 videos used to train models on deep learning (DL) architectures such as LEAP, U-shaped network (U-Net), residual network with 50 layers (ResNet-50), ResNet-101, and ResNet-152.
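
As an illustration of that rule, the sketch below aggregates one video's panel of scores; the example scores and the handling of excluded videos are assumptions, not the authors' code.

```python
from collections import Counter

def final_score(expert_scores, min_agreement=0.5):
    """Return the consensus locomotion score (0-3) if more than half
    of the experts agree, else None (the video is excluded)."""
    score, votes = Counter(expert_scores).most_common(1)[0]
    return score if votes / len(expert_scores) > min_agreement else None

# Hypothetical panel: 13 experts score one lateral video.
scores = [1, 1, 1, 1, 1, 1, 1, 2, 1, 0, 1, 1, 2]
print(final_score(scores))  # -> 1 (10/13 ≈ 77% agreement)
```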

Skeletons with different key points were defined for pose estimation, and analysts manually labeled the frames for training. To ensure accurate pose estimation, the team evaluated the models using object keypoint similarity (OKS), mean average precision (mAP), and distance average (dist. avg).
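
For context, OKS is the standard COCO-style keypoint metric: each prediction's squared pixel distance from the ground truth is normalized by the object scale and a per-keypoint falloff constant, then averaged over the labeled key points. The sketch below illustrates the computation; the skeleton size, falloff constants, and noise level are hypothetical.

```python
import numpy as np

def oks(gt, pred, scale, kappa, visible):
    """Object keypoint similarity (COCO-style).
    gt/pred: (K, 2) pixel coords; scale: object scale (e.g., sqrt of
    bounding-box area); kappa: (K,) per-keypoint falloff constants;
    visible: (K,) boolean mask of labeled key points."""
    d2 = np.sum((gt - pred) ** 2, axis=1)        # squared pixel distances
    e = d2 / (2.0 * scale**2 * kappa**2)         # scale-normalized error
    return np.exp(-e)[visible].mean()

# Hypothetical 7-keypoint skeleton with ~2 px prediction noise.
gt = np.random.rand(7, 2) * 100
pred = gt + np.random.randn(7, 2) * 2.0
print(oks(gt, pred, scale=100.0, kappa=np.full(7, 0.05), visible=np.ones(7, bool)))
```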

Researchers analyzed pixel errors for labeled reference points to refine the model's accuracy. The key points were defined to identify and analyze movements for future kinematic studies and to relate these movements to the sow's locomotion score, determined by the panel of observers.

The snout and neck key points were chosen to identify compensatory head movements. The team selected the neck dorsal, tail dorsal, neck, and tail key points to identify spine arching and the hocks, hooves, and metacarpals key points to determine which limb the sow had difficulty walking on. The SLEAP software facilitated the development of nineteen models with varying key point skeletons and five convolutional neural network (CNN) architectures by customizing hyperparameters and using an 85%/15% split method for training and testing.
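
As a rough illustration of the final step, the sketch below partitions labeled frame indices 85%/15% for training and testing; the frame count and seed are hypothetical, and SLEAP can also handle splitting internally through its training configuration.

```python
import random

def split_frames(frame_ids, train_frac=0.85, seed=42):
    """Shuffle labeled frame indices and split them 85%/15%
    into training and test sets."""
    rng = random.Random(seed)
    ids = list(frame_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

train, test = split_frames(range(1060))  # hypothetical labeled-frame count
print(len(train), len(test))             # -> 901 159
```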

Model Performance

A dataset of 281 lateral and 237 dorsal videos was created, separated by sow locomotion score and rated at either 75% or 100% assessment confidence. Technical issues and insufficient evaluator agreement (below the 75% confidence level) excluded 20.67% of the lateral and 29.46% of the dorsal videos. From this dataset, 106 video pairs were selected for model development. The video repository, accessible on the Animal Welfare Science Hub website, includes the evaluated videos but not those with labeled key points.

Simulations of hyperparameter changes revealed that models using the LEAP and U-Net architectures outperformed those using ResNet architectures, with the 6- and 7-keypoint skeletons performing best. Precision was assessed using OKS, mAP, dist. avg, and percentage of correct keypoints (PCK) metrics, along with pixel error analysis for the key points. The results demonstrate successful model development for identifying lateral and dorsal pose sequences in sows, with ongoing research focused on kinematic analyses that relate pose detections to locomotion scores.
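
For reference, PCK measures the fraction of predicted key points that land within a fixed pixel radius of their ground-truth positions. A minimal sketch, with a hypothetical 10-pixel threshold and simulated data:

```python
import numpy as np

def pck(gt, pred, threshold=10.0):
    """Percentage of correct keypoints: fraction of predictions within
    `threshold` pixels of ground truth. gt/pred: (N, K, 2) arrays over
    N frames and K key points."""
    dist = np.linalg.norm(gt - pred, axis=-1)   # (N, K) pixel distances
    return (dist <= threshold).mean()

# Hypothetical check: 100 frames, 7-keypoint skeleton, ~4 px error.
gt = np.random.rand(100, 7, 2) * 500
pred = gt + np.random.randn(100, 7, 2) * 4.0
print(f"PCK@10px: {pck(gt, pred):.2%}")
```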

Conclusion

In summary, the computational models demonstrated the ability to automatically identify and estimate locomotion poses in sows, achieving strong OKS, mAP, and PCK results while maintaining a low average distance between ground truth and predictions. These models are intended to serve as PLF tools and methods, helping to objectively detect early-stage lameness and enable prompt interventions, thereby enhancing animal welfare. The repository of 2D video images with various locomotion scores was instrumental in this development and remains available to support further research and educational activities related to sow locomotion.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.


