Efficient Dairy Cow Behavior Recognition: A Two-Pathway X3DFast Model

In an article recently published in the journal Scientific Reports, researchers proposed a two-pathway X3DFast model for dairy cow behavior recognition and investigated the feasibility of using the model in real-world settings.

Study: Efficient Dairy Cow Behavior Recognition: A Two-Pathway X3DFast Model. Image credit: Guitar photographer/Shutterstock

Background

In dairy cows, behavior is a key indicator of health status, as cows display distinctive behavioral changes when they develop health problems. Identifying dairy cow behavior can therefore support disease treatment and physiological health assessment and improve cow welfare, which is critical for the development of animal husbandry.

The conventional approach, which relies on extensive human observation of dairy cow behavior, has several disadvantages, including observer fatigue, high labor intensity, and high labor costs. These drawbacks have driven the development of more effective technical approaches that can identify cow behaviors quickly and accurately and raise the level of intelligence in dairy cow farming.

Automatic dairy cow behavior recognition technology can support the diagnosis of dairy cow diseases, improve the economic performance of farms, and reduce animal elimination rates. In recent years, deep learning (DL) has gained significant attention for automatically identifying dairy cow behavior.

However, in complex farming environments, dairy cow behaviors present multiscale features owing to long data collection distances and large scenes. Conventional behavior recognition models cannot precisely distinguish dairy cow behaviors whose visual features are similar.

Three-dimensional (3D) convolution-based behavior recognition methods can address the issue of small differences in visual features. However, these methods are typically trained on data with simple backgrounds, have long inference times, and contain many model parameters, making them ill suited to real-time dairy cow behavior recognition in complex breeding environments.

The proposed X3DFast model

In this study, researchers proposed X3DFast, a dairy cow behavior recognition model with a two-pathway architecture based on spatiotemporal behavior features. The efficient and lightweight architecture was designed specifically to learn dairy cow behavior features quickly and accurately from behavior video data.

The objective of the study was to develop a model that can effectively recognize the behavioral motion patterns of dairy cows in real-world agricultural environments. The researchers created dairy cow behavior video datasets with varying clip durations, camera angles, video quality, and degrees of occlusion to improve model robustness.

They focused on four common dairy cow behaviors: mounting, lying, walking, and standing. Videos of these behaviors were recorded with surveillance cameras at a dairy farm housing more than 100 cows, producing a complex-background dataset that reflected a real-world farming setting on which to build an effective behavior recognition model.

Overall, the dataset contained 774 videos of mounting behavior, 1130 of walking, 1947 of lying, and 1799 of standing. The researchers employed stratified sampling to split the data into training and validation sets so that the proposed model could learn sufficient features for each dairy cow behavior.

For each behavior class, they randomly selected 80% of the videos for the training set and used the remaining 20% as the validation set (a minimal sketch of such a split appears below). The X3DFast model was built by combining the X3D and SlowFast models, and its architecture consists of four key components: the X3D pathway, the fast pathway, the lateral connection, and the predictor.
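The article does not include the researchers' data-preparation code; the sketch below illustrates a stratified, per-class 80/20 random split of the kind described above. The function name and the video-path/label inputs are illustrative assumptions, not taken from the study.

    import random
    from collections import defaultdict

    def stratified_split(video_paths, labels, train_frac=0.8, seed=42):
        """Split videos per behavior class so each class contributes
        roughly 80% of its clips to training and 20% to validation."""
        rng = random.Random(seed)
        by_class = defaultdict(list)
        for path, label in zip(video_paths, labels):
            by_class[label].append(path)

        train, val = [], []
        for label, paths in by_class.items():
            rng.shuffle(paths)
            cut = int(len(paths) * train_frac)
            train += [(p, label) for p in paths[:cut]]
            val += [(p, label) for p in paths[cut:]]
        return train, val

    # Illustrative class sizes from the study: mounting 774, walking 1130,
    # lying 1947, standing 1799 video clips.

Splitting within each class, rather than over the pooled videos, keeps the class proportions of the validation set close to those of the training set, which matters here because the four behaviors are unevenly represented.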

The X3D pathway extracted spatial features, while the fast pathway used R(2 + 1)D convolution to decompose spatiotemporal features and pass the relevant temporal features to the X3D pathway. The two pathways were connected laterally to integrate temporal and spatial features.
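The summary names the components but not their exact configuration; the following PyTorch sketch shows one way the pieces could fit together: an X3D-style spatial pathway, a lightweight fast pathway built from R(2 + 1)D-factorized convolutions, a lateral connection that injects motion features into the spatial stream, and a shared predictor. Layer sizes, channel counts, and module names are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class R2Plus1DBlock(nn.Module):
        """R(2+1)D block: a 3D convolution factorized into a 2D spatial
        convolution followed by a 1D temporal convolution."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                     padding=(0, 1, 1), bias=False)
            self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                                      padding=(1, 0, 0), bias=False)
            self.bn = nn.BatchNorm3d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.temporal(self.spatial(x))))

    class TwoPathwaySketch(nn.Module):
        """Illustrative two-pathway skeleton: an X3D-style pathway for spatial
        features, an R(2+1)D fast pathway for motion, a lateral connection
        fusing the two, and a classifier head (the 'predictor')."""
        def __init__(self, num_classes=4):
            super().__init__()
            self.x3d_pathway = nn.Sequential(   # stand-in for the X3D backbone
                nn.Conv3d(3, 32, kernel_size=(1, 3, 3), stride=(1, 2, 2),
                          padding=(0, 1, 1), bias=False),
                nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            )
            self.fast_pathway = R2Plus1DBlock(3, 8)          # lightweight motion pathway
            self.lateral = nn.Conv3d(8, 32, kernel_size=1)   # lateral connection
            self.pool = nn.AdaptiveAvgPool3d(1)
            self.predictor = nn.Linear(32, num_classes)

        def forward(self, clip):                 # clip: (B, 3, T, H, W)
            spatial = self.x3d_pathway(clip)
            motion = self.lateral(self.fast_pathway(clip))
            # Resize the motion features so they can be added to the spatial ones.
            motion = F.interpolate(motion, size=spatial.shape[2:], mode="nearest")
            feats = self.pool(spatial + motion).flatten(1)
            return self.predictor(feats)

    clips = torch.randn(2, 3, 16, 112, 112)      # two 16-frame RGB clips
    logits = TwoPathwaySketch()(clips)           # -> (2, 4) behavior scores

The design point carried over from SlowFast-style models is the lateral connection: motion evidence from the fast pathway is merged into the spatial pathway before classification, so the predictor sees both cues.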

Moreover, X3DFast was further refined to improve recognition accuracy, accounting for the different durations over which behaviors are expressed, the visual similarity between foreground and background, and the varying feature densities in dairy cow behavior data.

Experimental evaluation and findings

All experiments were performed on six 16 GB Tesla P100 GPUs with CUDA 10.1 and PyTorch 1.1.7. The researchers first trained the proposed model on the dairy cow behavior video data and then compared its recognition performance with that of existing classic models, including the single-pathway models I3D, C3D, TSM, TSN, SlowOnly, and X3D, and the two-pathway model SlowFast.
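The article does not state which implementations of the baseline networks were used. As one practical way to obtain comparable baselines, the PyTorchVideo model zoo exposes X3D and SlowFast through torch.hub; whether the study relied on it is an assumption, and the 4-class head replacement below is only illustrative.

    import torch

    # Kinetics-pretrained baselines from the PyTorchVideo model zoo.
    x3d = torch.hub.load("facebookresearch/pytorchvideo", "x3d_m", pretrained=True)
    slowfast = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True)

    # Swap the Kinetics classification head for a 4-class head (walking,
    # standing, lying, mounting) before fine-tuning; the attribute path
    # assumes PyTorchVideo's standard head layout.
    x3d.blocks[-1].proj = torch.nn.Linear(x3d.blocks[-1].proj.in_features, 4)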

The environment-adaptable, multi-angle X3DFast model attained an accuracy of over 97% for each individual dairy cow behavior and a top-1 accuracy of 98.49% across the four investigated behaviors (walking, standing, lying, and mounting), which was higher than that of all the single-pathway models and of the two-pathway SlowFast model.
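Top-1 accuracy here is simply the fraction of validation clips whose highest-scoring predicted class matches the labeled behavior. A minimal way to compute it (the 98.49% figure comes from the study; the code itself is only a sketch):

    import torch

    def top1_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
        """Fraction of clips whose top-scoring class equals the true label."""
        return (logits.argmax(dim=1) == labels).float().mean().item()

    def per_class_accuracy(logits, labels, class_id):
        """Top-1 accuracy restricted to one behavior class."""
        mask = labels == class_id
        return top1_accuracy(logits[mask], labels[mask]) if mask.any() else float("nan")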

Moreover, the findings demonstrated that the X3DFast dairy cow behavior recognition model also offered advantages in inference speed and model size and could effectively identify cow behaviors across different viewing angles and lighting conditions.


Written by

Samudrapom Dam

Samudrapom Dam is a freelance scientific and business writer based in Kolkata, India. He has been writing articles related to business and scientific topics for more than one and a half years. He has extensive experience in writing about advanced technologies, information technology, machinery, metals and metal products, clean technologies, finance and banking, automotive, household products, and the aerospace industry. He is passionate about the latest developments in advanced technologies, the ways these developments can be implemented in a real-world situation, and how these developments can positively impact common people.

