AI Camera Traps With Continual Learning Boost Real-Time Wildlife Monitoring Accuracy

Revolutionizing conservation efforts: How low-cost AI camera traps with continual learning are transforming real-time wildlife monitoring by adapting to diverse environments and improving species detection accuracy in the field.

Study: Reliable and efficient integration of AI into camera traps for smart wildlife monitoring based on continual learning. Image Credit: Krasula / Shutterstock

In an article published in the journal Ecological Informatics, researchers presented an efficient approach to integrating artificial intelligence (AI) processing pipelines in camera traps for smart on-site wildlife monitoring.

They detailed the development of two low-cost smart camera trap prototypes, the creation of two publicly available datasets, and the use of advanced AI techniques such as continual learning and explainable AI. The approach improved on-site inference accuracy by 10% compared with off-site methods such as MegaDetector, and the paper offered practical insights on system implementation and training.

At the same time, the study highlighted the importance of on-device continual learning for adapting AI models to varying environmental conditions, which is critical to ensuring reliable performance in the field.

Background

Camera traps have long been used to monitor ecosystems, capturing species diversity, abundance, and behavior data. Traditional camera traps, however, generate a massive volume of visual data requiring off-site analysis, which can be resource-intensive.

Recent advancements in AI have led to the development of desktop tools to automate this analysis, but these still rely on post-capture data processing. Previous work focused on tasks like species identification and filtering blank images but lacked real-time, on-site processing capabilities.

The shift toward on-site analytics offers several advantages, including reduced data storage needs and quicker response times, such as automatically releasing non-target species from traps. Previous research has explored AI models and hardware for wildlife monitoring, but these systems often demand significant computational resources, which limits their practical application in remote environments.

This paper filled a crucial gap by presenting two low-cost prototypes of AI-enabled camera traps that provide real-time, on-site inference without sacrificing battery life or form factor. Additionally, the study introduced continual learning as a key method for improving model performance over time by allowing the system to adapt to new data collected in the field. The research also offered publicly available datasets and achieved higher accuracy than current off-site models through advanced AI techniques like transfer learning and continual learning, making it a significant advancement in wildlife monitoring technology.

Methodology and Preliminary Analysis

The proposed methodology for developing a smart camera trap involved four key steps. First, data collection and labeling were essential for constructing a balanced dataset, which included images from public sources and those captured in the field. Manually labeling these images into categories was time-intensive but critical for training a reliable deep neural network (DNN).
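The paper's data pipeline is not reproduced here, but a minimal sketch of loading such a labeled dataset with class balancing, assuming images sorted into per-class folders (a hypothetical layout; the study's datasets may be organized differently), might look as follows in Python with torchvision:

```python
# Minimal sketch: loading a labeled camera-trap dataset with class balancing.
# Assumes images are sorted into per-class folders (e.g. data/train/boar/...),
# a hypothetical layout, not necessarily the paper's own organization.
from collections import Counter

from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # SqueezeNet's expected input size
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=transform)

# Weight each sample inversely to its class frequency so rare species
# are drawn as often as common ones during training.
counts = Counter(train_set.targets)
weights = [1.0 / counts[label] for label in train_set.targets]
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)

loader = DataLoader(train_set, batch_size=32, sampler=sampler)
```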

Second, network training involved using pre-trained architectures like SqueezeNet to avoid starting from scratch. Enhancements like data augmentation and architectural adjustments improved model accuracy.
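The article does not state which training framework was used; as an illustration, the transfer-learning pattern it describes, reusing pre-trained SqueezeNet weights, replacing the classification head, and adding augmentation, could be sketched in PyTorch as follows:

```python
# Transfer-learning sketch with a pre-trained SqueezeNet (PyTorch shown here;
# the paper does not state which framework was used for training).
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 5  # hypothetical number of target categories

# Data augmentation applied only to training images.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3),  # simulate varying illumination
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)

# SqueezeNet classifies with a final 1x1 convolution rather than a linear
# layer, so adapting it means swapping that conv for one with NUM_CLASSES filters.
model.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
model.num_classes = NUM_CLASSES

# Fine-tune only the new head at first; the pre-trained features stay frozen.
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```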

Third, system integration focused on fitting AI pipelines within the constraints of edge devices. This step required hardware-efficient software and possibly low-level coding techniques like multi-threading to optimize performance. Selecting additional hardware components was also part of this stage.
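As a rough illustration of the multi-threading the authors mention, a producer-consumer pattern keeps frame capture from being blocked by inference; the camera and model classes below are stand-in placeholders, not the paper's code:

```python
# Producer-consumer sketch: one thread captures frames while another runs
# inference, so the camera is never blocked by the neural network.
import queue
import threading
import time

class StubCamera:
    """Placeholder for the real camera driver (picamera2/OpenCV on the Pi)."""
    def capture_frame(self):
        time.sleep(0.1)
        return b"frame-bytes"

class StubModel:
    """Placeholder for the TensorFlow Lite classifier."""
    def classify(self, frame):
        return "wild_boar", 0.91

frames = queue.Queue(maxsize=8)  # bounded so a slow model can't exhaust RAM

def capture_loop(camera):
    # Producer: grab frames as fast as the camera allows; drop frames when
    # the queue is full rather than stalling the camera thread.
    while True:
        frame = camera.capture_frame()
        try:
            frames.put(frame, timeout=1)
        except queue.Full:
            pass

def inference_loop(model):
    # Consumer: classify frames at whatever rate the model can sustain.
    while True:
        frame = frames.get()
        label, score = model.classify(frame)
        if score > 0.8:
            print(f"Detected {label} ({score:.2f})")

threading.Thread(target=capture_loop, args=(StubCamera(),), daemon=True).start()
threading.Thread(target=inference_loop, args=(StubModel(),), daemon=True).start()
time.sleep(1)  # let the threads run briefly in this demo
```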

Lastly, comprehensive field testing was necessary for validating the system under real-world conditions. The researchers noted that initial model performance often deviated significantly from validation results due to environmental factors such as varying illumination and vegetation. Testing in diverse environments allowed for iterative refinement until specifications were met.

In their preliminary analysis, the researchers evaluated AI-based off-site processing using SqueezeNet, which was trained on a custom dataset from Sierra de Aracena Natural Park.

Tasks included blank image filtering, species identification, and concurrent filtering and identification. The SqueezeNet model, trained on both the custom dataset and the publicly available Snapshot Serengeti dataset, demonstrated strong generalization capabilities with reasonable accuracy, particularly in blank filtering.

System Design and Implementation

The system consisted of two AI-powered camera traps built on Raspberry Pi platforms. The first prototype used a Raspberry Pi 4B with a camera module and a motion-detecting passive infrared (PIR) sensor, powered by a five-volt (V) battery, at a cost of about 100 United States dollars (USD). The second, still in development, used the smaller, more energy-efficient Raspberry Pi Zero 2 W with a no-infrared-filter (NoIR) camera for nocturnal use.

The software stack included the Raspberry Pi operating system (OS), OpenCV for image processing, and TensorFlow Lite for AI detection. Upon motion detection, the system captured frames, classified the images, and sent alerts via email or Message Queuing Telemetry Transport (MQTT).
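A minimal sketch of this trigger-capture-classify-alert loop, assuming gpiozero for the PIR sensor, tflite_runtime for inference, and paho-mqtt for alerts (model path, label set, broker address, and topic are all hypothetical), could look like:

```python
# Sketch of the trigger-capture-classify-alert flow described above, assuming
# gpiozero (PIR on GPIO pin 4), OpenCV, tflite_runtime, and paho-mqtt.
# Model path, label set, broker address, and topic are all hypothetical.
import cv2
import numpy as np
import paho.mqtt.client as mqtt
from gpiozero import MotionSensor
from tflite_runtime.interpreter import Interpreter

LABELS = ["blank", "wild_boar", "red_deer", "red_fox"]  # hypothetical classes

interpreter = Interpreter(model_path="squeezenet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

client = mqtt.Client()
client.connect("broker.local")  # hypothetical MQTT broker on the local network

pir = MotionSensor(4)           # PIR sensor wired to GPIO 4
camera = cv2.VideoCapture(0)

while True:
    pir.wait_for_motion()       # block until the PIR sensor fires
    ok, frame = camera.read()
    if not ok:
        continue
    # Convert BGR->RGB and resize/scale to the model's input (assumed float32).
    h, w = inp["shape"][1], inp["shape"][2]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    x = cv2.resize(rgb, (w, h)).astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], x[np.newaxis, ...])
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    label = LABELS[int(np.argmax(probs))]
    if label != "blank":
        client.publish("cameratrap/alerts", label)  # fire the alert
```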

Tests showed the Raspberry Pi Zero 2 W was more energy-efficient, drawing 0.65 W when idle and 2.65 W during inference. The system processed images in under two seconds, ensuring quick responses and cost-effective performance.

Adapting AI Models for Field Use

Initial model performance in real-world settings often deviated from validation results due to environmental differences. To address these issues, the researchers implemented on-device continual learning, which allowed the AI models to adapt to new data specific to the camera's location. Field tests of a smart camera trap, equipped with a SqueezeNet model trained on Dataset I, were conducted in the Sierra de Aracena Natural Park. This location, not previously covered by commercial camera traps, provided untrained scenarios that revealed limitations of the model, including high false positives and difficulties in detecting distant or partially occluded animals.

When applied to images from the field (Dataset II), the model showed reduced performance with low precision and accuracy due to environmental challenges such as varying illumination and vegetation. The researchers used explainable AI techniques like gradient-weighted class activation mapping (Grad-CAM) to identify the causes of false positives, such as the AI mistakenly interpreting vegetation as animals and false negatives due to difficult-to-detect animal positions.
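Grad-CAM itself is straightforward to reproduce; the sketch below (PyTorch, with a pre-trained SqueezeNet standing in for the study's model) shows how activations and gradients of the last convolutional block combine into a heatmap of the regions that drove a prediction:

```python
# Minimal Grad-CAM sketch (PyTorch): highlight which image regions drove a
# prediction. Hooks capture activations and gradients of the last conv block.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT).eval()
target_layer = model.features[-1]  # last feature block before the classifier

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.rand(1, 3, 224, 224)          # stand-in for a camera-trap image
scores = model(x)
scores[0, scores.argmax()].backward()   # gradient of the top-scoring class

# Channel weights = global-average-pooled gradients; the CAM is the weighted
# sum of activations, rectified and upsampled to the input resolution.
w = grads["v"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```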

To address these issues, the researchers proposed and tested a method that included on-device training and continual learning. On-device learning involved training the model with location-specific data, while continual learning updated the model with new data to adapt to changing conditions.
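The article does not specify the exact continual-learning algorithm; one common scheme is replay-based fine-tuning, sketched below, in which a small buffer of earlier samples is mixed into each update to limit catastrophic forgetting:

```python
# Replay-based continual-learning sketch: fine-tune on new field images mixed
# with a small buffer of earlier samples to limit catastrophic forgetting.
# The paper's exact update scheme may differ; this illustrates the idea.
import random

import torch
import torch.nn as nn

def continual_update(model, new_batches, replay_buffer, lr=1e-4, epochs=1):
    """Fine-tune `model` on new field data plus replayed older samples."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in new_batches:
            # Remember a few of the new samples for future replay.
            if len(replay_buffer) < 50:
                replay_buffer.append((images[:4].clone(), labels[:4].clone()))
            # Mix in a replayed batch of older data, if any is stored.
            if replay_buffer:
                old_images, old_labels = random.choice(replay_buffer)
                images = torch.cat([images, old_images])
                labels = torch.cat([labels, old_labels])
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```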

Preliminary tests with synthetic datasets and continual learning improved precision and recall, but challenges like data drift remained. Hyper-parameter tuning was performed to optimize training efficiency and performance on embedded devices.
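As an illustration of such tuning, a simple grid search can weigh validation accuracy against per-update training time on the device; train_and_eval below is a hypothetical helper, not part of the study's code:

```python
# Simple grid-search sketch for on-device training hyper-parameters.
# train_and_eval() is a hypothetical helper that trains once with the given
# settings and returns validation accuracy; wall-clock time is measured here.
import itertools
import time

def tune(train_and_eval):
    best = None
    for lr, batch_size in itertools.product([1e-3, 1e-4], [8, 16, 32]):
        t0 = time.monotonic()
        accuracy = train_and_eval(lr=lr, batch_size=batch_size)
        elapsed = time.monotonic() - t0
        # Favor accuracy, but penalize configurations too slow for the field.
        score = accuracy - 0.01 * elapsed
        if best is None or score > best[0]:
            best = (score, lr, batch_size)
    return best
```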

Conclusion

The researchers introduced an innovative approach to integrating AI with camera traps for real-time wildlife monitoring. Incorporating continual learning into the system was a key advancement, allowing the AI to adapt to new environmental conditions and enhancing long-term performance in the field. The low-cost prototypes and advanced AI techniques improved on-site accuracy, offering a promising solution to the challenges of field use.

The research opened opportunities for future developments in continual learning, species classification, and privacy-focused designs, providing a foundation for more adaptive and efficient conservation technologies.

Journal reference:
  • Velasco-Montero, D., Fernández-Berni, J., Carmona-Galán, R., Sanglas, A., & Palomares, F. (2024). Reliable and efficient integration of AI into camera traps for smart wildlife monitoring based on continual learning. Ecological Informatics, 102815. DOI: 10.1016/j.ecoinf.2024.102815, https://www.sciencedirect.com/science/article/pii/S1574954124003571

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

