In an article recently published in the journal Remote Sensing in Ecology and Conservation, researchers investigated the feasibility of using the MegaDetector open-source object detection model (ODM) for automated cross-regional wildlife and visitor monitoring with camera traps.
Background
Human activity in natural areas is continuously increasing, significantly affecting wildlife behavior worldwide. Wildlife is shifting its temporal activity, habitat use, and movement patterns to adapt to ever-rising human presence. Understanding wildlife-human interactions on broad temporal and spatial scales, and incorporating this information into wildlife conservation and management, has therefore become crucial.
Big data-based approaches, such as machine learning (ML) and camera trapping, can substantially improve the outcomes of studies on the spatiotemporal interactions of wildlife and human activities, as wildlife-human interactions are complex and not easily generalizable.
Camera traps are a highly effective method in wildlife ecology for generating spatiotemporal data on multiple species, as they are cost-efficient and non-invasive. Recently, the approach has become even more effective for wildlife conservation, management, and research due to rapid improvements in the ML models used to classify wildlife in image data.
Computer vision algorithms can attain high accuracy in classifying animal species and vastly outperform manual classification in terms of processing time. However, this high accuracy is limited to animal species that were labeled and included during training of the ML model; the algorithms perform poorly when classifying species not represented in the training data. Camera trap sites not seen during training pose another significant challenge.
Importance of ODMs
ODMs can be used to address these species- and site-dependence issues. Unlike commonly used image classifiers, which assign a class to the entire image, ODMs locate individual objects within an image and classify them into basic categories.
Open-source ODMs, such as MegaDetector, are trained using millions of images generated globally to identify basic object classes, such as animals, vehicles, and persons. These ODMs have also been utilized in several wildlife conservation programs worldwide.
MegaDetector can detect humans with very high precision and animals with high accuracy, which substantially increases image processing speed. Using such automated ODMs can therefore significantly reduce the financial and time investment currently required for human-wildlife interaction assessments with camera traps and facilitate compliance with existing data protection regulations.
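To illustrate how detections are turned into image-level classes in practice, the sketch below assumes that MegaDetector's batch output follows its commonly documented JSON layout, in which detection categories "1", "2", and "3" correspond to animal, person, and vehicle. The file name and confidence threshold are illustrative and not taken from the study.

```python
import json
from collections import Counter

# Hypothetical results file from a MegaDetector batch run. The JSON layout
# assumed below follows the commonly documented MegaDetector batch-output
# format; the file name and confidence threshold are illustrative.
RESULTS_FILE = "megadetector_output.json"
CONF_THRESHOLD = 0.8

CATEGORY_NAMES = {"1": "animal", "2": "person", "3": "vehicle"}


def classify_image(detections, threshold=CONF_THRESHOLD):
    """Derive an image-level class from MegaDetector detections.

    Images with no detection above the threshold are labelled 'empty';
    otherwise the class of the highest-confidence detection is used.
    """
    confident = [d for d in detections if d["conf"] >= threshold]
    if not confident:
        return "empty"
    best = max(confident, key=lambda d: d["conf"])
    return CATEGORY_NAMES.get(best["category"], "unknown")


with open(RESULTS_FILE) as handle:
    results = json.load(handle)

image_classes = {
    image["file"]: classify_image(image.get("detections", []))
    for image in results["images"]
}
print(Counter(image_classes.values()))
```

The confidence threshold is the main tuning knob in such a workflow: raising it reduces false detections at the cost of labelling more images as empty.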
However, developing a broadly applicable approach that improves the processing of camera trap image data on wildlife and human activities by combining open-source ODMs with camera trap data requires a thorough evaluation of methodological bottlenecks and limitations.
Investigating general patterns in wildlife-human interactions requires cross-regional, large-scale, and long-term study designs, which necessitate several human classifiers and different camera trap models. Different trail and site conditions, recreational activities, and seasons, coupled with variations in the number of staff involved in camera trap maintenance and research, further increase the underlying data complexity and can introduce biases into the detection models. Moreover, the error sources that reduce automated detection accuracy have not yet been systematically identified. Identifying these error sources can inform site-specific workflows that improve the accuracy of ODM-generated datasets.
MegaDetector evaluation
In this study, researchers evaluated the performance of the open-source ODM MegaDetector in cross-regional wildlife and visitor monitoring using camera traps. The objective of the study was to assess a methodological approach for automated wildlife and visitor monitoring in several recreational areas using automated object detection and camera traps.
The vehicle, human, and animal detection performance of the MegaDetector ODM was evaluated on 352,426 images obtained from 159 on- and off-trail camera traps of different models, including Cuddeback C2, Cuddeback G, and Reconyx HyperFire 2, in three study regions in Bavaria, Germany. The images were manually classified by wildlife ecologists and students; 229,100 images contained vehicles and humans, and 114,937 images contained animals.
Specifically, the performance of the MegaDetector ODM in classifying the camera trap images as empty, or as containing vehicles, animals, or persons, was compared with the manual classification of the same images.
The researchers used an ODM as an image classifier to account for the varying backgrounds caused by differing environmental conditions and camera trap sites. They also investigated structural misclassification patterns and assessed whether the detection model results were suitable for the temporal analyses commonly performed in ecological research.
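A minimal sketch of such a comparison is given below. The paired image labels are invented placeholders, and the per-class agreement computed here is only one simple way to summarize how often the automated classification matches the manual one; it is not necessarily the exact metric used in the study.

```python
from collections import defaultdict

# Hypothetical image-level labels: manual ground truth vs. automated
# (ODM-derived) classes, keyed by image file name. Values are placeholders.
manual = {"img_0001.jpg": "animal", "img_0002.jpg": "person", "img_0003.jpg": "empty"}
automated = {"img_0001.jpg": "animal", "img_0002.jpg": "empty", "img_0003.jpg": "empty"}

# Confusion table: confusion[manual_class][automated_class] -> image count.
confusion = defaultdict(lambda: defaultdict(int))
for image, true_class in manual.items():
    confusion[true_class][automated.get(image, "empty")] += 1

# Per-class agreement: share of manually labelled images of each class that
# the automated classification assigned to the same class.
for true_class, row in confusion.items():
    total = sum(row.values())
    print(f"{true_class}: {row[true_class] / total:.1%} of {total} images")
```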
Significance of the study
The MegaDetector ODM achieved high accuracies of 96.0%, 93.8%, and 99.3% in detecting animals, persons, and vehicles, respectively. Additionally, systematic misclassification patterns were identified and could be eliminated automatically. The detection model could also be readily employed to count animals and people in images, as MegaDetector only slightly underestimated the object counts for persons, vehicles, and animals, by −0.05, −0.01, and −0.01 counts per image, respectively.
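A count bias of this kind could be estimated along the following lines. The per-image counts are invented placeholders; the mean difference (automated minus manual) is simply one straightforward way to express under- or overestimation per image.

```python
from statistics import mean

# Hypothetical per-image object counts (e.g. number of persons) obtained from
# manual review and from MegaDetector detections above a confidence threshold.
manual_counts = {"img_0001.jpg": 2, "img_0002.jpg": 1, "img_0003.jpg": 0}
automated_counts = {"img_0001.jpg": 2, "img_0002.jpg": 0, "img_0003.jpg": 0}

# Mean count difference per image (automated minus manual); negative values
# indicate underestimation, matching the small negative biases reported above.
bias = mean(automated_counts[img] - manual_counts[img] for img in manual_counts)
print(f"Mean count difference per image: {bias:+.2f}")
```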
Moreover, the temporal patterns in a long-term time series of manually classified wildlife and human activities were highly correlated with the detection model's classification results. The diurnal kernel densities of activity were nearly identical for automated and manual classification.
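As an illustration of how such diurnal activity densities can be compared, the sketch below estimates and overlaps two kernel densities of detection times. The detection times are simulated placeholders, and a plain Gaussian kernel is used for simplicity, whereas activity studies typically rely on circular kernels that respect the 24-hour cycle.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simulated detection times (hour of day) for one activity type, once from
# manual classification and once from automated detections. Values are
# placeholders, not data from the study.
rng = np.random.default_rng(0)
manual_hours = rng.normal(loc=7, scale=2, size=500) % 24
automated_hours = rng.normal(loc=7, scale=2, size=480) % 24

# Simple (non-circular) kernel density estimates of diurnal activity.
grid = np.linspace(0, 24, 241)
manual_density = gaussian_kde(manual_hours)(grid)
automated_density = gaussian_kde(automated_hours)(grid)

# Overlap coefficient between the two densities (1 = identical patterns),
# approximated by summing the pointwise minimum over the grid.
step = grid[1] - grid[0]
overlap = np.minimum(manual_density, automated_density).sum() * step
print(f"Activity overlap (manual vs. automated): {overlap:.2f}")
```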
To summarize, the findings of this study demonstrate the feasibility of using the MegaDetector ODM in the image classification workflow of cross-regional camera trap studies without manual intervention. The MegaDetector ODM significantly accelerated processing speed and proved effective for long-term monitoring of wildlife and human activities while complying with privacy regulations.
Journal reference:
- Mitterwallner, V., Peters, A., Edelhoff, H., Mathes, G., Nguyen, H., Peters, W., Heurich, M., & Steinbauer, M. J. (2023). Automated visitor and wildlife monitoring with camera traps and machine learning. Remote Sensing in Ecology and Conservation. https://doi.org/10.1002/rse2.367