Abstract
Passive acoustic monitoring is a promising tool for monitoring at-risk populations of vocal species, yet extracting relevant information from large acoustic datasets can be time-consuming, creating a bottleneck at the point of analysis. We adapted an open-source framework for deep learning in bioacoustics to automatically detect Bornean white-bearded gibbon (Hylobates albibarbis) “great call” vocalisations in a long-term acoustic dataset from a rainforest location in Borneo. We describe the steps involved in developing this solution, including collecting audio recordings, developing training and testing datasets, training neural-network models, and evaluating model performance. Our best model performed at a satisfactory level (F score = 0.87), identifying 98% of the highest-quality calls from 90 hours of manually annotated audio recordings. We found no significant difference between the manual annotations and the model’s output in the distribution of great call detections over time, and the model greatly reduced analysis time compared to a human observer. Future work should apply our model to long-term acoustic datasets to understand spatiotemporal variation in H. albibarbis’ calling activity. With additional information, such as detection probability over distance, we demonstrate how our model could be used to monitor gibbon population density and spatial distribution on an unprecedented scale.
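For context, the F score reported above is the standard harmonic mean of precision and recall computed from detection outcomes. The sketch below illustrates the calculation; the counts used are hypothetical placeholders, not values from the study.

```python
# Minimal sketch: F score from detection counts (true/false positives, false negatives).
# The example counts are illustrative only and do not come from the abstract.

def f_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall (F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 870 correct detections, 130 false alarms, 130 missed calls.
print(round(f_score(tp=870, fp=130, fn=130), 2))  # -> 0.87
```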
SDGs:
1. SDG 4: Quality Education
2. SDG 9: Industry, Innovation, and Infrastructure
3. SDG 13: Climate Action
4. SDG 15: Life on Land
5. SDG 17: Partnerships for the Goals