Simple item record

dc.contributor.author: Mulero-Pázmány, Margarita Cristina
dc.contributor.author: Hurtado, Sandro
dc.contributor.author: Cardas Ezeiza, Cristian
dc.contributor.author: Antequera-Gómez, María Luisa
dc.contributor.author: Barba-González, Cristóbal
dc.contributor.author: Romero-Pacheco, David
dc.contributor.author: Díaz-Ruiz, Francisco
dc.contributor.author: Navas-Delgado, Ismael
dc.contributor.author: Real-Giménez, Raimundo
dc.date.accessioned: 2023-12-22T10:25:24Z
dc.date.available: 2023-12-22T10:25:24Z
dc.date.created: 2023
dc.date.issued: 2023
dc.identifier.uri: https://hdl.handle.net/10630/28475
dc.description.abstract: Camera traps have gained high popularity for collecting animal images in a cost-effective and non-invasive manner, but manually examining these large volumes of images to extract valuable data is a laborious and costly process. Deep learning, specifically object detection techniques, constitutes a powerful tool for automating this task. Here, we describe the development and results of a deep-learning workflow based on MegaDetector and YOLOv5 for automatically detecting animals in camera trap images. For the development, we first used MegaDetector, which automatically generated bounding boxes for 93.2% of the images in the training set, differentiating animals, humans, vehicles, and empty photos. This annotation phase allowed us to discard useless images. Then, we used the images containing animals within the training dataset to train four YOLOv5 models, each one built for a group of species of similar appearance as defined by a human expert. Using four expert models instead of one reduces the complexity and variance between species, allowing for more precise learning within each group. The final result is a workflow where the end-user enters the camera trap images into a global model. This global model then redirects each image to the appropriate expert model. Finally, the classification of each animal into a particular species is based on the confidence scores provided by a weighted voting system implemented among the expert models. We validated this workflow using a dataset of 120,000 images collected by 100 camera traps over five years in Andalusian National Parks (Spain), representing 24 mammal species. Our workflow approach improved the global classification F1-score from 0.92 to 0.96. It increased the precision for distinguishing similar species, for example from 0.41 to 0.96 for C. capreolus and from 0.24 to 0.73 for D. dama, which is often confused with other ungulate species. This demonstrates its potential for animal detection in images. [es_ES]
dc.description.sponsorship: Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. [es_ES]
dc.language.iso: eng [es_ES]
dc.subject: Inteligencia artificial - Congresos [es_ES]
dc.subject.other: Automatically detecting animals [es_ES]
dc.subject.other: Mammals [es_ES]
dc.subject.other: Deep learning [es_ES]
dc.title: Artificial intelligence for automatically detecting animals in camera trap images: a combination of MegaDetector and YOLOv5 [es_ES]
dc.type: conference output [es_ES]
dc.centro: Facultad de Ciencias [es_ES]
dc.relation.eventtitle: XIV Congreso Internacional SECEM [es_ES]
dc.relation.eventplace: Granollers, Barcelona, España [es_ES]
dc.relation.eventdate: 6/12/2023 [es_ES]
dc.departamento: Biología Animal
dc.rights.accessRights: open access [es_ES]
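
The abstract above describes a workflow in which a global model routes each camera trap image to one of four expert YOLOv5 models, and the final species label comes from a weighted voting system over the experts' confidence scores. The following minimal Python sketch illustrates only that voting step, under stated assumptions: the names (ExpertPrediction, EXPERT_WEIGHTS, weighted_vote), the group labels, and the weighting scheme are hypothetical illustrations, not the authors' published implementation.

# Minimal sketch of a confidence-weighted voting step (assumed design,
# not the authors' code). Each expert model contributes predictions with
# a confidence score; predictions are combined per candidate species.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ExpertPrediction:
    expert: str        # which expert model produced the prediction (hypothetical group name)
    species: str       # predicted species label
    confidence: float  # model confidence in [0, 1]

# Hypothetical per-expert reliability weights (assumption for illustration).
EXPERT_WEIGHTS = {"ungulates": 1.0, "carnivores": 0.9,
                  "lagomorphs": 0.8, "small_mammals": 0.7}

def weighted_vote(predictions):
    """Sum confidence * expert weight per candidate species and return
    the species with the highest total score."""
    scores = defaultdict(float)
    for p in predictions:
        scores[p.species] += p.confidence * EXPERT_WEIGHTS.get(p.expert, 1.0)
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Example: experts disagree on an ungulate image.
    preds = [
        ExpertPrediction("ungulates", "Capreolus capreolus", 0.82),
        ExpertPrediction("ungulates", "Dama dama", 0.11),
        ExpertPrediction("carnivores", "Vulpes vulpes", 0.20),
    ]
    print(weighted_vote(preds))  # -> Capreolus capreolus

In this sketch each candidate species accumulates confidence multiplied by the weight of the expert that proposed it, and the highest-scoring species wins; the actual weights, species groups, and tie-breaking rules used by the authors are not given in this record.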

