Show simple item record

dc.contributor.author: García Aguilar, Iván
dc.contributor.author: García-González, Jorge
dc.contributor.author: Luque-Baena, Rafael Marcos
dc.contributor.author: López-Rubio, Ezequiel
dc.date.accessioned: 2023-04-19T11:54:00Z
dc.date.available: 2023-04-19T11:54:00Z
dc.date.issued: 2023
dc.identifier.citation: García-Aguilar, I., García-González, J., Luque-Baena, R. M., & López-Rubio, E. (2023). Automated labeling of training data for improved object detection in traffic videos by fine-tuned deep convolutional neural networks. Pattern Recognition Letters, 167, 45–52. https://doi.org/10.1016/j.patrec.2023.01.015
dc.identifier.uri: https://hdl.handle.net/10630/26302
dc.description.abstract: The exponential increase in the use of technology in road management systems has led to real-time visual information at thousands of locations on road networks. A previous step in preventing or detecting accidents involves identifying vehicles on the road. The application of convolutional neural networks to object detection has significantly improved this field, enhancing classical computer vision techniques. However, there are deficiencies due to the low detection rate provided by the available pre-trained models, especially for small objects. The main drawback is that they require manual labeling of the vehicles that appear in the images from each IP camera located on the road network in order to retrain the model. This task is not feasible if we have thousands of cameras distributed across the extensive road network of each nation or state. Our proposal presents a new automatic procedure for detecting small-scale objects in traffic sequences. In the first stage, vehicle patterns detected from a set of frames are generated automatically through an offline process, using super-resolution techniques and pre-trained object detection networks. Subsequently, the object detection model is retrained with the previously obtained data, adapting it to the analyzed scene. Finally, already online and in real time, the retrained model is used on the rest of the traffic sequence or the video stream generated by the camera. This framework has been successfully tested on the NGSIM and GRAM datasets.
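
As a rough illustration of the three-stage pipeline summarized in the abstract, the sketch below mimics the offline pseudo-labeling and fine-tuning stages. It is not the authors' implementation: torchvision's COCO-pretrained Faster R-CNN, bicubic upscaling (standing in for the super-resolution model), the score threshold, and all function and variable names are assumptions, and a recent torchvision (>= 0.13) is assumed for the weights argument.

# Hedged sketch of the pipeline described in the abstract, not the paper's code.
import torch
import torch.nn.functional as F
import torchvision

VEHICLE_CLASSES = {3, 6, 8}   # COCO ids for car, bus, truck (assumed class set)
SCALE = 2.0                   # upscaling factor; placeholder for super-resolution
SCORE_THRESHOLD = 0.7         # assumed confidence cut-off for pseudo-labels

def generate_pseudo_labels(model, frames):
    """Offline stage: detect vehicles on upscaled frames, keep confident
    detections, and map the boxes back to the original resolution."""
    model.eval()
    targets = []
    with torch.no_grad():
        for frame in frames:                         # frame: (3, H, W) in [0, 1]
            upscaled = F.interpolate(frame.unsqueeze(0), scale_factor=SCALE,
                                     mode="bicubic", align_corners=False)[0]
            pred = model([upscaled])[0]
            keep = (pred["scores"] > SCORE_THRESHOLD) & \
                   torch.isin(pred["labels"], torch.tensor(sorted(VEHICLE_CLASSES)))
            targets.append({"boxes": pred["boxes"][keep] / SCALE,
                            "labels": pred["labels"][keep]})
    return targets

def fine_tune(model, frames, targets, epochs=3, lr=1e-4):
    """Retrain the detector on the automatically labeled frames."""
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for frame, target in zip(frames, targets):
            if target["boxes"].numel() == 0:         # skip frames without labels
                continue
            losses = model([frame], [target])        # dict of detection losses
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

if __name__ == "__main__":
    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    frames = [torch.rand(3, 240, 352) for _ in range(4)]   # dummy traffic frames
    pseudo_labels = generate_pseudo_labels(detector, frames)
    fine_tune(detector, frames, pseudo_labels)
    # Online stage: the fine-tuned detector is then applied to the rest of the
    # sequence or video stream in real time, without the upscaling step.
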
dc.description.sponsorship: Funding for open access charge: Universidad de Málaga/CBUA
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Redes de neuronas (Informática)
dc.subject.other: Object detection
dc.subject.other: Small scale
dc.subject.other: Super-resolution
dc.subject.other: Convolutional neural networks
dc.title: Automated labeling of training data for improved object detection in traffic videos by fine-tuned deep convolutional neural networks
dc.type: info:eu-repo/semantics/article
dc.centro: E.T.S.I. Informática
dc.identifier.doi: https://doi.org/10.1016/j.patrec.2023.01.015
dc.rights.cc: Atribución 4.0 Internacional
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion


Files in this item

This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International