Show simple item record

dc.contributor.author: Martin, Francisco
dc.contributor.author: Gonzalez, Fernando
dc.contributor.author: Guerrero, Jose Miguel
dc.contributor.author: Fernandez-Carmona, Manuel
dc.contributor.author: Gines, Jonatan
dc.date.accessioned: 2024-10-02T08:25:30Z
dc.date.available: 2024-10-02T08:25:30Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/10630/34174
dc.description.abstract: The perception and identification of visual stimuli from the environment is a fundamental capability of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment an object's space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary-pixel problem that appears when a direct mapping from segmented pixels to their correspondences in the point cloud is used. We validate our approach against baseline approaches on real images taken by a 3D camera, showing that our method outperforms them in accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for a mobile robot's indoor environments.
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: info:eu-repo/semantics/closedAccess
dc.subject: Applied sciences
dc.subject: Autonomous robots
dc.subject.other: Image segmentation
dc.subject.other: Deep learning
dc.subject.other: 3D semantic mapping
dc.title: Semantic 3D mapping from deep image segmentation
dc.type: info:eu-repo/semantics/article
dc.centro: E.T.S.I. Telecomunicación
dc.identifier.doi: 10.3390/app11041953
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion
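The abstract refers to the direct mapping from segmented pixels to their corresponding 3D points, and to the boundary-pixel problem that arises when mask pixels near an object's edge pick up background depth. The sketch below is only a minimal illustration of that baseline mapping with a naive mask erosion as a workaround; it is not the paper's algorithm, and the function name, intrinsics parameters, and erosion step are assumptions for the example.

```python
import numpy as np

def mask_to_points(depth, mask, fx, fy, cx, cy, erode=1):
    """Back-project segmented pixels of a depth image to 3D points.

    Illustrative only (not the paper's method): boundary pixels of a
    segmentation mask often carry background depth, so the mask is
    shrunk by `erode` pixels before back-projection.
    """
    m = mask.astype(bool)
    # Naive 4-neighbour erosion: a pixel survives only if all of its
    # up/down/left/right neighbours are also inside the mask.
    for _ in range(erode):
        shrunk = m.copy()
        shrunk[1:, :] &= m[:-1, :]
        shrunk[:-1, :] &= m[1:, :]
        shrunk[:, 1:] &= m[:, :-1]
        shrunk[:, :-1] &= m[:, 1:]
        m = shrunk
    # Keep only segmented pixels with a valid depth reading.
    v, u = np.nonzero(m & (depth > 0))
    z = depth[v, u]
    # Standard pinhole-camera back-projection into the camera frame.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) array of points
```

With `erode=0` this is the direct pixel-to-point mapping the abstract describes as the baseline; a positive `erode` simply discards the suspect boundary pixels rather than resolving them, which is the limitation the paper's approach addresses.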


Files in this item

This item appears in the following collection(s)
