
    Context-aware 3D object anchoring for mobile robots

    • Author
      Günther, Martin; Ruiz-Sarmiento, José Raúl (Universidad de Málaga); Galindo-Andrades, Cipriano (Universidad de Málaga); González-Jiménez, Antonio Javier (Universidad de Málaga); Hertzberg, Joachim
    • Date
      2018
    • Publisher
      Elsevier
    • Keywords
      Autonomous robots; Automata - Control systems
    • Abstract
      A world model representing the elements in a robot’s environment needs to maintain a correspondence between the objects being observed and their internal representations, which is known as the anchoring problem. Anchoring is a key aspect for an intelligent robot operation, since it enables high-level functions such as task planning and execution. This work presents an anchoring system that continually integrates new observations from a 3D object recognition algorithm into a probabilistic world model. Our system takes advantage of the contextual relations inherent to human-made spaces in order to improve the classification results of the baseline object recognition system. To achieve that, the system builds a graph-based world model containing the objects in the scene (both in the current and previously perceived observations), which is exploited by a Probabilistic Graphical Model (PGM) in order to leverage contextual information during recognition. The world model also enables the system to exploit information about objects beyond the current field of view of the robot sensors. Most importantly, this is done in an online fashion, overcoming both the disadvantages of single-shot recognition systems (e.g., limited sensor aperture) and offline recognition systems that require prior registration of all frames of a scene (e.g., dynamic scenes, unsuitability for plan-based robot control). We also propose a novel way to include the outcome of local object recognition methods in the PGM, which results in a decrease in the usually high model learning complexity and an increase in the system performance. The system performance has been assessed with a dataset collected by a mobile robot from restaurant-like settings, obtaining positive results for both its data association and object recognition capabilities. The system has been successfully used in the RACE robotic architecture.
    • URI
      https://hdl.handle.net/10630/34345
    • DOI
      https://doi.org/10.1016/j.robot.2018.08.016
    Files
    2018 - RAS - Context-Aware 3D Object Anchoring for Mobile Robots.pdf (3.311Mb)
    Collections
    • Artículos
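
    The abstract describes an online loop that associates new 3D detections with anchors in a graph-based world model and then refines class beliefs with a Probabilistic Graphical Model. The sketch below illustrates only the data-association core of such a loop under assumed names (Detection, Anchor, WorldModel, and match_radius are hypothetical, and the contextual PGM update is left as a stub); it is not the authors' implementation.

    import math
    from dataclasses import dataclass
    from itertools import count

    @dataclass
    class Detection:
        """One 3D detection from the local recognizer (hypothetical format): position + class scores."""
        position: tuple        # (x, y, z) in a fixed map frame
        class_scores: dict     # e.g. {"mug": 0.7, "bowl": 0.3}

    @dataclass
    class Anchor:
        """Internal representation of one physical object in the world model."""
        anchor_id: int
        position: tuple
        class_scores: dict

    class WorldModel:
        """Graph-like world model: anchors as nodes; contextual relations would become edges."""

        def __init__(self, match_radius=0.25):
            self.match_radius = match_radius   # max distance (m) for data association, assumed value
            self.anchors = {}                  # anchor_id -> Anchor
            self._ids = count()

        def _nearest(self, position):
            """Return (anchor, distance) for the closest existing anchor, or (None, inf)."""
            best, best_d = None, math.inf
            for a in self.anchors.values():
                d = math.dist(a.position, position)
                if d < best_d:
                    best, best_d = a, d
            return best, best_d

        def integrate(self, detections):
            """One online anchoring step: match each detection to an anchor or create a new one."""
            for det in detections:
                anchor, d = self._nearest(det.position)
                if anchor is not None and d <= self.match_radius:
                    # Data association: update the matched anchor (naive averaging of class scores).
                    anchor.position = det.position
                    for cls, score in det.class_scores.items():
                        anchor.class_scores[cls] = 0.5 * (anchor.class_scores.get(cls, 0.0) + score)
                else:
                    # No anchor within the radius: instantiate a new one.
                    aid = next(self._ids)
                    self.anchors[aid] = Anchor(aid, det.position, dict(det.class_scores))
            self._contextual_update()

        def _contextual_update(self):
            """Stub for the PGM step that would refine class beliefs from object co-occurrence."""
            pass

    if __name__ == "__main__":
        wm = WorldModel()
        wm.integrate([Detection((1.0, 0.5, 0.8), {"mug": 0.7, "bowl": 0.3})])
        wm.integrate([Detection((1.05, 0.52, 0.8), {"mug": 0.8, "bowl": 0.2}),
                      Detection((2.0, 1.0, 0.8), {"plate": 0.9})])
        for a in wm.anchors.values():
            print(a.anchor_id, a.position, max(a.class_scores, key=a.class_scores.get))

    In this toy run the second detection near (1.0, 0.5, 0.8) is associated with the existing anchor instead of spawning a new one, and the script prints the most likely class per anchor.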
