
    Machine Learning for Bidirectional Translation between Different Sign and Oral Language.

    • Author
      Saleem, Muhammad Imran
    • Director(s)
      Luque-Nieto, Miguel Ángel (Universidad de Málaga)
    • Date
      2024
    • Defense date
      2023-09-27
    • Publisher
      UMA Editorial
    • Keywords
      Deaf; Sign language; Signal processing; Machine learning (Artificial intelligence); Pattern recognition (Computer science)
    • Abstract
      Deaf and mute (D-M) people are an integral part of society, and it is particularly important to provide them with a platform to communicate without the need for any training or learning. D-M individuals rely on sign language, but effective communication requires that others understand it as well, and learning sign language is a challenge for those with no impairment. In practice, D-M people face communication difficulties mainly because others generally do not know sign language and are therefore unable to communicate with them. This thesis presents a solution to this problem through (i) a system enabling the non-deaf and mute (ND-M) to communicate with D-M individuals without having to learn sign language, and (ii) support for the hand gestures of several sign languages. The hand gestures of D-M people are acquired and processed using deep learning (DL), and multiple-language support is achieved using supervised machine learning (ML). D-M people are provided with a video interface where the hand gestures are acquired, and an audio interface that converts the gestures into speech. Speech from ND-M people is acquired and converted into text and hand-gesture images. The system is easy to use, low cost, reliable, and modular, and is based on a commercial off-the-shelf (COTS) Leap Motion Device (LMD). A supervised ML dataset is created that enables multi-language communication between D-M and ND-M people; it includes three sign-language datasets: American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system has been validated through a series of experiments: hand-gesture detection accuracy exceeds 90% in most scenarios, and falls between 80% and 90% in certain cases due to variations in hand gestures among D-M people. (A minimal illustrative sketch of such a gesture-classification pipeline follows this record.)
    • URI
      https://hdl.handle.net/10630/30616
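
    The abstract describes a supervised gesture-classification pipeline: feature vectors captured from the Leap Motion Device are mapped to sign labels drawn from a per-language dataset, and the predicted sign is then voiced for the ND-M listener. The following Python sketch illustrates only that data flow, under stated assumptions: it is not the thesis implementation. The feature extractor read_hand_features, the label sets, the 30-dimensional feature size, and the training data are all hypothetical stand-ins, and a nearest-neighbour classifier stands in for the deep-learning model.

        # Illustrative sketch only (not the thesis implementation): supervised
        # classification of hand-gesture feature vectors into sign labels.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # Hypothetical stub: one 30-dim feature vector per captured gesture
        # (e.g., fingertip/palm positions as a device like the LMD might report).
        def read_hand_features(rng):
            return rng.normal(size=30)

        # Hypothetical per-language sign vocabularies (the thesis uses ASL,
        # PSL, and SSL datasets; these labels are made up for the demo).
        LANG_LABELS = {
            "ASL": ["hello", "thanks", "yes"],
            "PSL": ["salam", "shukriya", "jee"],
            "SSL": ["hola", "gracias", "sí"],
        }

        def train_demo_model(language, rng):
            # Synthetic training set: a cluster of noisy samples per sign,
            # mimicking natural variation in hand gestures between signers.
            X, y = [], []
            for label in LANG_LABELS[language]:
                center = rng.normal(size=30)
                for _ in range(20):
                    X.append(center + 0.1 * rng.normal(size=30))
                    y.append(label)
            model = KNeighborsClassifier(n_neighbors=3)
            model.fit(np.array(X), y)
            return model

        rng = np.random.default_rng(0)
        model = train_demo_model("ASL", rng)
        sample = read_hand_features(rng).reshape(1, -1)
        print("Predicted sign:", model.predict(sample)[0])
        # In the full system, the predicted sign would be passed to a
        # text-to-speech stage; the reverse path would run speech-to-text
        # and display the matching hand-gesture images.

    The classifier here is deliberately simple; the point is the shape of the pipeline (features in, sign label out, then speech synthesis), not the model choice, which in the thesis is a deep-learning network trained on the three sign-language datasets.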
    Files
    TD_SALEEM, Muhammad Imran.pdf (18.09 MB)
    Collections
    • Tesis doctorales
