Show simple item record

dc.contributor.advisor: Luque-Nieto, Miguel Ángel
dc.contributor.author: Saleem, Muhammad Imran
dc.contributor.other: Ingeniería de Comunicaciones [es_ES]
dc.date.accessioned: 2024-02-23T08:00:44Z
dc.date.available: 2024-02-23T08:00:44Z
dc.date.created: 2023-09-02
dc.date.issued: 2024
dc.date.submitted: 2023-09-27
dc.identifier.uri: https://hdl.handle.net/10630/30616
dc.description.abstract: Deaf and mute (D-M) people are an integral part of society, and it is particularly important to provide them with a platform to communicate without the need for any training or learning. D-M individuals rely on sign language, but effective communication requires that others also understand sign language, and learning it is a challenge for those with no impairment. In practice, D-M people face communication difficulties mainly because others, who generally do not know sign language, are unable to communicate with them. This thesis presents a solution to this problem through (i) a system that enables non-deaf and mute (ND-M) people to communicate with D-M individuals without the need to learn sign language, and (ii) support for hand gestures from different sign languages. The hand gestures of D-M people are acquired and processed using deep learning (DL), and multiple language support is achieved using supervised machine learning (ML). D-M people are provided with a video interface where the hand gestures are acquired, and an audio interface that converts the gestures into speech. Speech from ND-M people is acquired and converted into text and hand gesture images. The system is easy to use, low-cost, reliable, and modular, and is based on a commercial off-the-shelf (COTS) Leap Motion Device (LMD). A supervised ML dataset is created that provides multi-language communication between D-M and ND-M people and includes three sign language datasets: American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system has been validated through a series of experiments, where the hand gesture detection accuracy is above 90% in most scenarios, and between 80% and 90% in certain scenarios due to variations in hand gestures among D-M people. [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: UMA Editorial [es_ES]
dc.rights: info:eu-repo/semantics/openAccess [es_ES]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/ [*]
dc.subject: Sordos [es_ES]
dc.subject: Lenguaje por signos [es_ES]
dc.subject: Procesado de señales [es_ES]
dc.subject: Aprendizaje automático (Inteligencia artificial) [es_ES]
dc.subject: Reconocimiento de formas (Informática) [es_ES]
dc.subject.other: Deaf and mute person [es_ES]
dc.subject.other: Hand gesture recognition [es_ES]
dc.subject.other: Multi-language processing [es_ES]
dc.subject.other: Sign language [es_ES]
dc.subject.other: Supervised machine learning [es_ES]
dc.title: Machine Learning for Bidirectional Translation between Different Sign and Oral Language. [es_ES]
dc.type: info:eu-repo/semantics/doctoralThesis [es_ES]
dc.centro: E.T.S.I. Telecomunicación [es_ES]
dc.rights.cc: Attribution-NonCommercial-NoDerivatives 4.0 Internacional [*]
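
The abstract above describes a two-stage pipeline: hand gestures captured from a Leap Motion Device are classified, and the recognised gesture is mapped to a word in the selected sign/oral language before being spoken aloud. The following is a minimal illustrative sketch of that idea only; the gesture classes, translation table, synthetic landmark features, and the use of a random forest in place of the thesis's deep learning model are all assumptions made for demonstration, not the author's implementation.

```python
# Illustrative sketch only: a two-stage gesture-to-word pipeline loosely modelled on the
# abstract above. All names, feature shapes, and labels are hypothetical; the thesis itself
# uses deep learning on real Leap Motion data rather than this synthetic setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

RNG = np.random.default_rng(0)

# Hypothetical gesture classes and their word equivalents in the three sign languages
# mentioned in the abstract (ASL, PSL, SSL).
GESTURES = ["hello", "thanks", "yes"]
TRANSLATIONS = {
    "hello":  {"ASL": "hello",  "PSL": "assalam",  "SSL": "hola"},
    "thanks": {"ASL": "thanks", "PSL": "shukriya", "SSL": "gracias"},
    "yes":    {"ASL": "yes",    "PSL": "jee",      "SSL": "sí"},
}

def synthetic_landmarks(label_idx: int, n: int = 200) -> np.ndarray:
    """Fake 3-D hand-landmark features (21 joints x 3 coordinates = 63 values per sample),
    clustered per gesture class. A real system would read these from the Leap Motion SDK."""
    centre = RNG.normal(size=63) + 5.0 * label_idx
    return centre + RNG.normal(scale=0.5, size=(n, 63))

# Build a synthetic training set: landmark features X and gesture labels y.
X = np.vstack([synthetic_landmarks(i) for i in range(len(GESTURES))])
y = np.repeat(np.arange(len(GESTURES)), 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Stage 1: supervised gesture classifier (random forest here, standing in for the DL model).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"gesture classification accuracy: {clf.score(X_test, y_test):.2f}")

# Stage 2: map the recognised gesture to a word in each supported language; in the full
# system this word would be passed to a text-to-speech engine for the audio interface.
gesture = GESTURES[clf.predict(X_test[:1])[0]]
for lang in ("ASL", "PSL", "SSL"):
    print(f"{lang}: {TRANSLATIONS[gesture][lang]}")
```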


Files in this item

This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 Internacional.