Simple item record

dc.contributor.author    Carnero, Alejandro
dc.contributor.author    Martín-Fernández, Cristian
dc.contributor.author    Torres, Daniel R.
dc.contributor.author    Rubio-Muñoz, Bartolomé
dc.contributor.author    Díaz-Rodríguez, Manuel
dc.date.accessioned    2024-09-26T07:16:30Z
dc.date.available    2024-09-26T07:16:30Z
dc.date.issued    2021-08
dc.identifier.citation    Carnero, Alejandro, et al. "Managing and deploying distributed and deep neural models through Kafka-ML in the cloud-to-things continuum." IEEE Access 9 (2021): 125478-125495.
dc.identifier.uri    https://hdl.handle.net/10630/33352
dc.description.abstract    The Internet of Things (IoT) is constantly growing, generating an uninterrupted data stream pipeline to monitor physical world information. Hence, Artificial Intelligence (AI) continuously evolves, improving life quality and business and academic activities. Kafka-ML is an open-source framework that focuses on managing Machine Learning (ML) and AI pipelines through data streams in production scenarios. Consequently, it facilitates Deep Neural Network (DNN) deployments in real-world applications. However, this framework does not consider the distribution of DNN models on the Cloud-to-Things Continuum. Distributed DNN significantly reduces latency, allocating the computational and network load between different infrastructures. In this work, we have extended our Kafka-ML framework to support the management and deployment of Distributed DNN throughout the Cloud-to-Things Continuum. Moreover, we have considered the possibility of including early exits in the Cloud-to-Things layers to provide immediate responses upon predictions. We have evaluated these new features by adapting and deploying the DNN model AlexNet in three different Cloud-to-Things scenarios. Experiments demonstrate that Kafka-ML can significantly improve response time and throughput by distributing DNN models throughout the Cloud-to-Things Continuum, compared to a Cloud-only deployment.
dc.description.sponsorship    10.13039/501100004837 - Spanish project "rFOG: Improving Latency and Reliability of Offloaded Computation to the FOG for Critical Services" (Grant Number: RT2018-099777-B-100). 10.13039/501100006461 - "IntegraDos: Providing Real-Time Services for the Internet of Things through Cloud Sensor Integration" (Grant Number: PY20_00788). 10.13039/100009473 - "Advanced Monitoring System Based on Deep Learning Services in Fog" (Grant Number: UMA18FEDERJA-215)
dc.language.iso    eng
dc.publisher    IEEE
dc.rights    Attribution 4.0 International
dc.rights.uri    http://creativecommons.org/licenses/by/4.0/
dc.subject    Internet of Things
dc.subject.other    Distributed deep neural networks
dc.subject.other    data streams
dc.subject.other    cloud computing
dc.subject.other    fog/edge computing
dc.subject.other    machine learning
dc.subject.other    artificial intelligence
dc.title    Managing and Deploying Distributed and Deep Neural Models Through Kafka-ML in the Cloud-to-Things Continuum
dc.type    journal article
dc.centro    E.T.S.I. Informática
dc.identifier.doi    10.1109/ACCESS.2021.3110291
dc.type.hasVersion    VoR
dc.departamento    Instituto de Tecnología e Ingeniería del Software de la Universidad de Málaga
dc.rights.accessRights    open access
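
The abstract describes distributing a DNN across the Cloud-to-Things Continuum with optional early exits so that confident predictions can be answered close to the data source. The Python sketch below is not taken from the paper or from the Kafka-ML code base; it only illustrates that idea with an assumed tiny Keras model, where an "edge" sub-model emits an early prediction plus an intermediate feature map, and a "cloud" sub-model refines those features when the edge is not confident. All layer sizes, model names, and the confidence threshold are illustrative assumptions.

# Minimal sketch (not from the paper) of an early-exit distributed DNN.
# Layer sizes, names, and CONFIDENCE_THRESHOLD are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Edge sub-model: a small convolutional stage plus an early-exit classifier.
edge_in = layers.Input(shape=(32, 32, 3), name="image")
x = layers.Conv2D(16, 3, activation="relu")(edge_in)
x = layers.MaxPooling2D()(x)
early_exit = layers.Dense(10, activation="softmax", name="early_exit")(
    layers.GlobalAveragePooling2D()(x)
)
edge_model = Model(edge_in, [x, early_exit], name="edge")

# Cloud sub-model: consumes the edge's intermediate feature map.
cloud_in = layers.Input(shape=tuple(edge_model.outputs[0].shape[1:]), name="features")
y = layers.Conv2D(32, 3, activation="relu")(cloud_in)
y = layers.GlobalAveragePooling2D()(y)
final_exit = layers.Dense(10, activation="softmax", name="final_exit")(y)
cloud_model = Model(cloud_in, final_exit, name="cloud")

# Inference: answer at the edge if the early exit is confident enough,
# otherwise hand the intermediate features over to the cloud sub-model.
CONFIDENCE_THRESHOLD = 0.8  # illustrative value
image = np.random.rand(1, 32, 32, 3).astype("float32")
features, early_pred = edge_model(image)
if float(tf.reduce_max(early_pred)) >= CONFIDENCE_THRESHOLD:
    print("Early exit prediction:", int(tf.argmax(early_pred, axis=-1)[0]))
else:
    final_pred = cloud_model(features)
    print("Cloud prediction:", int(tf.argmax(final_pred, axis=-1)[0]))

In the setting described by the abstract, the hand-off of intermediate features between the edge and cloud sub-models would travel over Kafka data streams managed by Kafka-ML rather than an in-process function call as in this sketch.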