Show simple item record
Managing and Deploying Distributed and Deep Neural Models Through Kafka-ML in the Cloud-to-Things Continuum
dc.contributor.author | Carnero, Alejandro | |
dc.contributor.author | Martín-Fernández, Cristian | |
dc.contributor.author | Torres, Daniel R. | |
dc.contributor.author | Rubio-Muñoz, Bartolomé | |
dc.contributor.author | Díaz-Rodríguez, Manuel | |
dc.date.accessioned | 2024-09-26T07:16:30Z | |
dc.date.available | 2024-09-26T07:16:30Z | |
dc.date.issued | 2021-08 | |
dc.identifier.citation | Carnero, Alejandro, et al. "Managing and deploying distributed and deep neural models through Kafka-ML in the cloud-to-things continuum." IEEE Access 9 (2021): 125478-125495. | es_ES |
dc.identifier.uri | https://hdl.handle.net/10630/33352 | |
dc.description.abstract | The Internet of Things (IoT) is constantly growing, generating an uninterrupted pipeline of data streams that monitor physical-world information. In parallel, Artificial Intelligence (AI) continuously evolves, improving quality of life as well as business and academic activities. Kafka-ML is an open-source framework that focuses on managing Machine Learning (ML) and AI pipelines through data streams in production scenarios. Consequently, it facilitates Deep Neural Network (DNN) deployments in real-world applications. However, this framework does not consider the distribution of DNN models on the Cloud-to-Things Continuum. Distributed DNNs significantly reduce latency by allocating the computational and network load across different infrastructures. In this work, we have extended our Kafka-ML framework to support the management and deployment of Distributed DNNs throughout the Cloud-to-Things Continuum. Moreover, we have considered the possibility of including early exits in the Cloud-to-Things layers to provide immediate responses upon predictions. We have evaluated these new features by adapting and deploying the AlexNet DNN model in three different Cloud-to-Things scenarios. Experiments demonstrate that Kafka-ML can significantly improve response time and throughput by distributing DNN models throughout the Cloud-to-Things Continuum, compared to a Cloud-only deployment. | es_ES |
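Note: the following is a minimal illustrative sketch of the kind of distributed DNN with an early exit that the abstract describes, assuming TensorFlow/Keras. The layer sizes, model names, and confidence threshold are hypothetical and are not the AlexNet adaptation or the Kafka-ML API used in the paper.

    # Hypothetical sketch: split a small CNN into an edge submodel with an
    # early exit and a cloud submodel that continues from the edge's
    # intermediate feature map (all names and sizes are illustrative).
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    inputs = tf.keras.Input(shape=(32, 32, 3))

    # Edge/fog portion: first convolutional block plus an early-exit classifier.
    x = layers.Conv2D(32, 3, activation="relu")(inputs)   # -> 30x30x32
    x = layers.MaxPooling2D()(x)                          # -> 15x15x32
    early_features = layers.GlobalAveragePooling2D()(x)
    early_exit = layers.Dense(10, activation="softmax", name="early_exit")(early_features)
    edge_model = Model(inputs, [x, early_exit], name="edge_submodel")

    # Cloud portion: consumes the edge's intermediate feature map.
    cloud_in = tf.keras.Input(shape=edge_model.outputs[0].shape[1:])
    y = layers.Conv2D(64, 3, activation="relu")(cloud_in)
    y = layers.GlobalAveragePooling2D()(y)
    final_exit = layers.Dense(10, activation="softmax", name="final_exit")(y)
    cloud_model = Model(cloud_in, final_exit, name="cloud_submodel")

    # At inference time the edge answers immediately from the early exit; if
    # its confidence is low, the intermediate tensor would be forwarded
    # (e.g. via a Kafka topic) to the cloud submodel for the final prediction.
    features, early_pred = edge_model(tf.random.normal((1, 32, 32, 3)))
    if tf.reduce_max(early_pred) < 0.8:  # hypothetical confidence threshold
        final_pred = cloud_model(features)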
dc.description.sponsorship | 10.13039/501100004837-Spanish Projects “rFOG: Improving Latency and Reliability of Offloaded Computation to the FOG for Critical Services” (Grant Number: RT2018-099777-B-100). 10.13039/501100006461-“IntegraDos: Providing Real-Time Services for the Internet of Things through Cloud Sensor Integration” (Grant Number: PY20_00788). 10.13039/100009473-“Advanced Monitoring System Based on Deep Learning Services in Fog” (Grant Number: UMA18FEDERJA-215) | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | IEEE | es_ES |
dc.rights | Attribution 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.subject | Internet of Things | es_ES |
dc.subject.other | Distributed deep neural networks | es_ES |
dc.subject.other | data streams | es_ES |
dc.subject.other | cloud computing | es_ES |
dc.subject.other | fog/edge computing | es_ES |
dc.subject.other | machine learning | es_ES |
dc.subject.other | artificial intelligence | es_ES |
dc.title | Managing and Deploying Distributed and Deep Neural Models Through Kafka-ML in the Cloud-to-Things Continuum | es_ES |
dc.type | journal article | es_ES |
dc.centro | E.T.S.I. Informática | es_ES |
dc.identifier.doi | 10.1109/ACCESS.2021.3110291 | |
dc.type.hasVersion | VoR | es_ES |
dc.departamento | Instituto de Tecnología e Ingeniería del Software de la Universidad de Málaga | |
dc.rights.accessRights | open access | es_ES |