Exploring the use of Deep Reinforcement Learning to allocate tasks in Critical Adaptive Distributed Embedded Systems

Authors: Ramón Rotaeche, Alberto Ballesteros, Julián Proenza Arenas
In Proceedings of the IEEE 26th International Conference on Emerging Technologies and Factory Automation (ETFA 2021), Västerås (Sweden), 2021.

Critical Adaptive Distributed Embedded Systems (CADES) must carry out a set of functionalities while fulfilling their associated real-time and dependability requirements. Moreover, they must be able to reconfigure themselves within a bounded time as the operational context changes. Finding a proper configuration can be non-trivial and time-consuming. Several studies have proposed Deep Reinforcement Learning (DRL) approaches to solve combinatorial optimization problems. In this paper, we explore the application of such approaches to CADES by solving a simple task allocation problem with DRL and comparing the results against three popular heuristics. The results show that DRL outperforms two of them and comes very close to the third, while requiring significantly less time to generate a solution.
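The abstract does not reproduce the paper's problem formulation or name its three heuristics, but the flavour of the task allocation problem can be illustrated with a minimal sketch: assign tasks with utilization demands to nodes of limited capacity. The first-fit-decreasing greedy heuristic below is the kind of baseline a learned allocation policy would be compared against; the task set, node capacities, and the choice of heuristic are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative sketch (assumed setup, not the paper's formulation):
# allocate tasks with utilization demands onto nodes of limited capacity
# using a first-fit-decreasing heuristic, a common greedy baseline.

def first_fit_decreasing(task_utils, node_capacity, num_nodes):
    """Place each task on the first node with enough remaining capacity.

    Tasks are considered in order of decreasing demand, since large tasks
    are hardest to place once the nodes are partially filled. Returns a
    mapping task_index -> node_index, or None if some task cannot be
    placed (allocation infeasible under this heuristic).
    """
    remaining = [node_capacity] * num_nodes
    order = sorted(range(len(task_utils)), key=lambda i: -task_utils[i])
    assignment = {}
    for i in order:
        for node in range(num_nodes):
            if remaining[node] >= task_utils[i]:
                remaining[node] -= task_utils[i]
                assignment[i] = node
                break
        else:
            return None  # no node can host this task
    return assignment

if __name__ == "__main__":
    tasks = [0.5, 0.4, 0.3, 0.3, 0.2]  # per-task demands (assumed values)
    alloc = first_fit_decreasing(tasks, node_capacity=1.0, num_nodes=2)
    print(alloc)
```

A DRL allocator would instead learn a policy that proposes such assignments directly, which is where the reported speed advantage over iterative heuristics comes from.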

