Critical Adaptive Distributed Embedded Systems (CADES) must carry out a set of functionalities while fulfilling their associated real-time and dependability requirements. Moreover, they must be able to reconfigure themselves in bounded time as the operational context changes. Finding a proper configuration can be non-trivial and time-consuming. Several studies have proposed Deep Reinforcement Learning (DRL) approaches to solve combinatorial optimization problems. In this paper, we explore the application of such approaches to CADES by solving a simple task allocation problem with DRL and comparing the results against three popular heuristics. The results show that DRL outperforms two of them and comes very close to the third, while requiring significantly less time to generate a solution.
Authors: Ramón Rotaeche | Alberto Ballesteros | Julián Proenza Arenas
In Proceedings of the IEEE 26th International Conference on Emerging Technologies and Factory Automation (ETFA 2021), Västerås (Sweden), 2021.
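The abstract does not detail the exact problem formulation, so the sketch below is only an informal illustration of the kind of setup on which DRL agents and allocation heuristics are typically compared: tasks with utilization demands are assigned sequentially to nodes with limited capacity, and a first-fit heuristic serves as a baseline. All class names, parameters, and the reward scheme are assumptions made for this example and are not taken from the paper.

```python
import random


class TaskAllocationEnv:
    """Toy sequential task-to-node allocation environment (illustrative only).

    At each step the agent assigns the next task, characterized by its CPU
    utilization, to one of the nodes. An assignment that would exceed a
    node's capacity ends the episode with a penalty; placing all tasks
    yields a positive reward.
    """

    def __init__(self, task_utils, num_nodes, node_capacity=1.0):
        self.task_utils = list(task_utils)
        self.num_nodes = num_nodes
        self.node_capacity = node_capacity
        self.reset()

    def reset(self):
        self.loads = [0.0] * self.num_nodes
        self.next_task = 0
        return self._state()

    def _state(self):
        # Observation: current node loads plus the utilization of the next task.
        util = (self.task_utils[self.next_task]
                if self.next_task < len(self.task_utils) else 0.0)
        return tuple(self.loads) + (util,)

    def step(self, node):
        util = self.task_utils[self.next_task]
        if self.loads[node] + util > self.node_capacity:
            return self._state(), -1.0, True   # invalid placement ends the episode
        self.loads[node] += util
        self.next_task += 1
        done = self.next_task == len(self.task_utils)
        reward = 1.0 if done else 0.0          # reward only for a complete allocation
        return self._state(), reward, done


def first_fit(env):
    """Simple heuristic baseline: place each task on the first node that fits."""
    state, done = env.reset(), False
    while not done:
        util = state[-1]
        candidates = [i for i, load in enumerate(env.loads)
                      if load + util <= env.node_capacity]
        if not candidates:
            return False                        # no feasible placement found
        state, _, done = env.step(candidates[0])
    return True


if __name__ == "__main__":
    random.seed(0)
    tasks = [round(random.uniform(0.1, 0.4), 2) for _ in range(12)]
    env = TaskAllocationEnv(tasks, num_nodes=4)
    print("first-fit feasible:", first_fit(env))
```

A DRL agent would interact with an environment of this shape by learning a policy over the node-choice action space; the heuristics compared in the paper would instead decide placements with fixed rules such as the first-fit example above.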