Very little work has been done to test multi-robot task allocation (MRTA)
swarm algorithms in real-time systems, where each task must be executed before a
deadline. In this paper, a comparative study is presented between several swarm
algorithms and a centralized task allocation method. Moreover, a new swarm algorithm is proposed that significantly improves the results while requiring very little communication capacity between robots. This new algorithm reduces the interference that arises when two or more robots select the same task to execute. A very simple but effective learning algorithm has also been implemented to tune the parameters of this algorithm. To verify the results, a foraging task was used in different environments. The results show that the performance of this algorithm is very close to that of some centralized approaches.
Publication type: Conferences
TRIDENT is a STREP project recently approved by the European Commission, whose proposal was submitted to ICT Call 4 of the 7th Framework Programme. The project proposes a new methodology for multipurpose underwater intervention tasks. To that end, a cooperative team formed by an Autonomous Surface Craft and an Intervention Autonomous Underwater Vehicle will be used. The proposed methodology splits the mission into two stages devoted mainly to survey and intervention tasks, respectively. The project brings together research skills specific to marine environments in navigation and mapping for underwater robotics, multi-sensory perception, intelligent control architectures, vehicle-manipulator systems and dexterous manipulation. TRIDENT is a three-year project and is planned to start in the first months of 2010.
We present and evaluate a method for estimating the relevance and calibrating the values of parameters of an evolutionary algorithm. The method provides an information-theoretic measure of how sensitive a parameter is to the choice of its value. This can be used to estimate the relevance of parameters, to choose between different possible sets of parameters, and to allocate resources to the calibration of relevant parameters. The method calibrates the evolutionary algorithm to reach a high performance, while retaining a maximum of robustness and generalizability. We demonstrate the method on an agent-based application from evolutionary economics and show how the method helps to design an evolutionary algorithm that allows the agents to achieve a high welfare with a minimum of algorithmic complexity.
The main objective of this paper is to present and evaluate a method that helps to calibrate the parameters of an evolutionary algorithm in a systematic and semi-automated manner. The method for Relevance Estimation and Value Calibration of EA parameters (REVAC) is empirically evaluated in two different ways. First, we use abstract test cases reflecting the typical properties of EA parameter spaces. Here we observe that REVAC is able to approximate the exact (hand-coded) relevance of parameters, and that it works robustly with measurement noise that is highly variable and not normally distributed. Second, we use REVAC for calibrating GAs for a number of common objective functions. Here we obtain a common-sense validation: REVAC finds the mutation rate pm
to be much more sensitive than the crossover rate pc, and it recommends intuitively sound values: pm between 0.01 and 0.1, and 0.6 ≤ pc ≤ 1.0.
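To make the two parameters concrete, here is a minimal generational GA on the OneMax problem. This is an illustrative sketch, not the GA used in the paper: the problem, population size, and selection scheme are assumptions. The default values pm = 0.05 and pc = 0.8 fall inside the ranges recommended above (0.01 ≤ pm ≤ 0.1, 0.6 ≤ pc ≤ 1.0).

```python
import random

def one_max(bits):
    # Fitness = number of 1-bits; optimum is the all-ones string.
    return sum(bits)

def evolve(n_bits=30, pop_size=40, pm=0.05, pc=0.8, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Binary tournament parent selection.
        def pick():
            a, b = rng.sample(pop, 2)
            return a if one_max(a) >= one_max(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < pc:              # one-point crossover, prob. pc
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Bit-flip mutation: each gene flips with probability pm.
            child = [(1 - g) if rng.random() < pm else g for g in child]
            offspring.append(child)
        pop = offspring
    return max(one_max(ind) for ind in pop)
```

Raising pm well above 0.1 turns the search into a near-random walk, while pm below 0.01 slows progress once crossover runs out of diversity, which matches the intuition behind the recommended range.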
We study the benefit of measurement replication when using the Relevance Estimation and Value Calibration (REVAC) method to calibrate a genetic algorithm. We find that replication is not essential to REVAC, making it a strong alternative to existing statistical tools, which are computationally costly.
Calibrating the parameters of an evolutionary algorithm (EA) is a laborious task. The highly stochastic nature of an EA typically leads to a high variance of the measurements. The standard statistical method to reduce variance is measurement replication, i.e., averaging over several test runs with identical parameter settings. The computational cost of measurement replication scales with the variance and is
often too high to allow for results of statistical significance. In this paper we study an alternative: the REVAC method for Relevance Estimation and Value Calibration, and we investigate how different levels of measurement replication influence the cost and quality of its calibration results. Two sets of experiments are reported: calibrating a genetic algorithm on standard benchmark problems, and calibrating a complex simulation in evolutionary agent-based economics. We find that measurement
replication is not essential to REVAC, which emerges as a strong and efficient alternative to existing statistical methods.
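The variance-versus-cost trade-off behind measurement replication can be shown in a few lines. This is a generic illustration, not the paper's experiment: the noisy utility function is a stand-in for one stochastic EA run with fixed parameter settings.

```python
import random
import statistics

# Measurement replication: estimate the utility of a parameter vector by
# averaging r independent runs.  The variance of the mean shrinks roughly
# as 1/r, but the computational cost grows linearly in r.
def noisy_utility(rng):
    # Stand-in for one stochastic EA run (true utility 10, noise sd 2).
    return 10.0 + rng.gauss(0.0, 2.0)

def replicated_estimate(r, rng):
    return statistics.mean(noisy_utility(rng) for _ in range(r))

rng = random.Random(42)
singles = [replicated_estimate(1, rng) for _ in range(200)]   # no replication
averaged = [replicated_estimate(25, rng) for _ in range(200)]  # r = 25
print(statistics.stdev(singles) > statistics.stdev(averaged))  # True
```

Cutting the standard deviation of the estimate by a factor of 5 requires 25 times as many runs, which is why avoiding replication altogether, as REVAC does, is such a large saving.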
We present an empirical study on the impact of different design choices on the performance of an evolutionary algorithm (EA). Four EA components are considered—parent selection, survivor selection, recombination and mutation—and for each component we study the impact of choosing the right operator, and of tuning its free parameter(s). We tune 120 different combinations of EA operators to 4 different classes of fitness landscapes, and measure the cost of tuning. We find that components differ greatly in importance. Typically the choice of operator for parent selection has the greatest impact, and mutation needs the most tuning. Regarding individual EAs however, the impact of design choices for one component depends on the choices for other components, as well as on the available amount of resources for tuning.
Calibrating an evolutionary algorithm (EA) means finding the right values of algorithm parameters for a given problem. This issue is highly relevant, because it has a high impact (the performance of EAs does depend on appropriate parameter values), and it occurs frequently (parameter values
must be set before all EA runs). This issue is also highly challenging, because finding good parameter values is a difficult task. In this paper we propose an algorithmic approach to EA calibration by describing a method, called REVAC, that can determine good parameter values in an automated manner on any given problem instance. We validate this method by comparing it with the conventional hand-based calibration and another algorithmic approach based on the classical meta-GA. Comparative experiments on a set of randomly generated problem instances with various levels of multi-modality show that GAs calibrated with REVAC can outperform those calibrated
by hand and by the meta-GA.
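The flavor of such an automated calibration loop can be sketched as follows. This is a heavily simplified, assumption-laden reconstruction, not the published REVAC algorithm: the table size, parent count, and sampling rule are illustrative choices. The idea it captures is that the surviving range of each parameter indicates its relevance, with a narrow range meaning a sensitive parameter.

```python
import random

# Simplified REVAC-style calibration sketch (illustrative only): keep a
# table of parameter vectors, repeatedly replace the worst vector with a
# child sampled from the best vectors, and read off the surviving range
# of each parameter.  Relevant parameters converge to a narrow interval.
def calibrate(utility, n_params, table_size=20, parents=10, steps=200, seed=0):
    rng = random.Random(seed)
    table = [[rng.random() for _ in range(n_params)] for _ in range(table_size)]
    scores = [utility(v) for v in table]
    for _ in range(steps):
        ranked = sorted(range(table_size), key=lambda i: scores[i], reverse=True)
        best = [table[i] for i in ranked[:parents]]
        # Child: each parameter drawn uniformly from the parents' interval.
        child = [rng.uniform(min(v[j] for v in best), max(v[j] for v in best))
                 for j in range(n_params)]
        worst = ranked[-1]
        table[worst], scores[worst] = child, utility(child)
    # Width of the surviving interval per parameter (narrow = relevant).
    return [max(v[j] for v in table) - min(v[j] for v in table)
            for j in range(n_params)]
```

Running this with a utility that depends only on the first parameter, e.g. `lambda v: -abs(v[0] - 0.3)`, drives the first parameter's interval much narrower than the second's, mirroring how relevance is inferred from value distributions.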