# Evolutionary Algorithms and Dynamic Optimization Problems

Karsten Weicker
Osnabrück, Germany, Der andere Verlag, 2003.

### Abstract

This thesis examines evolutionary algorithms, a universal optimization method, applied to dynamic problems, i.e.\ problems that change during optimization. The thesis is motivated by the lack of foundations in the field and by the incomparability of most publications, which are primarily empirical in nature.

To establish a basis for comparing evolutionary algorithms applied to different time-dependent problems, the first part of the thesis introduces a mathematical framework for describing the majority of dynamic problems. Within the framework, the dynamics of a problem are defined exactly as the changes between discrete time steps. At each time step, the overall fitness function is defined, at each point of the search space, as the maximum of several static component functions. Each component function may be modified between time steps by stretching it with respect to its optimum, by rescaling its fitness values, and by coordinate transformations that relocate the optimum in the search space. The properties of these modifications can be described mathematically within the framework, which leads to a classification of the considered dynamic problems. As a result, examinations of distinct problems can be integrated into an overall picture using their similarities concerning the dynamics. On the one hand, this is used to create a mapping between problem classes and special techniques used in dynamic environments. The mapping is based on an analysis of the literature in the field and is a first step toward identifying design patterns in evolutionary computation that support the development of evolutionary algorithms for new dynamic problems. On the other hand, the problem classes of the framework serve as the basis for an examination of performance measures in dynamic environments.
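The fitness construction described above can be sketched as follows (the symbols here are illustrative choices, not necessarily the thesis's exact notation): with static components $g_1,\dots,g_n$, a fitness-rescaling factor $s_i(t)$, a stretching factor $\sigma_i(t)$ acting relative to the component's optimum $x_i^*$, and a coordinate transformation $T_{i,t}$ that relocates the optimum, the overall fitness at time step $t$ might take the form

$$ f_t(x) \;=\; \max_{1 \le i \le n} \; s_i(t)\, g_i\!\big( x_i^* + \sigma_i(t)\,\big(T_{i,t}(x) - x_i^*\big) \big). $$

Here the stretching $\sigma_i(t)$ scales the landscape of a component around its optimum, $s_i(t)$ rescales its fitness values, and $T_{i,t}$ (e.g.\ a translation or rotation of the search space) moves the optimum.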

The second part of the thesis analyzes one specific technique, local variation, applied in detail to one problem class, tracking problems (drifting landscapes). For this purpose, the optimization process of a $(1,\lambda)$-strategy applied to a simple two-dimensional problem is modeled as a Markov chain. This enables the exact computation of the probability of being within a certain distance of the optimum at any time step of the optimization process. By varying the strength of the dynamics, the step-width parameter of the mutation, certain paradigms concerning the mutation operator, and the population size, findings concerning the optimal calibration of the optimization method are derived. This leads to ten qualitative design rules for the application of local variation to tracking problems. In particular, statements are made concerning the choice of the step-width parameter, directed mutations, and mutations penalizing small steps. Good settings of the offspring population size are deduced by correlating the number of fitness evaluations with the strength of the dynamics. Moreover, external memorizing techniques and self-adaptation mechanisms are considered. To probe the limits of the technique, an extreme case of dynamic problems is analyzed that is hard for current self-adaptation techniques; this problem may serve as a benchmark for the development of self-adaptation mechanisms tailored to dynamic problems. The design rules are validated in a rudimentary fashion on a small set of test functions using evolution strategies as the optimization algorithm. Two new techniques to cope with the new benchmark are proposed.
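The Markov-chain computation can be illustrated with a deliberately simplified model (not the thesis's two-dimensional setting): the state is the integer distance to the optimum, each of the $\lambda$ offspring changes this distance according to an assumed mutation step distribution, $(1,\lambda)$-selection keeps the offspring closest to the optimum, and the optimum then drifts one unit away. All numerical choices below (`K`, `lam`, `drift`, `p_step`) are illustrative assumptions.

```python
import numpy as np

# Hypothetical discretized model: state = integer distance to the optimum, 0..K.
K = 20          # largest tracked distance (larger distances are clamped to K)
lam = 5         # offspring population size lambda of the (1,lambda)-strategy
drift = 1       # the optimum moves one unit away per generation (worst case)
p_step = {-2: 0.10, -1: 0.25, 0: 0.30, 1: 0.25, 2: 0.10}  # assumed mutation steps

def offspring_dist(i):
    """Distribution of a single offspring's distance, given parent distance i."""
    p = np.zeros(K + 1)
    for step, prob in p_step.items():
        p[min(max(i + step, 0), K)] += prob
    return p

def selection_row(i):
    """(1,lambda)-selection: the next parent is the offspring with the
    smallest distance, i.e. the minimum of lam i.i.d. offspring distances."""
    cdf = np.cumsum(offspring_dist(i))
    row = np.zeros(K + 1)
    prev = 0.0
    for j in range(K + 1):
        cur = 1.0 - (1.0 - cdf[j]) ** lam   # P(min distance <= j)
        row[j] = cur - prev
        prev = cur
    return row

# Transition matrix of one generation: selection followed by the drift.
T = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    sel = selection_row(i)
    for j in range(K + 1):
        T[i, min(j + drift, K)] += sel[j]

# Iterate the state distribution, starting far away from the optimum.
dist = np.zeros(K + 1)
dist[K] = 1.0
for t in range(200):
    dist = dist @ T

p_close = dist[:5].sum()   # probability of tracking within distance 4
```

With these step widths the best of five offspring moves inward often enough to outrun the drift, so the distribution concentrates near the optimum; shrinking the steps or the offspring count below the drift rate makes `p_close` collapse, which is exactly the kind of calibration effect the design rules address.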