Due to the steadily increasing number of proposed optimization algorithms, selecting the “best” algorithm for a given optimization problem has become a very complex challenge. A formal treatment of this issue dates back to as early as 1976 and is known as the algorithm selection (AS) problem. As it is well known that no single algorithm is best on all kinds of optimization problems, one is particularly interested in recommending algorithms for specific classes of optimization problems, which differ in the problem characteristics they exhibit. To date, the AS problem remains unsolved except for very special cases.
Systematically designed benchmarks form the basis for meaningfully addressing the AS problem. In recent years, several benchmark sets comprising test problems with different characteristics were constructed, both for single-objective and multiobjective continuous optimization tasks. However, the performance assessment and ranking of the participating algorithms was usually addressed in a rather ad hoc manner, so the validity of the results may be questionable. In general, the outcome of a benchmark study strongly depends on the analysis and interpretation methods used. Mersmann et al. (2010, see link below) introduced a systematic benchmarking framework. Ideally, the test problem set is constructed to represent the whole problem domain, so that the benchmarking results generalize.
However, performance rankings of optimization algorithms alone are not sufficient to recommend algorithms for unseen problems. One is rather interested in deriving rules that describe how problem properties influence algorithm performance, and in grouping test problems into classes on which the optimization algorithms perform similarly. Exploratory Landscape Analysis (ELA) addresses these issues by deriving cheaply computable problem features, from which models relating features to algorithm performance can be built using the benchmark experiments. The ultimate goal is an accurate prediction of the best-suited algorithm for an arbitrary optimization problem based on the computed features.
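As an illustration, the following minimal R sketch computes such landscape features with the group's flacco package on a BBOB test function provided by smoof (both packages are listed below; current CRAN versions are assumed):

```r
library(smoof)   # collection of test functions for numerical optimization
library(flacco)  # collection of exploratory landscape features

# Draw an initial sample from the decision space of a 2-d BBOB
# test function and evaluate the objective on it.
fn = makeBBOBFunction(dimensions = 2L, fid = 1L, iid = 1L)
X = createInitialSample(n.obs = 500L, dim = 2L,
  control = list(init_sample.lower = -5, init_sample.upper = 5))
y = apply(X, 1, fn)

# Wrap sample and objective values into a feature object and
# compute one of the cheap ELA feature sets.
feat.object = createFeatureObject(X = X, y = y)
calculateFeatureSet(feat.object, set = "ela_meta")
```

Feature sets such as ela_meta are computed from the initial sample alone, i.e., without further function evaluations, which is what makes the features cheap enough for algorithm selection.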
In recent years, the research groups at (formerly) TU Dortmund and currently the University of Münster and LMU Munich have made important steps in this direction for single-objective continuous optimization problems. Links to selected papers are provided below.
In addition, the groups work on combinatorial optimization problems in collaboration with the research groups of Frank Neumann (University of Adelaide, Australia) and Holger Hoos (University of British Columbia, Canada). In particular, a set of meaningful experimental features for characterizing properties of Traveling Salesman Problem (TSP) instances was developed, and it was investigated which properties determine the hardness of TSP instances for specific algorithms. Feature-based per-instance algorithm selection for the TSP is a topic of ongoing research.
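The following sketch illustrates the per-instance selection idea: a classifier maps instance features to the solver that performed best in the benchmark experiments. The feature names and data below are purely hypothetical placeholders; in practice, the features would be computed, e.g., with the salesperson package listed below, and the labels obtained from solver benchmarks.

```r
library(randomForest)

set.seed(1)
# Hypothetical training data: one row per TSP instance, with a few
# made-up instance features and the empirically best solver as label.
tsp = data.frame(
  mean_edge_length = runif(200),
  sd_nn_distance   = runif(200),
  cluster_ratio    = runif(200),
  best_solver      = factor(sample(c("LKH", "EAX"), 200, replace = TRUE))
)

# Fit a classifier mapping features to the best-performing solver ...
model = randomForest(best_solver ~ ., data = tsp)

# ... and predict the presumably best solver for unseen instances.
predict(model, newdata = tsp[1:5, ])
```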
Selected Research Articles and Material
Benchmarking Concepts
- Mersmann, O., Preuss, M., Trautmann, H., Bischl, B., & Weihs, C. (2015). Analyzing the BBOB Results by Means of Benchmarking Concepts. Evolutionary Computation, 23(1), 161–185.
- Mersmann, O., Trautmann, H., Naujoks, B., & Weihs, C. (2010). Benchmarking Evolutionary Multiobjective Optimization Algorithms. In H. Ishibuchi et al. (Eds.), Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pages 1311–1318. IEEE Press, Piscataway, NJ.
Exploratory Landscape Analysis / Algorithm Selection for Continuous Optimization Problems
- Hanster, C., & Kerschke, P. (2017). flaccogui: Exploratory Landscape Analysis for Everyone. In Proceedings of the 19th Annual Conference on Genetic and Evolutionary Computation (GECCO) Companion, pages 1215–1222. ACM.
- Kerschke, P., Preuss, M., Wessing, S., & Trautmann, H. (2015). Detecting Funnel Structures by Means of Exploratory Landscape Analysis. In Proceedings of the 17th Annual Conference on Genetic and Evolutionary Computation (GECCO), Madrid, Spain, pages 265–272. ACM.
- Morgan, R., & Gallagher, M. (2015). Analysing and Characterising Optimization Problems Using Length Scale. Soft Computing, 1–18.
- Muñoz, M. A., Sun, Y., Kirley, M., & Halgamuge, S. K. (2015). Algorithm Selection for Black-Box Continuous Optimization Problems: A Survey on Methods and Challenges. Information Sciences, 317, 224–245.
- Muñoz, M. A., Kirley, M., & Halgamuge, S. K. (2015). Exploratory Landscape Analysis of Continuous Space Optimization Problems Using Information Content. IEEE Transactions on Evolutionary Computation, 19(1), 74–87.
- Kerschke, P., Preuss, M., Hernández, C., Schütze, O., Sun, J., Grimme, C., Rudolph, G., Bischl, B., & Trautmann, H. (2014). Cell Mapping Techniques for Exploratory Landscape Analysis. In Tantar, A., Tantar, E., Sun, J., Zhang, W., Ding, Q., Schütze, O., Emmerich, M., Legrand, P., Del Moral, P., & Coello Coello, C. A. (Eds.), EVOLVE – A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation V, pages 115–131. Advances in Intelligent Systems and Computing, Vol. 288. Springer International Publishing, Cham.
- Bischl, B., Mersmann, O., Trautmann, H., & Preuss, M. (2012). Algorithm Selection Based on Exploratory Landscape Analysis and Cost-Sensitive Learning. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation (GECCO), pages 313–320. ACM, New York, NY, USA.
- Mersmann, O., Bischl, B., Trautmann, H., Preuss, M., Weihs, C., & Rudolph, G. (2011). Exploratory Landscape Analysis. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (GECCO), pages 829–836. ACM, New York, NY, USA.
Exploratory Landscape Analysis / Algorithm Selection for Combinatorial Optimization Problems
- Kerschke, P., Kotthoff, L., Bossek, J., Hoos, H., & Trautmann, H. (2017). Leveraging TSP Solver Complementarity through Machine Learning. Evolutionary Computation, 1–24. MIT Press.
- Kotthoff, L., Kerschke, P., Hoos, H., & Trautmann, H. (2015). Improving the State of the Art in Inexact TSP Solving Using Per-Instance Algorithm Selection. In Dhaenens, C., Jourdan, L., & Marmion, M.-E. (Eds.), Proceedings of the 9th International Conference on Learning and Intelligent Optimization (LION), pages 202–217. Springer.
- Mersmann, O., Bischl, B., Bossek, J., Trautmann, H., Wagner, M., & Neumann, F. (2012). Local Search and the Traveling Salesman Problem: A Feature-Based Characterization of Problem Hardness. In Proceedings of the 6th International Conference on Learning and Intelligent Optimization (LION), Lecture Notes in Computer Science, pages 115–129. Springer.
- Nallaperuma, S., Wagner, M., Neumann, F., Bischl, B., Mersmann, O., & Trautmann, H. (2013). A Feature-Based Comparison of Local Search and the Christofides Algorithm for the Travelling Salesperson Problem. In Proceedings of Foundations of Genetic Algorithms (FOGA), Adelaide, Australia, pages 147–160.
- Mersmann, O., Bischl, B., Trautmann, H., Wagner, M., Bossek, J., & Neumann, F. (2013). A Novel Feature-Based Approach to Characterize Algorithm Performance for the Traveling Salesman Problem. Annals of Mathematics and Artificial Intelligence, 69, 151–182.
R Packages
- FLACCO (official release on CRAN or developer version on GitHub): a collection of exploratory landscape features for black-box optimization problems.
- smoof (official release on CRAN or developer version on GitHub): a large collection of single- and multi-objective test functions that are widely used in the literature for benchmarking numerical optimization algorithms (see the usage sketch below).
- salesperson (developer version on GitHub): functions to compute a large set of features for TSP instances, plus an interface to a wide variety of TSP solvers, including the exact state-of-the-art solver Concorde and the inexact state-of-the-art solvers LKH and EAX.
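As a minimal usage sketch (assuming a current CRAN release of smoof), the test functions carry metadata such as known global optima and can be passed directly to standard R optimizers:

```r
library(smoof)

# A classic multimodal test function from the smoof collection.
fn = makeAckleyFunction(dimensions = 2L)

fn(c(0, 0))          # objective value at the known global optimum
getGlobalOptimum(fn) # smoof stores known optima as metadata

# smoof functions are plain R functions, so they plug directly
# into standard optimizers such as stats::optim.
optim(par = c(2, 2), fn = fn, method = "Nelder-Mead")
```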