No Free Lunch Theorem and Bayesian probability theory: two sides of the same coin. Some implications for black-box optimization and metaheuristics

23 Nov 2013 · Loris Serafino

Challenging optimization problems, which elude acceptable solution via conventional calculus methods, arise commonly in many areas of industrial design and practice. Hard optimization problems are those that exhibit the following characteristics: a) a high number of independent input variables; b) a very complex or irregular multi-modal fitness landscape; c) computationally expensive fitness evaluation. This paper focuses on some theoretical issues that have strong implications for practice. I stress how an interpretation of the No Free Lunch theorem leads naturally to a general Bayesian optimization framework. The choice of a prior over the space of functions is a critical and inevitable step in every black-box optimization procedure.
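
To make the connection concrete, the sketch below (not taken from the paper) shows the standard Bayesian optimization loop in Python: a Gaussian-process model plays the role of the prior over the space of functions, and an expected-improvement acquisition decides where the expensive black box is evaluated next. The `objective` function, the bounds, and the budget of 15 evaluations are all hypothetical choices for illustration.

```python
# Minimal Bayesian black-box optimization sketch (illustrative, not the paper's code).
# The GP kernel encodes the prior assumptions about the objective; expected
# improvement balances exploring uncertain regions against exploiting the best value.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical expensive, multi-modal black-box function (1-D for clarity).
    return np.sin(3.0 * x) + 0.3 * x ** 2

rng = np.random.default_rng(0)
bounds = (-3.0, 3.0)
X = rng.uniform(*bounds, size=(4, 1))   # small initial design
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

def expected_improvement(x_cand, gp, y_best, xi=0.01):
    # Closed-form EI under the GP posterior (minimization convention).
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    imp = y_best - mu - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

for _ in range(15):
    gp.fit(X, y)
    x_grid = np.linspace(*bounds, 1000).reshape(-1, 1)
    ei = expected_improvement(x_grid, gp, y.min())
    x_next = x_grid[np.argmax(ei)].reshape(1, -1)   # next evaluation point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmin(y)].item(), "best f(x):", y.min())
```

Swapping the Matern kernel for another one changes the prior over functions, and hence the search behavior, which is exactly the point the abstract makes: the prior cannot be avoided, only chosen.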
