Several algorithms and techniques have been proposed and studied to solve such problems. In this context, these algorithms are assessed in one of two ways: theoretical or empirical analysis. In theoretical analysis, a principled methodology is used to derive an analytical bound on the quality of the solution as a function of run time: after t evaluations/steps, the quality of the returned solution is measured by a loss/regret measure.
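As a minimal illustration (our notation; the benchmark does not prescribe this particular measure), such a regret can be written as the indicator gap to the true Pareto front, e.g., using the hypervolume $\mathrm{HV}$ with respect to a fixed reference point:

$$ r(t) \;=\; \mathrm{HV}\big(\mathcal{P}^{*}\big) \;-\; \mathrm{HV}\big(P_t\big), $$

where $P_t$ is the set of non-dominated objective vectors found after $t$ evaluations and $\mathcal{P}^{*}$ is the true Pareto front; $r(t) \to 0$ as $t$ grows indicates convergence to the front.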
Alternatively, empirical analysis employs experimental simulation of the algorithm on complex problems, providing insight into the algorithm's practicality and applicability to real-world problems. In this regard, methods proposed to solve MOPs are most often benchmarked on differing sets of problems under arbitrary budgets of function evaluations. We are interested in empirically assessing published and novel multi-objective optimization algorithms within a unified, continually updated framework.
We invite the multi-objective optimization community to test their published or novel algorithms on 100 MOPs reported in the literature whose feasible decision space has simple bound constraints, i.e., problems for which X = [l, u] with l < u. The benchmark assesses the efficacy of the algorithms by computing several quality indicators, which are reported in terms of data profiles.
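As a sketch of what such an evaluation could look like (the problem, bounds, budget, and indicator below are illustrative assumptions, not the benchmark's actual interface), the following snippet runs a random-search baseline on a bound-constrained bi-objective problem and scores the resulting non-dominated front with the 2-D hypervolume indicator:

```python
import numpy as np

def schaffer_n1(x):
    """Bi-objective Schaffer N.1: minimize f1 and f2 over X = [l, u]."""
    return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])

def non_dominated(points):
    """Keep only points not strictly dominated by another (minimization)."""
    keep = []
    for i, p in enumerate(points):
        if not any(np.all(q <= p) and np.any(q < p)
                   for j, q in enumerate(points) if j != i):
            keep.append(p)
    return np.array(keep)

def hypervolume_2d(front, ref):
    """2-D hypervolume of a non-dominated front w.r.t. reference point
    `ref`; assumes all front points lie inside the reference box."""
    pts = front[np.argsort(front[:, 0])]      # sort by f1 ascending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # slab between f2 levels
        prev_f2 = f2
    return hv

rng = np.random.default_rng(0)
l, u = np.array([-5.0]), np.array([5.0])      # simple bound constraints, l < u
budget = 100                                  # arbitrary evaluation budget

# Random-search baseline: sample within [l, u], record objective vectors.
samples = rng.uniform(l, u, size=(budget, 1))
objs = np.array([schaffer_n1(x) for x in samples])

front = non_dominated(objs)
print("hypervolume:", hypervolume_2d(front, ref=np.array([30.0, 30.0])))
```

In the actual benchmark, several quality indicators would be recorded over the evaluation budget and aggregated across the 100 problems into data profiles, rather than the single score printed above.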