Event Overview

Several algorithms and techniques have been proposed and studied to solve multi-objective optimization problems (MOPs). These algorithms are assessed in one of two ways: theoretical analysis or empirical analysis. In theoretical analysis, a principled methodology is used to derive an analytical bound on the (run-time) quality of the solution: after t evaluations (or steps), the quality of the returned solution is measured by a loss/regret function.
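
For concreteness, one common way to instantiate such a regret for a multi-objective problem is through a quality indicator applied to the archive of evaluated points. The sketch below is a minimal illustration under our own assumptions; the indicator I (e.g., hypervolume) and the true Pareto front as reference set are not prescribed by this event:

```latex
% A minimal sketch, assuming minimization, a quality indicator I
% (e.g., hypervolume), and the true Pareto front \mathcal{P}^{\ast}
% as reference; none of these choices is prescribed by the benchmark.
\[
  r(t) \;=\; I(\mathcal{P}^{\ast}) - I(A_t),
  \qquad
  A_t = \{\, f(x_1), \dots, f(x_t) \,\},
\]
% The regret r(t) shrinks toward zero as the archive A_t of the
% first t evaluated points approaches the Pareto front.
```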

Empirical analysis, by contrast, runs the algorithm in experimental simulations on complex problems, giving insight into its practicality and applicability to real-world problems. In this regard, methods proposed to solve MOPs are most often benchmarked on differing sets of problems under arbitrary budgets of function evaluations, which makes results difficult to compare. We are interested in empirically assessing published and novel multi-objective optimization algorithms in a unified (continually updated) framework; a budget-limited evaluation loop is sketched below.
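
As a concrete illustration of budget-limited assessment, here is a minimal, self-contained sketch. The toy bi-objective problem, the random-search baseline, and the budget are our own assumptions, not the competition's suite or interface:

```python
# A minimal, self-contained sketch of budget-limited benchmarking.
# The toy bi-objective problem, the random-search baseline, and the
# budget are illustrative assumptions, not the competition's interface.
import numpy as np

def toy_mop(x):
    """A toy bi-objective problem on X = [0, 1]^2, both objectives minimized."""
    return np.array([x[0], 1.0 - np.sqrt(x[0]) + x[1]])

def random_search(f, lower, upper, budget, rng):
    """Evaluate `budget` uniformly sampled points; return the objective history."""
    history = []
    for _ in range(budget):
        x = rng.uniform(lower, upper)
        history.append(f(x))  # exactly one function evaluation per iteration
    return np.array(history)

rng = np.random.default_rng(0)
history = random_search(toy_mop, np.zeros(2), np.ones(2), budget=100, rng=rng)
# Anytime view: the best value of the first objective after each evaluation.
print(np.minimum.accumulate(history[:, 0]))
```

Recording the whole evaluation history, rather than only the final archive, is what enables anytime comparisons at every budget, which is also the basis of the data profiles described in the scope below.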

Call for Submissions

Important Dates

  • Conference dates: 2016-12-06 to 2016-12-09
  • Draft submission deadline: 2016-08-15
  • Final submission deadline: 2016-10-10
  • Registration deadline: 2016-12-09

Scope

We invite the multi-objective optimization community to test their published or novel algorithms on 100 MOPs reported in the literature whose feasible decision space has simple bound constraints, i.e., problems for which X = [l, u] with l < u. The benchmark validates the efficacy of the algorithms by computing several quality indicators, which are reported in terms of data profiles; a sketch of such a profile follows.
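
As a sketch of how a data profile can be computed, in the spirit of Moré and Wild (2009), who measure budgets in multiples of n_p + 1 evaluations: the convergence test and conventions below are our own assumptions, not this benchmark's exact definitions.

```python
# A minimal sketch of a data profile; the solved/unsolved convention
# and the inputs are illustrative assumptions, not the benchmark's
# exact definition of its quality indicators.
import numpy as np

def data_profile(evals_to_solve, dims, alphas):
    """Fraction of problems solved within alpha * (n_p + 1) evaluations.

    evals_to_solve: evaluations needed to pass the convergence test on
                    each problem (np.inf if never solved within budget).
    dims:           decision-space dimension n_p of each problem.
    alphas:         budgets, in units of n_p + 1 function evaluations.
    """
    evals_to_solve = np.asarray(evals_to_solve, dtype=float)
    dims = np.asarray(dims, dtype=float)
    return np.array([np.mean(evals_to_solve <= a * (dims + 1.0))
                     for a in alphas])

# Example: three 2-D problems; the third is never solved within budget.
print(data_profile([30, 120, np.inf], [2, 2, 2], alphas=[10, 50, 100]))
# -> [0.3333... 0.6666... 0.6666...]
```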

