Overview

BOCIA, the International Workshop on Benchmarking of Computational Intelligence Algorithms, part of the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), cordially invites the submission of original and unpublished research papers.

Computational Intelligence (CI) is a large and rapidly expanding field that attracts growing interest from both academia and industry. It comprises a wide and ever-growing variety of optimization and machine learning algorithms, which in turn are applied to an even wider and faster-growing range of problem domains. For each of these domains and application scenarios, we want to pick the best algorithm; in fact, we want to do more and improve upon the best algorithm. This requires a deep understanding of the problem at hand, the performance of the available algorithms on that problem, the features that make instances of the problem hard for these algorithms, and the parameter settings for which the algorithms perform best. Such knowledge can only be obtained empirically: by collecting data from experiments, analyzing that data statistically, and mining new information from it.
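
As a minimal illustration of this empirical workflow, the Python sketch below compares two toy optimizers (a uniform random search and a simple (1+1)-style hill climber, both invented here purely for illustration) on the 10-dimensional sphere function, then checks the difference with a rank-based statistical test. Nothing in it is prescribed by the workshop; the problem, budget, and algorithms are all assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

DIM, BUDGET, RUNS = 10, 2000, 25
rng = np.random.default_rng(42)

def sphere(x):
    # Classic separable test function, minimum 0 at the origin.
    return float(np.sum(x * x))

def random_search(budget):
    # Baseline: sample uniformly in [-5, 5]^DIM, keep the best value seen.
    return min(sphere(rng.uniform(-5.0, 5.0, DIM)) for _ in range(budget))

def hill_climber(budget):
    # (1+1)-style local search: accept a Gaussian perturbation if it improves.
    x = rng.uniform(-5.0, 5.0, DIM)
    fx = sphere(x)
    for _ in range(budget - 1):
        y = x + rng.normal(0.0, 0.5, DIM)
        fy = sphere(y)
        if fy < fx:
            x, fx = y, fy
    return fx

a = [random_search(BUDGET) for _ in range(RUNS)]
b = [hill_climber(BUDGET) for _ in range(RUNS)]

# Final objective values are rarely normally distributed, so a
# rank-based test is safer here than a t-test.
stat, p = mannwhitneyu(a, b, alternative="two-sided")
print(f"median random search: {np.median(a):.4f}")
print(f"median hill climber:  {np.median(b):.4f}")
print(f"Mann-Whitney U p-value: {p:.3e}")
```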

Benchmarking has been the engine driving research in optimization and machine learning for decades, yet its potential has not been fully explored. Benchmarking the algorithms of Computational Intelligence is an application of Computational Intelligence itself! This workshop aims to bring together experts on the benchmarking of optimization and machine learning algorithms, providing a common forum for them to exchange findings and to explore new paradigms for performance comparison.

Call for Papers

Important Dates

  • Paper submission deadline: 2017-11-15

  • Notification of acceptance: 2017-12-15

  • Camera-ready deadline: 2018-01-15

  • Conference dates: 2018-03-29 to 2018-03-31

  • Registration deadline: 2018-03-31

Scope

The topics of interest for this workshop include, but are not limited to:

  • mining of higher-level information from experimental results

  • modelling of algorithm behaviors and performance

  • visualizations of algorithm behaviors and performance (see the sketch after this list for one common example)

  • statistics for performance comparison (robust statistics, PCA, ANOVA, statistical tests, ROC, …)

  • evaluation of real-world goals such as algorithm robustness, reliability, and implementation issues

  • theoretical results for algorithm performance comparison

  • comparison of theoretical and empirical results

  • new benchmark problems

  • automatic algorithm configuration

  • algorithm selection

  • the comparison of algorithms in "non-traditional" scenarios such as

    • multi- or many-objective domains

    • parallel implementations, e.g., using GPUs, MPI, CUDA, clusters, or running in clouds

    • large-scale problems or problems where objective function evaluations are costly

    • dynamic problems or problems whose objective functions involve randomized simulations or noise

    • deep learning and big data setups

  • comparative surveys with new ideas on

    • dos and don'ts, i.e., best and worst practices, for algorithm performance comparison

    • tools for experiment execution, result collection, and algorithm comparison

    • benchmark sets for certain problem domains and their mutual advantages and weaknesses
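
To make the visualization item above concrete, here is a minimal sketch of one common benchmarking plot: the empirical cumulative distribution of runtimes, i.e., the fraction of runs that reach a target objective value within a given number of function evaluations, in the style popularized by the COCO/BBOB platform. The optimizer, target, and budget below are illustrative assumptions, not something defined by the workshop.

```python
import numpy as np
import matplotlib.pyplot as plt

DIM, BUDGET, RUNS, TARGET = 10, 2000, 25, 1e-1
rng = np.random.default_rng(7)

def sphere(x):
    return float(np.sum(x * x))

def evals_to_target(budget, target):
    # (1+1)-ES with a crude 1/5 success rule for step-size adaptation.
    # Returns the evaluation count at which the target was first
    # reached, or None if the budget ran out.
    x = rng.uniform(-5.0, 5.0, DIM)
    fx = sphere(x)
    sigma, evals = 1.0, 1
    while fx > target and evals < budget:
        y = x + sigma * rng.normal(0.0, 1.0, DIM)
        fy = sphere(y)
        evals += 1
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.5  # grow the step size on success ...
        else:
            sigma *= 0.9  # ... shrink it on failure
    return evals if fx <= target else None

hits = [evals_to_target(BUDGET, TARGET) for _ in range(RUNS)]
budgets = np.arange(1, BUDGET + 1)
# For each budget, the fraction of runs that have already hit the target.
ecdf = [sum(h is not None and h <= t for h in hits) / RUNS for t in budgets]

plt.step(budgets, ecdf, where="post")
plt.xscale("log")
plt.xlabel("function evaluations")
plt.ylabel(f"fraction of runs with f <= {TARGET}")
plt.title("empirical runtime distribution, sphere function, 10D")
plt.show()
```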

