BOCIA, the International Workshop on Benchmarking of Computational Intelligence Algorithms, part of the Tenth International Conference on Advanced Computational Intelligence (ICACI 2018), cordially invites the submission of original and unpublished research papers.
Computational Intelligence (CI) is a large and expanding field which is rapidly gaining importance, attracting increasing interest from both academia and industry. It includes a wide and ever-growing variety of optimization and machine learning algorithms, which, in turn, are applied to an even wider and faster-growing range of problem domains. For all of these domains and application scenarios, we want to pick the best algorithms. Actually, we want to do more: we want to improve upon the best algorithms. This requires a deep understanding of the problem at hand, the performance of the algorithms available for that problem, the features that make problem instances hard for these algorithms, and the parameter settings under which the algorithms perform best. Such knowledge can only be obtained empirically: by collecting data from experiments, analyzing this data statistically, and mining new information from it.
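As a purely illustrative sketch of such an empirical workflow (not a workshop artifact), the following Python snippet, assuming NumPy and SciPy are available, runs two toy optimizers on a placeholder benchmark function, collects end-of-run results over repeated trials, and compares them with a non-parametric statistical test; all function names, budgets, and parameter choices here are hypothetical.

    # Illustrative sketch: empirically comparing two toy optimizers on a
    # benchmark function and testing the difference statistically.
    import numpy as np
    from scipy import stats

    def sphere(x):
        # Simple benchmark function: minimum value 0 at the origin.
        return float(np.sum(x * x))

    def random_search(dim, budget, rng):
        # Baseline: sample uniformly at random and keep the best value seen.
        best = np.inf
        for _ in range(budget):
            best = min(best, sphere(rng.uniform(-5, 5, dim)))
        return best

    def one_plus_one_es(dim, budget, rng):
        # (1+1) evolution strategy with a fixed mutation strength.
        x = rng.uniform(-5, 5, dim)
        fx = sphere(x)
        for _ in range(budget):
            y = x + rng.normal(0.0, 0.1, dim)
            fy = sphere(y)
            if fy <= fx:
                x, fx = y, fy
        return fx

    rng = np.random.default_rng(1)
    runs, dim, budget = 30, 10, 1000
    results_rs = [random_search(dim, budget, rng) for _ in range(runs)]
    results_es = [one_plus_one_es(dim, budget, rng) for _ in range(runs)]

    # Non-parametric test: do the end-of-run qualities differ significantly?
    stat, p = stats.mannwhitneyu(results_rs, results_es, alternative="two-sided")
    print(f"median RS={np.median(results_rs):.3g}, "
          f"median ES={np.median(results_es):.3g}, p={p:.3g}")

In practice, the same pattern scales to full benchmark suites: many problem instances, many repeated runs, anytime performance data instead of only end-of-run values, and more careful statistics, which is exactly the kind of methodology this workshop addresses.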
Benchmarking has been the engine driving research in optimization and machine learning for decades, yet its potential has not been fully explored. Indeed, benchmarking the algorithms of Computational Intelligence is an application of Computational Intelligence itself! This workshop aims to bring together experts on the benchmarking of optimization and machine learning algorithms and to provide a common forum for exchanging findings and exploring new paradigms for performance comparison.
The topics of interest for this workshop include, but are not limited to:
mining of higher-level information from experimental results
modelling of algorithm behaviors and performance
visualizations of algorithm behaviors and performance
statistics for performance comparison (robust statistics, PCA, ANOVA, statistical tests, ROC, …)
evaluation of real-world goals such as algorithm robustness, reliability, and implementation issues
theoretical results for algorithm performance comparison
comparison of theoretical and empirical results
new benchmark problems
automatic algorithm configuration
algorithm selection
the comparison of algorithms in "non-traditional" scenarios such as
multi- or many-objective domains
parallel implementations, e.g., using GPUs, MPI, CUDA, clusters, or running in clouds
large-scale problems or problems where objective function evaluations are costly
dynamic problems, or problems where the objective functions involve randomized simulations or noise
deep learning and big data setups
comparative surveys with new ideas on
dos and don'ts, i.e., best and worst practices, for algorithm performance comparison
tools for experiment execution, result collection, and algorithm comparison
benchmark sets for certain problem domains and their mutual advantages and weaknesses
Conference dates: March 29–31, 2018