About the Workshop
Evaluation is a cardinal issue in recommender systems; as in almost any other technical discipline, it largely defines the problems the field needs to solve and, hence, leads the way for algorithmic research and development in the community. Yet considerable disparity remains in evaluation methods, metrics, and experimental designs, along with a significant mismatch between evaluation methods in the lab and what constitutes an effective recommendation for real users and businesses. This workshop aims to provide an informal forum to tackle such issues and move towards better understood and commonly agreed evaluation methodologies, so that the efforts of the academic community can be focused on directions that are meaningful and relevant to real-world developments.

REDD 2014 gathers researchers and practitioners interested in better understanding the unmet needs of the field in terms of evaluation methodologies and experimental practices. Its main goal is to provide an informal setting for discussing and exchanging ideas, experiences, and viewpoints. REDD seeks to identify and better understand the current gaps in recommender system evaluation methodologies, help lay directions for progress in addressing them, and foster the consolidation and convergence of experimental methods and practices.
Call for Papers

Important Dates

2014-07-28
Paper submission deadline

Topics of Interest

We invite the submission of papers reporting original research, studies, advances, experiences, or work in progress in the scope of recommender system utility evaluation. The topics the workshop seeks to address include, though need not be limited to, the following:

  • Recommendation quality dimensions
      • Effectiveness, accuracy, ranking quality (see the metric sketch after this list)
      • Novelty, diversity, unexpectedness, serendipity
      • Utility, gain, cost, risk, benefit
      • Robustness, confidence, coverage, ease of use, persuasiveness, etc.
      • Matching metrics to tasks, needs, and goals
      • User satisfaction, user perception, human factors
  • Business-oriented evaluation
      • Multiple objective optimization, user engagement
      • Quality of service, quality of experience
  • Evaluation methodology and experimental design
      • Definition and evaluation of new metrics, studies of existing ones
      • Adaptation of methodologies from related fields: IR, Machine Learning, HCI, etc.
      • Evaluation theory
  • Practical aspects of evaluation
      • Offline and online experimental approaches
      • Simulation-based evaluation
      • Datasets and benchmarks
      • Validation of metrics
      • Efficiency and scalability
      • Open evaluation platforms and infrastructures
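As an illustration of the ranking-quality metrics named in the first topic group, here is a minimal sketch, in Python, of nDCG@k computed offline against held-out relevance judgments. It is not part of the call itself; the function name, inputs, and example data are hypothetical.

```python
import math
from typing import Dict, List

def ndcg_at_k(ranked_items: List[str], relevance: Dict[str, float], k: int = 10) -> float:
    """Normalized discounted cumulative gain at rank k for one user's list.

    `ranked_items` is the system's recommendation ranking; `relevance` maps
    items to graded relevance judgments (e.g. held-out ratings). Both are
    hypothetical inputs used only for illustration.
    """
    def dcg(gains: List[float]) -> float:
        # Standard log2 position discount: rank 1 -> log2(2), rank 2 -> log2(3), ...
        return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

    gains = [relevance.get(item, 0.0) for item in ranked_items[:k]]
    ideal = sorted(relevance.values(), reverse=True)[:k]
    return dcg(gains) / dcg(ideal) if any(ideal) else 0.0

# Example: a 3-item recommendation list scored against held-out ratings.
print(ndcg_at_k(["a", "b", "c"], {"a": 3.0, "c": 1.0, "d": 2.0}, k=3))  # ~0.735
```

Metrics of this kind capture only accuracy-oriented quality; the remaining topic groups (novelty and diversity, business-oriented evaluation, online experiments) concern aspects that such an offline computation does not measure.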
Important Dates
  • October 10, 2014
    Workshop date
  • July 28, 2014
    Paper submission deadline
  • October 10, 2014
    Registration deadline

Organizer
Association for Computing Machinery (ACM)