Collaborative Information Seeking/Retrieval (CIS/CIR) raises several challenges in search behavior analysis, retrieval model formalization, and interface design. However, the major issue of evaluation in CIS/CIR remains underexplored. The goal of this workshop is to investigate the evaluation challenges in CIS/CIR, with the hope of building, in a collaborative fashion, standardized evaluation frameworks, methodologies, and task specifications that would foster and grow the research area. In recent years, and particularly since 2005, CIS and CIR have become emerging topics addressed at several IR and IS venues, including the CIKM and SIGIR conferences. While the potential of collaboration has been highlighted with respect to individual settings, other challenges remain and need to be thoroughly explored. Although most experimental evaluations have aimed to highlight the synergic effect of the proposed contributions, there is a pressing need to discuss what should be evaluated in terms of collaboration (e.g., cognitive effort, mutually beneficial goal satisfaction, collective relevance, …). Moreover, no standard evaluation framework exists for CIS/CIR comparable to those established for ad hoc information retrieval through evaluation campaigns such as TREC, INEX, and CLEF. Our workshop would both interest and benefit from researchers whose complementary expertise covers all aspects of evaluation in CIS and CIR.
Workshop date: October 23, 2015