About the Workshop

This full-day workshop, co-located with the HCOMP 2018 conference, aims to bring together a latent community of researchers who treat disagreement (and subjectivity and ambiguity) as signal, rather than noise. The workshop will explore how to frame, induce, and derive value from uncertainty, ambiguity, and disagreement. It will include invited talks, short technical talks, and a discussion of medium- and long-term challenges to fuel future work. We invite applicants from fields such as computer science, information science, law, communication science, and political science, as well as those chiefly working on human computation and crowdsourcing. Solutions to these problems will benefit from a diverse set of perspectives.

Ambiguity creates uncertainty in practically every facet of crowdsourcing. This includes the information presented to workers as part of a task, the instructions for what to do with it, and the information they are asked to provide. Beyond lexical ambiguities, ambiguity can result from missing details, contradictions, and subjectivity. Subjectivity may stem from differences in cultural context, life experiences, or individual perception of hard-to-quantify properties. All of these can leave workers with conflicting interpretations, leading to results that requesters (including the end-users of crowd-powered systems) would regard as "wrong".

Historically, the human computation community has largely attributed disagreement to low-quality workers. This led to mathematical approaches intended to minimize the supposed noise. Strategies included aggregation (e.g., majority voting, expectation maximization), linguistic approaches, statistical filtering, and incentive design, among many others. These are all executed after the data is collected.
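The simplest of the post-hoc aggregation strategies mentioned above, majority voting, can be sketched in a few lines. The function name and data layout here are illustrative only, not part of the workshop's materials:

```python
from collections import Counter

def aggregate_majority(labels_by_item):
    """Collapse multiple worker labels per item into the single most
    common label, discarding all disagreement as presumed 'noise'."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_by_item.items()}

votes = {
    "img1": ["cat", "cat", "dog"],   # 2-1 split: the minority view is dropped
    "img2": ["dog", "dog", "dog"],   # unanimous
}
print(aggregate_majority(votes))  # {'img1': 'cat', 'img2': 'dog'}
```

Note how the 2-1 split on "img1" is resolved by simply discarding the minority label, which is precisely the information this workshop argues may carry signal.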

A second strategy, drawing on interaction design and computer-supported collaborative work, is to refine task designs until disagreement is minimized. This is akin to the methodology of task refinement using inter-rater reliability employed in linguistic annotation and social content analysis (Krippendorff, 2013). The focus here is on minimizing the perceived ambiguity or subjectivity before the data is collected.
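Inter-rater reliability of the kind Krippendorff describes is typically measured with a chance-corrected agreement coefficient. As an illustrative sketch (not the workshop's own tooling), Cohen's kappa for the two-rater case:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected).
    A simpler two-rater relative of Krippendorff's alpha."""
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label distribution.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[label] * cb[label] for label in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

a = ["pos", "pos", "neg", "neg", "pos"]
b = ["pos", "neg", "neg", "neg", "pos"]
print(round(cohens_kappa(a, b), 3))  # 0.615
```

Under a task-refinement methodology, a requester would iterate on instructions until such a coefficient rose above some threshold, treating low values as a defect of the task rather than as signal.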

Organizing Committee

Lora Aroyo, Vrije Universiteit Amsterdam
Anca Dumitrache, Vrije Universiteit Amsterdam
Praveen Paritosh, Google
Alex Quinn, Purdue University
Chris Welty, Google

Call for Papers

Important Dates

  • June 6, 2018: Submission deadline
  • June 13, 2018: Acceptance notification

We invite and encourage interdisciplinary submissions from the broad spectrum of crowdsourcing and human computation research, in fields and application areas dealing with (but not limited to) the topics below:

  • Interaction/relation between disagreement, ambiguity and subjectivity
  • Costs and forces introduced by ambiguity
  • Designing tasks with high subjectivity and low inter-rater reliability (e.g., semantic, linguistic, commonsense, or moral judgments)
  • Better metrics for characterizing disagreement (over traditional inter-rater reliability)
  • Ambiguity in human computation task design, how to identify it and what to do about it
  • Theoretical ambiguity-aware frameworks for collecting data
  • Teasing apart different sources of ambiguity
  • Best practices for collecting subjective data
  • Benchmarks and datasets for studying ambiguity
  • Grand challenges that will further our understanding of subjectivity, ambiguity and disagreement
  • Disagreement, ambiguity, subjectivity and their representation and interaction in different content modalities, e.g. text, images, videos, audio
  • Interdisciplinary perspectives on disagreement, ambiguity and subjectivity, e.g., law, political science, humanities

Author Guidelines

Submissions may present ideas, results from experimental studies, methodologies, work-in-progress, and/or applications of systems that explore the topics of subjectivity, ambiguity, and disagreement in crowdsourcing and human computation in relation to computational systems, applications, or services. All submissions should follow the main conference formatting guidelines as follows:

  • Paper Length. Long papers of up to 8 pages and short papers & demos of up to 4 pages may be submitted. References do not count toward the page limit.
  • Formatting. Submissions must be formatted in AAAI two-column, camera-ready style. (See the AAAI 2018 Author Kit). Papers must be in trouble-free, high-resolution PDF format, formatted for US Letter (8.5″ x 11″) paper, using Type 1 or TrueType fonts.
  • Supplemental Materials. Authors are invited to provide supplemental materials such as data, code, executables, videos, etc.
  • Non-archival. Accepted full papers will be published on the SAD2018 workshop website. However, submissions are not intended to be considered archival, or to preclude submission of the reported work to archival journals.
  • Encore papers. We also invite submissions of "encore papers", i.e., relevant work that has previously been published, although not at HCOMP 2018. These will be presented at the workshop in the same manner, but they will not be posted on the workshop website (we will instead link to the original source).
  • Presentation. Each accepted long paper will be presented in a 15-minute presentation (including questions) at the workshop.
  • Poster. Authors of all accepted papers will be asked to submit a poster summarizing their submission.
  • Submission. Papers should be submitted via EasyChair.
  • At least one author of each accepted paper must register for the conference to present the work or acceptance will be withdrawn.
Important Dates

  • July 2–5, 2018: Conference dates
  • June 6, 2018: Submission deadline
  • June 13, 2018: Acceptance notification
  • July 5, 2018: Registration deadline
