About the Workshop

Intelligent systems built upon complex machine learning and data mining models (e.g., deep neural networks) have shown superior performance on a variety of real-world applications. However, their effectiveness is limited by the difficulty of interpreting how their predictions are produced. In contrast, the results of many simple or shallow models, such as rule-based or tree-based methods, are explainable but often not sufficiently accurate. Model interpretability enables intelligent systems to be clearly understood, properly trusted, effectively managed, and widely adopted by end users. Interpretations are necessary in applications such as medical diagnosis, fraud detection, and object recognition, where valid reasons are significantly helpful, if not essential, before actions are taken based on predictions. This workshop focuses on interpreting the prediction mechanisms or results of complex computational models for data mining by taking advantage of simple models that are easier to understand. We wish to exchange ideas on recent approaches to the challenges of model interpretability, identify emerging fields of application for such techniques, and provide opportunities for relevant interdisciplinary research and projects.
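The central idea above, approximating a complex model with a simpler, human-readable one, is often realized as a global surrogate model. The following is a minimal sketch of that technique; the dataset, model choices, and depth limit are illustrative assumptions, not prescribed by the workshop:

```python
# Global surrogate sketch: train an interpretable decision tree to mimic
# the *predictions* of a "black-box" model (here a random forest stands
# in for any complex learner such as a deep neural network).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# 1. Fit the complex, hard-to-interpret model on the true labels.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Fit a shallow tree on the black box's predictions, not the labels,
#    so the tree approximates the black box's decision surface.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. "Fidelity" measures how closely the simple model tracks the complex one.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")

# The surrogate's decision rules are directly readable by end users.
print(export_text(surrogate, max_depth=2))
```

A high-fidelity surrogate gives an approximate, inspectable account of the black box; when fidelity is low, its rules should not be trusted as an explanation.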

Call for Papers

Topics of Interest

Topic areas for the workshop include (but are not limited to) the following:

  • Interpretable machine learning

  • Interpretable deep learning

  • Information fusion and knowledge transfer

  • Anomaly detection with interpretability

  • Healthcare analytics

  • Social computing

  • Computer vision

  • Human-centric computing

  • Visual analytics

  • Human-computer interaction in data mining

  • Interactive modeling between humans and intelligent systems

Important Dates

  • Conference dates: November 6–10, 2017
  • Registration deadline: November 10, 2017

Organizer

Association for Computing Machinery (ACM)