About the Event

In recent years, there has been growing interest in algorithms that learn and use continuous representations of words, phrases, or documents in many natural language processing applications. Influential proposals that illustrate this trend include latent Dirichlet allocation, neural network based language models, and spectral methods. These approaches are motivated by the need to improve the generalization power of standard discrete models, to address data sparsity, and to handle wide contexts efficiently. Despite their success, single-word vector space models are limited in that they do not capture compositionality, which prevents them from modeling the semantics of longer phrases, sentences, and documents.

This issue raises several pertinent questions: should word, phrase, and sentence representations be of the same sort? Could different linguistic levels require different modelling approaches? Is compositionality determined by syntax, and if so, how do we learn or define it? Should word representations be fixed and obtained distributionally, or should the encoding be variable? Should word representations be task-specific, or should they be general?
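To make the compositionality gap concrete, here is a minimal Python sketch of the simplest compositional baseline, vector averaging. The three-dimensional word vectors are purely hypothetical hand-made values, not from any trained model; the point is only that averaging dilutes the negation in "not good" instead of modeling it.

import numpy as np

# Hypothetical, hand-made word vectors for illustration only.
word_vectors = {
    "not":  np.array([ 0.1, -0.9,  0.2]),
    "very": np.array([ 0.4,  0.1,  0.3]),
    "good": np.array([ 0.8,  0.7, -0.1]),
}

def compose_additive(words):
    """Average the word vectors: the simplest compositional baseline."""
    return np.mean([word_vectors[w] for w in words], axis=0)

# The phrase vector for "not good" lands between its parts, so the
# negation is merely diluted rather than modeled -- one reason single
# word vector spaces struggle with the semantics of longer phrases.
print(compose_additive(["not", "good"]))
print(compose_additive(["very", "good"]))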

Call for Papers
Important Dates
  • Conference dates: July 31 – August 1, 2015
  • Registration deadline: August 1, 2015

Organizer
Association for Computational Linguistics