Scalable Parallel Static Learning
ID: 89 · Access: attendees only · Updated: 2021-12-07 10:19:43 · Views: 201 · Oral presentation

Presentation start: December 12, 2021, 10:00 (Asia/Shanghai)

Duration: 15 min

Venue: [S2] Paper Presentation Room 2, [S2.2] Session 2: Integrated Circuit Testing


Abstract
Static learning is an algorithm for finding additional implicit implications between gates in a netlist. In automatic test pattern generation (ATPG), the learned implications help recognize conflicts and redundancies early, and thus greatly improve the performance of ATPG. Though ATPG can benefit further from multiple runs of incremental or dynamic learning, this is feasible only when the learning process is fast enough. In this paper, we study speeding up static learning through parallelization on heterogeneous computing platforms, which include multi-core microprocessors (CPUs) and graphics processing units (GPUs). We discuss the advantages and limitations of each of these architectures. With their specific features in mind, we propose two different parallelization strategies tailored to multi-core CPUs and GPUs, respectively. Speedup and performance scalability of the two proposed parallel algorithms are analyzed. To the best of our knowledge, this is the first study of parallel static learning in the literature.
Keywords
static learning; parallel acceleration; GPU; multi-core CPU
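The core idea behind static learning, as described in the abstract, can be illustrated with a minimal sequential sketch (the toy netlist and all names here are hypothetical, and this is not the paper's parallel algorithm): a trial value is assigned to each signal, direct implications are propagated forward to a fixed point, and each non-trivial consequence yields a learned implication by the contrapositive law.

```python
# Illustrative sketch of static learning on a toy netlist (hypothetical example,
# not the paper's parallel algorithm). Gates: output -> (type, input_a, input_b).
GATES = {
    "d": ("AND", "a", "b"),
    "e": ("OR",  "d", "c"),
}

def imply(assign):
    """Propagate direct (forward) implications until a fixed point.
    assign maps signal name -> 0/1; unknown signals are absent."""
    changed = True
    while changed:
        changed = False
        for out, (typ, a, b) in GATES.items():
            va, vb = assign.get(a), assign.get(b)
            if typ == "AND":
                # AND output is 0 if any input is 0, 1 only if both inputs are 1.
                val = 0 if 0 in (va, vb) else (1 if (va == 1 and vb == 1) else None)
            else:  # OR
                # OR output is 1 if any input is 1, 0 only if both inputs are 0.
                val = 1 if 1 in (va, vb) else (0 if (va == 0 and vb == 0) else None)
            if val is not None and assign.get(out) != val:
                assign[out] = val
                changed = True
    return assign

def static_learn():
    """Try every (signal, value) assignment; each derived consequence
    (other = w) gives the learned contrapositive (other != w  =>  signal != value),
    which ATPG can later use to detect conflicts early."""
    signals = set(GATES) | {s for _, a, b in GATES.values() for s in (a, b)}
    learned = []
    for sig in sorted(signals):
        for v in (0, 1):
            for other, w in imply({sig: v}).items():
                if other != sig:
                    learned.append(((other, 1 - w), (sig, 1 - v)))
    return learned
```

For example, assigning a=0 forces d=0 through the AND gate, so the sketch learns the backward implication d=1 => a=1, which forward implication alone would not discover. The paper's contribution is parallelizing this per-assignment learning loop, whose iterations are independent, across CPU cores and GPU threads.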
Presenter
Lai Liyang
Shantou University; Chinese Academy of Sciences

Authors
Lai Liyang, Shantou University; Chinese Academy of Sciences
Important Dates
  • Conference dates: December 11–12, 2021
  • Registration deadline: August 18, 2021

Host
China Computer Federation (CCF)
Organizers
CCF Technical Committee on Fault-Tolerant Computing
School of Software Engineering, Tongji University