28 / 2021-12-06 17:49:00
Scalable Parallel Static Learning
static learning; parallel acceleration; GPU; multi-core CPU
Final version
Lai Liyang / Shantou University; Chinese Academy of Sciences
Static learning is an algorithm for finding additional implicit implications between gates in a netlist. In automatic test pattern generation (ATPG), the learned implications help recognize conflicts and redundancies early, and thus greatly improve the performance of ATPG. Although ATPG can further benefit from multiple runs of incremental or dynamic learning, this is only feasible when the learning process is fast enough. In this paper, we study speeding up static learning through parallelization on heterogeneous computing platforms that include multi-core microprocessors (CPUs) and graphics processing units (GPUs). We discuss the advantages and limitations of each of these architectures. With their specific features in mind, we propose two different parallelization strategies tailored to multi-core CPUs and GPUs, respectively. The speedup and performance scalability of the two proposed parallel algorithms are analyzed. To the best of our knowledge, this is the first time that parallel static learning has been studied in the literature.
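To make the idea of static learning concrete, the sketch below shows its core step: learning indirect implications via the law of contraposition. If assigning signal s = v and running logic implication forces some other signal t = w, then t = ¬w must imply s = ¬v. The toy netlist, gate set, and function names are illustrative assumptions only, not the authors' implementation.

```python
from collections import defaultdict

# Toy two-gate netlist: output signal -> (gate type, input signals).
# The circuit and its names are hypothetical, for illustration only.
NETLIST = {
    "c": ("AND", ["a", "b"]),
    "d": ("OR",  ["c", "e"]),
}

def imply(assignments):
    """Propagate constant values forward to a fixed point (direct implications)."""
    values = dict(assignments)
    changed = True
    while changed:
        changed = False
        for out, (kind, ins) in NETLIST.items():
            vals = [values.get(i) for i in ins]
            new = None
            if kind == "AND":
                if 0 in vals:                    # a controlling 0 forces the output to 0
                    new = 0
                elif all(v == 1 for v in vals):  # all inputs 1 force the output to 1
                    new = 1
            elif kind == "OR":
                if 1 in vals:                    # a controlling 1 forces the output to 1
                    new = 1
                elif all(v == 0 for v in vals):  # all inputs 0 force the output to 0
                    new = 0
            if new is not None and values.get(out) != new:
                values[out] = new
                changed = True
    return values

def static_learning():
    """For every signal assignment s = v, record the contrapositive of each
    implied assignment t = w as a learned implication: (t = !w) -> (s = !v)."""
    learned = defaultdict(set)
    signals = set(NETLIST) | {i for _, ins in NETLIST.values() for i in ins}
    for s in sorted(signals):
        for v in (0, 1):
            for t, w in imply({s: v}).items():
                if t != s:
                    learned[(t, 1 - w)].add((s, 1 - v))
    return learned

if __name__ == "__main__":
    for (t, w), consequences in sorted(static_learning().items()):
        print(f"{t}={w} implies {sorted(consequences)}")
```

In this sketch each outer (s, v) assignment is implied independently of the others; that independence is the property that makes static learning a natural candidate for the CPU- and GPU-based parallelization the abstract describes.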
Important Dates
  • Conference Dates: December 11–12, 2021
  • Registration Deadline: August 18, 2021

Hosted by
China Computer Federation (CCF)
Organized by
CCF Technical Committee on Fault-Tolerant Computing
School of Software Engineering, Tongji University