Scalable Parallel Static Learning
Paper ID: 89
Access: attendees only
Updated: 2021-12-07 10:19:43
Oral presentation
Abstract
Static learning is an algorithm for finding additional implicit implications between gates in a netlist. In automatic test pattern generation (ATPG), the learned implications help recognize conflicts and redundancies early and thus greatly improve ATPG performance. Although ATPG can benefit further from multiple runs of incremental or dynamic learning, this is feasible only when the learning process is fast enough. In this paper, we study speeding up static learning through parallelization on heterogeneous computing platforms that include multi-core microprocessors (CPUs) and graphics processing units (GPUs). We discuss the advantages and limitations of each of these architectures. With their specific features in mind, we propose two different parallelization strategies, tailored to multi-core CPUs and to GPUs respectively. The speedup and performance scalability of the two proposed parallel algorithms are analyzed. To the best of our knowledge, this is the first time parallel static learning has been studied in the literature.
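To illustrate the idea of static learning described above (not the paper's parallel algorithms), the following is a minimal sketch: assert a value on one signal of a toy netlist, run forward implication to a fixed point, and record the contrapositive of each implied value as a learned implication. The netlist, gate set, and function names here are hypothetical, chosen only for illustration.

```python
# Hypothetical two-gate netlist: d = AND(a, b); e = OR(d, c)
GATES = {
    "d": ("AND", ["a", "b"]),
    "e": ("OR",  ["d", "c"]),
}

def forward_implications(assignments):
    """Propagate known signal values through the netlist until a fixed point."""
    vals = dict(assignments)
    changed = True
    while changed:
        changed = False
        for out, (op, ins) in GATES.items():
            if out in vals:
                continue
            known = [vals[i] for i in ins if i in vals]
            if op == "AND" and 0 in known:
                vals[out] = 0          # any controlling 0 forces the AND output
            elif op == "OR" and 1 in known:
                vals[out] = 1          # any controlling 1 forces the OR output
            elif len(known) == len(ins):
                vals[out] = int(all(known) if op == "AND" else any(known))
            else:
                continue
            changed = True
    return vals

def static_learn(signal, value):
    """If (signal = value) implies (t = w), record the contrapositive
    (t = !w) -> (signal = !value) as a learned implication."""
    implied = forward_implications({signal: value})
    return [((t, 1 - w), (signal, 1 - value))
            for t, w in implied.items() if t != signal]

# Setting a = 0 forces d = 0; the learned contrapositive is d = 1 -> a = 1,
# an implication ATPG can use to prune its search early.
print(static_learn("a", 0))  # [(('d', 1), ('a', 1))]
```

The learned implication `d = 1 -> a = 1` is "implicit" in the sense that no single gate states it directly; it only emerges from propagation, which is exactly the kind of fact the abstract says helps ATPG recognize conflicts early.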
Keywords
static learning; parallel acceleration; GPU; multi-core CPU
Author
Lai Liyang
Shantou University; Chinese Academy of Sciences