As high-performance computing hardware incorporates increasing levels of heterogeneity, hierarchical organization, and complexity, parallel programming techniques must grow either in their own complexity or in their ability to abstract that complexity away. The concurrent development of multi- and many-core processors, deep memory hierarchies, and accelerators, and the variety of ways to combine these, make the low-level language route unmanageable for domain experts tasked with developing applications. The technologies that a competent developer might be expected to master and combine include MPI plus CUDA, OpenMP, and OpenACC, a combination most commonly denoted MPI + X. This approach inherently saddles the developer with low-level details that might better be handled by high-level abstractions.
Higher-level parallel programming models offer rich sets of abstractions that feel natural in the intended applications. Such languages and tools include parallel programming languages (Fortran, UPC, Julia), systems for large-scale data processing and analytics (Spark, TensorFlow, Dask), and frameworks and libraries that extend existing languages (Charm++, UPC++, Coarray C++, HPX, Legion, Global Arrays). While there are tremendous differences between these approaches, all strive to support better programmer abstractions for concerns such as data parallelism, task parallelism, dynamic load balancing, and data placement across the memory hierarchy.
This workshop will bring together applications experts who will present concrete, practical examples of using such alternatives to MPI, illustrating the benefits of high-level approaches to scalable programming. The workshop expands upon two similar predecessor workshops, PAW16 and PAW17, by broadening the theme beyond partitioned global address space languages. We invite you to take part in the Parallel Applications Workshop, Alternatives to MPI (PAW-ATM), and to join this vibrant and diverse community of researchers and developers.
The scope of the PAW-ATM workshop is to provide a forum for case studies of higher-level programming models used as MPI alternatives in real applications, as a means of better understanding their practical strengths and limitations. We encourage the submission of papers and talks detailing such applications, including characterizations of scalability and performance, of expressiveness and programmability, as well as any downsides or areas for improvement in existing higher-level programming models. In addition to informing other application programmers about the potential available through MPI alternatives, the workshop is designed to communicate these experiences to compiler vendors, library developers, and system architects in order to achieve broader support for high-level approaches to scalable programming.
We also specifically encourage submissions covering big data analytics, deep learning, and other novel and emerging application areas beyond well-established HPC domains.
Topics include, but are not limited to:
Workshop date: November 16, 2018