IPDPS is an international forum for engineers and scientists from around the world to present their latest research findings in all aspects of parallel computation. In addition to technical sessions of submitted paper presentations, the meeting offers workshops, tutorials, and commercial presentations & exhibits.
IPDPS represents a unique international gathering of computer scientists from around the world. Now, more than ever, we prize this annual meeting as a testament to the strength of international cooperation in seeking to apply computer science technology to the betterment of our global village.
GENERAL CO-CHAIRS
David Bader (New Jersey Institute of Technology, USA)
Aparna Chandramowlishwaran (University of California, Irvine, USA)
PROGRAM CHAIR
Karen Karavanic (Portland State University, USA)
WORKSHOPS CHAIR AND VICE CHAIR
Erik Saule (University of North Carolina Charlotte, USA)
Jaroslaw (Jaric) Zola (University at Buffalo, USA)
PHD FORUM CO-CHAIRS
Sanjukta Bhowmick (University of North Texas, USA)
Akshaye Dhawan (Bloomberg LP, USA)
PROCEEDINGS CHAIR
Kyle Chard (University of Chicago, USA)
PROCEEDINGS VICE-CHAIR: Workshops
Zhuozhao Li (University of Chicago, USA)
SOCIAL MEDIA CHAIR
Fernanda Foertter (The BioTeam, Oak Ridge, Tennessee, USA)
FINANCE CHAIR
Bill Pitts (Retired IEEE Volunteer, USA)
PRODUCTION CHAIR
Sally Jelinek Westrom (EDA, Inc., USA)
TCPP CHAIR
Anne Benoit (ENS Lyon, France)
Topics of interest include but are not limited to the following topic areas:
Algorithms: Parallel and distributed computing theory and algorithms. Design and analysis of novel numerical and combinatorial parallel algorithms; reputation and incentive compatible design for distributed protocols and for distributed resource management; communication and synchronization on parallel and distributed systems; parallel algorithms handling power, mobility, and resilience; algorithms for cloud computing; algorithms for edge and fog computing; machine learning algorithms; domain-specific parallel and distributed algorithms; randomization in distributed algorithms and block-chain protocols.
Experiments: Experiments and practice in parallel and distributed computing. Design and experimental evaluation of applications of parallel and distributed computing in simulation and analysis; experiments on the use of novel commercial or research architectures, accelerators, neuromorphic and quantum architectures, and other non-traditional systems; performance modeling and analysis of parallel and distributed systems; innovations made in support of large-scale infrastructures and facilities; methods for and experiences allocating and managing system and facility resources.
Programming Models & Compilers: Programming models, compilers and runtimes for parallel and distributed applications and systems. Parallel programming paradigms, models and languages; compilers, runtime systems, programming environments and tools for the support of parallel programming; parallel software development and productivity.
System Software: System software and middleware for parallel and distributed systems. System software support for scientific workflows (including in-situ workflows); storage and I/O systems; system software for resource management, job scheduling, and energy-efficiency; frameworks targeting cloud and distributed systems; system software support for accelerators and heterogeneous HPC computing systems; interactions between the OS, runtime, compiler, middleware, and tools; system software support for fault tolerance and resilience; containers and virtual machines; system software supporting data management, scalable data analytics, machine learning, and deep learning; specialized operating systems and runtime systems for high performance computing and exascale systems; system software for future novel computing platforms including quantum, neuromorphic, and bio-inspired computing.
Architecture: Architectures for instruction-level and thread-level parallelism; manycore, multicores, accelerators, domain-specific and special-purpose architectures, reconfigurable architectures; memory technologies and hierarchies; volatile and non-volatile emerging memory technologies, solid-state devices; exascale system designs; data center and warehouse-scale architectures; novel big data architectures; network and interconnect architectures; emerging technologies for interconnects; parallel I/O and storage systems; power-efficient and green computing systems; resilience, security, and dependable architectures; performance modeling and evaluation; emerging trends for computing, machine learning, approximate computing, quantum computing, neuromorphic computing and analog computing.
Multidisciplinary: Papers that cross the boundaries of the tracks listed above and/or address the application of parallel and distributed computing concepts and solutions to other areas of science and engineering are encouraged and can be submitted to the multidisciplinary track. Papers focused on translational research are particularly encouraged. Contributions should either target two or more core areas of parallel and distributed computing, or advance the use of parallel and distributed computing in other areas of science and engineering. During the submission of multidisciplinary papers, authors should indicate the areas of focus of their paper.
IMPORTANT DATES
Draft submission deadline: June 17, 2021
Registration deadline: June 21, 2021