Computing in large-scale systems is shifting away from the traditional compute-centric model, used successfully for many decades, toward one that is much more data-centric. This transition is driven by the evolving nature of computing workloads: no longer dominated by the execution of arithmetic and logic calculations, they are instead dominated by large data volumes and the cost of moving data to the locations where computations are performed. Data movement impacts performance, power efficiency, and reliability, three fundamental properties of a system.

These trends are leading to changes in the computing paradigm, in particular the notion of moving computation to the data in a so-called Near-Data Processing approach, which seeks to perform computations in the most appropriate location depending on where data resides and what needs to be extracted from that data. Examples already exist in systems that perform some computations close to disk storage, leveraging the data stream coming from the disks and filtering it so that only useful items are transferred for processing in other parts of the system. Conceptually, the same principle can be applied throughout a system by placing computing resources close to where data is located and decomposing applications so that they can exploit such a distributed and potentially heterogeneous computing infrastructure.

This workshop is intended to bring together experts from academia and industry to share advances in the development of Near-Data Processing system principles, with emphasis on large-scale systems. This is the third edition of the workshop; the first two editions were held at MICRO 2013 and 2014 and had over 60 attendees each. The workshop will consist of submitted papers and invited talks.