Hailed by some as the fourth paradigm of science, data-intensive science has brought a profound transformation to scientific research. Indeed, data-driven discovery has already taken place in many research fields, such as the earth sciences, medical sciences, biology, and physics, to name just a few. A vast volume of scientific data captured by new instruments is expected to become publicly accessible for continued and deeper data analysis. Big Data analytics will lead to many new theories and discoveries, but will also require substantial computational resources in the process. However, many domain sciences still rely largely on traditional experimental paradigms. It is often a major challenge to transform a solution obtained on a standalone server into a massively parallel one running on tens, hundreds, or even thousands of servers. Making the latest advances in software and hardware accessible and usable to domain scientists is a crucial issue, especially in fields that traditionally lack computational and programming expertise but have nonetheless become driving forces of scientific discovery. Fueled by the needs of big data analytics, new computing and storage technologies are also developing rapidly, pushing toward new high-end hardware for big data problems. This new hardware brings new opportunities for performance improvement, but also new challenges. While these technologies have the potential to greatly improve the capabilities of big data analytics, that potential is often not fully realized. Owing to their cost, their sophistication, and limited initial application support, new technologies often remain remote to end users and underutilized in academia for years after their invention. It is therefore very important to make these technologies understood and accessible to data scientists in a timely manner.
Meanwhile, comprehensive analytic software packages and programming environments have become increasingly popular as open-source platforms for data analysis. Most data scientists have experience with small to medium data and are now facing the challenges posed by Big Data. Such software not only provides collections of analytic methods but also has the potential to exploit new hardware transparently, reducing the effort required of end users. For example, R has traditionally been the programming language preferred by data scientists. Recently, members of the R and HPC communities have stepped up to big data with R, producing methods for effectively adapting R to a variety of high-performance and high-throughput computing technologies. In parallel with these developments, a family of software frameworks (e.g., Apache Spark, Airavata) has been developed for executing and managing computational jobs and workflows on distributed computing resources, while providing web-based science gateways that help domain scientists compose, manage, execute, and monitor big data applications and workflows built from these services. This workshop on Advances in Software and Hardware for Big Data to Knowledge Discovery (ASH) aims to connect the latest hardware and software developments with the end users of big data. It focuses on the accessibility and applicability of the latest hardware and software to practical domain problems, and hence directly facilitates domain researchers' data-driven discovery. Topics of discussion include performance evaluation, optimization, and the accessibility and usability of new technologies. Participants will include computer scientists, domain users, service providers, and technology inventors from industry. The workshop will foster direct and productive communication between cyberinfrastructure specialists and data scientists who normally work separately.
2016 Third Workshop on Advances in Software and Hardware for Big Data to Knowledge Discovery (ASH): December 5, 2016, Washington, USA
Second Workshop on Advances in Software and Hardware for Big Data to Knowledge Discovery: October 29, 2015, USA