Nowadays, the abundance of data is changing the way companies do business, the way governments make decisions, the way science is conducted in many fields of knowledge, and the way individuals make everyday decisions such as where to go or what to buy. Over the last decade, tools and techniques have emerged to support massive offline analysis of web-scale datasets on many thousands of computers working as a single facility. However, the total amount of digital data produced, stored, and transmitted around the world continues to grow exponentially, and the wide diversity of data sources and formats (data variety) cannot be handled by traditional systems and techniques, raising new data management challenges. In many areas, applications need to collect data and produce answers with high frequency or low latency, e.g., to raise an alarm or make a decision within a few milliseconds. Moreover, in scalable environments with hundreds or thousands of components, surviving frequent failures is mandatory. Analytic processing and knowledge discovery in such scenarios demand scalable and efficient algorithms, able to handle the complexity and variety of data even under specific constraints (e.g., energy consumption, available memory, computational power, and networking capacity). Furthermore, sensor networks and the Internet of Things open new perspectives in terms of the amount and complexity of data to be managed.
First draft submission deadline: October 22, 2014
Registration deadline: October 23, 2014