About the Workshop

Seeing the world from diverse perspectives gives us a unique opportunity to understand it better. Today, we live in a world where devices ranging from first-person vision systems (such as smartphones) to space-borne imaging platforms (such as satellites) sense the world around us from wildly different perspectives and with diverse data modalities. In addition, advanced remote sensing technologies such as hyperspectral imaging and synthetic aperture radar can capture information beyond the visible spectrum. The images captured from these different perspectives are complementary, so analyzing them together offers novel solutions for understanding and describing the world better. The key to integrating these different perspectives is location.

The problem of visual analysis of satellite-to-street-view imagery arises in a variety of real-world applications. Consumers may be interested in determining when and where an image was taken, who is in the image, what the different objects in the depicted scene are, and how they are related to each other. Local government agencies may be interested in using large-scale imagery to automatically obtain and index useful geographic and geological features and their distributions in a region of interest. Economic forecasters might want to estimate how much business a particular retail store conducted over the course of a year by counting cars in its parking lot. The military may want to know the location of terrorist camps or of activities near restricted zones. Relief agencies may be interested in identifying the hardest-hit areas after a natural disaster. Similarly, local businesses may use content statistics to target their marketing based on the ‘where’, ‘what’, and ‘when’ that can be automatically extracted through visual analysis of satellite-to-street-view imagery.

Despite recent advances in computer vision and large-scale indexing techniques, fine-grained fusion of data capturing different views of the same geo-location remains a challenging task. The problem involves identifying, extracting, and indexing geo-informative features; discovering subtle overlapping geo-location cues in wildly diverse visual data; geometric modeling and reasoning; context-based reasoning; and the exploitation and indexing of large-scale aerial and ground imagery. Theoretical foundations from computer graphics, vision, photogrammetry, and robotics can be useful assets in solving the problem. Given the growing availability of geo-referenced images and videos, we feel the time is right to investigate the research challenges and opportunities involved in jointly analyzing images and videos captured by different devices, from wildly varying perspectives, but pointing at the same 3D point in space. Combining this heterogeneous visual data could lead to improved data organization strategies, event understanding systems, and transformative solutions to computer vision challenges. The focus of this workshop, therefore, is to explore techniques that can exploit the rich data provided by converging perspectives: images captured by first-person cameras and aerial images delivered by various air- and space-borne sensors.
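To make the cross-view matching problem concrete, the sketch below frames it as retrieval in a shared embedding space: a ground-level query embedding is compared against a database of geo-tagged aerial-tile embeddings by cosine similarity. This is a minimal illustrative sketch, not a method endorsed by the workshop; the random vectors stand in for features from any learned extractor, and all names are hypothetical.

```python
# Minimal sketch of cross-view geo-localization as embedding retrieval.
# Assumes some feature extractor has already mapped images to vectors.
import numpy as np

def cosine_retrieve(query_vec, ref_vecs, ref_coords):
    """Return the (lat, lon) of the reference whose embedding is most
    similar to the query (cosine similarity over L2-normalized vectors)."""
    q = query_vec / np.linalg.norm(query_vec)
    r = ref_vecs / np.linalg.norm(ref_vecs, axis=1, keepdims=True)
    scores = r @ q                      # one similarity score per aerial tile
    best = int(np.argmax(scores))
    return ref_coords[best], float(scores[best])

# Toy usage: random vectors stand in for learned image features.
rng = np.random.default_rng(0)
aerial_db = rng.normal(size=(1000, 128))                 # geo-tagged aerial tiles
coords = rng.uniform([-90.0, -180.0], [90.0, 180.0], size=(1000, 2))  # (lat, lon)
ground_query = aerial_db[42] + 0.1 * rng.normal(size=128)  # noisy view of tile 42
print(cosine_retrieve(ground_query, aerial_db, coords))
```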

Call for Papers

Important Dates

  • 2016-04-17: Abstract submission deadline

  • 2016-07-01: Registration deadline

  • 2016-07-01: Workshop date

Topics of Interest

This workshop invites contributions in the form of original papers in the following areas:

  • Complex Event Understanding Through Visual Data Fusion

  • Spatiotemporal Integration of Visual Observations

  • First-Person Vision Meets Aerial Vision

  • Registration of Social Network Data with Street View Images

  • Integrating Remote and Proximate Sensing for Land Use/Cover Map Classification

  • Scene Reconstruction from Multi-Dimensional and Multi-View Imagery

  • Understanding and Modeling Uncertainties in Visual and Geospatial Data

  • Semantic Generalization of Visual and Geospatial Data

  • Representation, Indexing, Storage, and Analysis of City-to-Earth Scale Models

  • Automated 3D Modeling Pipelines for Complex Large-Scale Architectures

  • Integrated Processing of Point Clouds, Image, and Video Data

  • Multi-Modal Visual Sensor Data Fusion

  • Design and Development of Architectures that Support Real-Time and Parallel Execution of Algorithms for Earth-Scale Geo-Localization

  • Scene Change Detection and Segment Classification

  • Rendering, Overlay and Visualization of Models, Semantic Labels and Imagery

  • Applications of Visual Analysis and Geo-Localization of Large-Scale Imagery

  • Datasets, Model Validation, Algorithm Testing, and Annotation Techniques

  • Matching Information Derived from Ground-Level Images to Satellite/Aerial Images, GIS Data, and DEM Data (see the sketch after this list)
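For topics such as the last item above, evaluation often comes down to measuring how far a predicted geo-location lies from ground truth. The sketch below uses the standard haversine (great-circle) formula for that purpose; the function name and the 1 km threshold are illustrative choices, not anything prescribed by the workshop.

```python
# Great-circle (haversine) distance between two (lat, lon) points in km,
# useful for scoring geo-localization predictions against ground truth.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in kilometres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: distance between a predicted and a true location.
pred, truth = (40.7484, -73.9857), (40.7527, -73.9772)
d = haversine_km(*pred, *truth)
print(f"error = {d:.3f} km, within 1 km: {d <= 1.0}")
```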
