Overview
We now live in a world that can be thought of as a massive heterogeneous visual sensor network, with devices ranging from first-person vision systems to unmanned aerial imaging platforms capable of sensing the world around us from wildly different perspectives. The images captured from these different perspectives are complementary, so fusing them is a natural approach to improving vision-based object and event understanding. For example, while a wearable camera mounted on a subject can capture an event closely and in great detail, an aerial imaging system observing the event from above can provide the geospatial context necessary to interpret it. The key to integrating these different perspectives—to get them to converge—is, of course, location. Given the growing availability of geo-referenced images and videos, we believe the time is right to investigate the research challenges and opportunities involved in jointly analyzing images and videos of the same location captured by different devices and from wildly varying perspectives. Combining this heterogeneous visual data could lead to improved data organization strategies, better event understanding systems, and transformative solutions for computer vision challenges. The focus of this workshop is therefore to explore techniques that can exploit the rich data provided by converging perspectives, such as first-person cameras, street-view sensors, mobile phones, and aerial images.
Call for Papers

Topics of Interest

This workshop invites contributions in the form of original papers in the following areas:

  • Complex event understanding through visual data fusion

  • Advanced image indexing and retrieval

  • Spatio-temporal integration of visual observations

  • First-person vision
Important Dates

  • December 8, 2013

    Workshop date

  • December 8, 2013

    Registration deadline

Organizer
IEEE Computer Society