Overview

Virtual and augmented reality technologies and their applications in education, engineering, healthcare, and entertainment offer potentially unprecedented benefits to society. It is therefore important for the research community to address social and economic imbalances so that people from diverse backgrounds have comparable opportunities to access and use virtual and augmented reality. While the cost of stationary VR systems is, and will remain for some time, prohibitive for most of the world's population, mobile-based head-mounted displays, carried by the ever-increasing proliferation of mobile devices, are a means of democratizing access to VR and AR.

However, current mobile-based VR systems do not yet deliver a fluid, immersive experience comparable to that of stationary VR systems. While their graphics and audio quality is improving quickly, one technological barrier that remains insufficiently addressed is interaction. Stationary VR systems are often equipped with specialized spatial input devices, such as hand motion and pose tracking sensors, whereas mobile-based HMDs provide, at best, voice input, head orientation, and a touchpad with a button or two. Mini-keyboards, touchpads, game controllers, and other standard input devices may be used in mobile-based VR, but they are neither effective in 3D interaction scenarios nor do they provide the input information necessary for an adequate representation of the user in many application domains. As a consequence, although the promise of virtual and augmented reality is to immerse the user in artificially generated, interactive 3D environments, mobile-based VR enables passive viewership rather than active participation.

The goal of this workshop is to facilitate discussions that identify and categorize available input sources and interaction techniques that enhance the user's immersion in mobile-based VR and AR, and thus positively impact equitable access to VR and AR. Expected contributions may examine the feasibility of using sensor data from low-cost, everyday devices to enable capture of the user's movement, identification and reconstruction of the user's environment, and recognition of the user's gestures, behaviors, poses, facial expressions, gaze direction, or emotional state. The proposed research need not be limited to the existing ecology of everyday devices; it may also uncover novel, low-cost input solutions that utilize EEG and EMG sensors, stretch bands, micro-robots, audio and video input, and intelligent clothing, among other forms of input.
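To make the envisioned feasibility studies concrete, here is a minimal sketch in Python (the 50 Hz sampling rate, window length, and energy threshold are hypothetical values chosen only for illustration) that segments a stream of smartphone accelerometer readings and flags high-energy windows as candidate gestures for a downstream classifier:

    import math
    from collections import deque

    WINDOW = 50             # samples per analysis window (~1 s at an assumed 50 Hz)
    ENERGY_THRESHOLD = 4.0  # mean squared deviation from gravity, in (m/s^2)^2

    def detect_gesture_windows(samples):
        """Yield start indices of windows whose motion energy suggests a gesture.

        `samples` is an iterable of (ax, ay, az) accelerometer readings in m/s^2,
        such as those streamed from a phone's IMU.
        """
        window = deque(maxlen=WINDOW)
        for i, (ax, ay, az) in enumerate(samples):
            # The deviation of acceleration magnitude from gravity isolates motion.
            magnitude = math.sqrt(ax * ax + ay * ay + az * az)
            window.append((magnitude - 9.81) ** 2)
            if len(window) == WINDOW:
                energy = sum(window) / WINDOW
                if energy > ENERGY_THRESHOLD:
                    yield i - WINDOW + 1  # start index of the candidate window

Candidate windows produced this way could then be labeled by a lightweight gesture classifier, which is exactly the kind of low-cost pipeline the workshop hopes to see examined.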

We invite authors to submit position papers, preliminary designs and research results, novel concepts, demos, prototype applications, or case studies. Submission length may vary from 2 to 6 pages (including references).

Call for Papers

Topics of Interest

  • Novel interaction techniques that enable efficient performance of typical 3D interactions, such as navigation, travel, and object manipulation, by utilizing sensors that are already present in everyday devices

  • Low-cost algorithms, including machine learning approaches, for environment reconstruction or for recognition of gestures, facial expressions, emotional states, and atomic activities from sensors (IMU, EEG, Myo, microphone, camera)

  • Multimodal interactions, and sensor fusion and filtering algorithms that combine data from multiple types of sensors or from multiple interconnected everyday devices (see the sketch after this list)

  • Prototype applications and systems in any domain, including but not limited to gaming, education, engineering, and healthcare

  • Evaluation and validation methodologies that focus on ergonomics and reduction of fatigue, as well as on qualitative and quantitative characteristics of interactions with mobile-based VR and AR
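
As a concrete, hedged illustration of the sensor fusion topic above, the following Python sketch implements a textbook complementary filter that blends integrated gyroscope rates (smooth but drifting) with accelerometer tilt (noisy but drift-free) into a single pitch estimate; the blend factor and the axis convention are assumptions made for this example, not a prescribed method:

    import math

    ALPHA = 0.98  # assumed blend factor: trust the gyro short-term, the accelerometer long-term

    def fuse_pitch(gyro_rates, accels, dt):
        """Yield pitch estimates (radians) fused from gyro and accelerometer streams.

        `gyro_rates` holds angular velocities about the device's x-axis (rad/s),
        `accels` holds (ax, ay, az) readings in m/s^2, and `dt` is the sample period.
        """
        pitch = 0.0
        for rate, (ax, ay, az) in zip(gyro_rates, accels):
            # Integrating the gyro is responsive but accumulates drift.
            gyro_pitch = pitch + rate * dt
            # The accelerometer's gravity direction gives a drift-free tilt reference.
            accel_pitch = math.atan2(ay, math.sqrt(ax * ax + az * az))
            # Blend: high-frequency motion from the gyro, low-frequency correction from gravity.
            pitch = ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch
            yield pitch

The same blending idea extends to fusing streams from several interconnected everyday devices, for example a phone's IMU corrected by occasional camera-based pose fixes.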

Important Dates

  • March 19, 2017: Workshop date

  • March 19, 2017: Registration deadline
