Enhancing Continuous Authentication in Virtual Reality: The Role of Adaptive Eye-Tracking Models
No. 214 | Access: attendees only | Updated: 2025-12-24 14:19:01

Presentation start: December 30, 2025, 16:45 (Asia/Amman)

Presentation duration: 15 min

Session: [S9] / [S9-2] Track 5: Emerging Trends of AI/ML

No files available yet.

Abstract
Static authentication mechanisms (e.g., passwords, biometrics) verify users only at the start of a session and are therefore vulnerable to session hijacking. Continuous authentication is more secure because it monitors users' behavior throughout a session. Eye tracking is a promising non-intrusive biometric for continuous authentication in virtual reality (VR), since VR headsets provide rich gaze information. Gaze-based models, however, degrade over time as users' behavior patterns change. This paper uses the 26-month GazeBaseVR dataset to compare a Transformer Encoder, DenseNet, and XGBoost for short- and long-term authentication. While the Transformer Encoder and DenseNet reach up to 97% short-term accuracy, accuracy falls to 1.78% after 26 months. Periodically retraining the models on up-to-date gaze data restores accuracy to over 95%. These findings highlight the importance of adaptive learning for sustaining gaze-based authentication over long periods. Future work may optimize the retraining schedule and extend the approach to other behavioral indicators, such as head and hand gestures, to maximize the long-term resilience of VR authentication.
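As a rough illustration of the periodic-retraining idea described in the abstract, the sketch below trains a stand-in classifier on synthetic gaze-style features, lets a simulated behavioral drift degrade its accuracy, and then retrains it on a fresh session. Everything in it is an assumption for illustration: the synthetic make_session generator, the drift schedule, and scikit-learn's GradientBoostingClassifier standing in for the XGBoost, DenseNet, and Transformer Encoder models evaluated in the paper; no GazeBaseVR loading or feature extraction is shown.

# Minimal, illustrative sketch (assumptions): synthetic gaze features stand in
# for GazeBaseVR recordings, and scikit-learn's GradientBoostingClassifier
# stands in for the models compared in the paper; the month/drift schedule is
# made up for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
N_USERS, N_FEATS, SAMPLES_PER_USER = 20, 32, 30

def make_session(user_offsets):
    # One recording session: per-user gaze-feature vectors whose class means
    # are shifted by `user_offsets`, mimicking behavioral change over months.
    X = np.vstack([
        rng.normal(loc=u + off, scale=1.0, size=(SAMPLES_PER_USER, N_FEATS))
        for u, off in enumerate(user_offsets)
    ])
    y = np.repeat(np.arange(N_USERS), SAMPLES_PER_USER)
    return X, y

# Enroll users at month 0 (no behavioral drift yet).
X0, y0 = make_session(np.zeros(N_USERS))
model = GradientBoostingClassifier(n_estimators=30)
model.fit(X0, y0)

# At later months, gaze behavior drifts: the stale model degrades, and
# retraining on a fresh labelled session restores accuracy.
for month, drift in [(1, 0.1), (12, 0.8), (26, 2.0)]:
    offsets = drift * rng.standard_normal(N_USERS)   # per-user behavioral change
    X_new, y_new = make_session(offsets)             # fresh session (retraining data)
    X_eval, y_eval = make_session(offsets)           # held-out session, same period
    stale = accuracy_score(y_eval, model.predict(X_eval))
    model.fit(X_new, y_new)                          # adaptive step: periodic retraining
    fresh = accuracy_score(y_eval, model.predict(X_eval))
    print(f"month {month:2d}: stale accuracy = {stale:.2f}, after retraining = {fresh:.2f}")

In the study itself, XGBoost, DenseNet, or a Transformer Encoder would take the place of the stand-in classifier, and the sessions would come from the actual 26-month GazeBaseVR recordings.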
Keywords
Continuous authentication, Eye-tracking, Virtual reality, Gaze biometrics, Adaptive models, Machine learning
Presenter
KRISHNAKANT Dixit
GLA University

Authors
KRISHNAKANT Dixit, GLA University
Yogendra Kumar, GLA University
Important Dates
  • Conference dates: December 29–31, 2025

  • December 30, 2025: Presentation submission deadline

  • December 30, 2025: Registration deadline

  • December 31, 2025: First-draft submission deadline

Organizer: 国际科学联合会
Host institution: Zarqa University (扎尔卡大学)