Enhancing Continuous Authentication in Virtual Reality: The Role of Adaptive Eye-Tracking Models
Submission No.: 214
Access: Attendees only
Updated: 2025-12-24 14:19:01
Extension Type 2
Abstract
Static authentication mechanisms (e.g., passwords, biometrics) verify users only at the start of a session and are therefore vulnerable to session hijacking. Continuous authentication is more secure because it monitors user behavior throughout the session. Eye-tracking is a promising non-intrusive biometric for continuous authentication in virtual reality (VR), given the rich gaze information VR headsets provide. Gaze-based models, however, degrade over time as users' behavioral patterns change. This paper uses the 26-month GazeBaseVR dataset to compare a Transformer Encoder, DenseNet, and XGBoost for short- and long-term authentication. While the Transformer Encoder and DenseNet achieve up to 97% short-term accuracy, accuracy drops to 1.78% after 26 months. Periodically retraining the models on up-to-date gaze data restores accuracy to above 95%. These findings highlight the importance of adaptive learning for sustaining gaze-based authentication over long periods. Future work may optimize retraining intervals and extend the approach to other behavioral signals, such as head and hand movements, to maximize the resilience of long-term VR authentication.
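The adaptive retraining idea described in the abstract can be illustrated with a minimal, self-contained sketch. The snippet below uses synthetic gaze-style feature vectors and an XGBoost classifier (one of the three compared models); the fixed-length feature layout, the Gaussian drift model, and all parameter values are assumptions for illustration only and do not reproduce the paper's GazeBaseVR preprocessing or evaluation protocol. Training on an enrollment session, evaluating on a drifted later session, and then retraining on recent data mirrors the accuracy-drop-and-recovery pattern reported above.

    # Hypothetical sketch of behavioral drift and periodic retraining for
    # gaze-based user identification. All data is synthetic; the feature
    # vectors and drift model are illustrative assumptions, not the
    # paper's GazeBaseVR pipeline.
    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_users, n_feats, n_samples = 20, 32, 200

    # One stable gaze "template" per user (stand-in for enrollment behavior).
    templates = rng.normal(size=(n_users, n_feats))

    def make_session(drift_scale: float) -> tuple[np.ndarray, np.ndarray]:
        """Draw labeled gaze-feature samples; drift_scale shifts each user's
        template to mimic behavioral change accumulating over months."""
        drift = drift_scale * rng.normal(size=(n_users, n_feats))
        X = np.repeat(templates + drift, n_samples, axis=0)
        X += 0.3 * rng.normal(size=X.shape)           # within-session noise
        y = np.repeat(np.arange(n_users), n_samples)
        return X, y

    # Enroll and test shortly afterwards: little drift, high accuracy.
    X_enroll, y_enroll = make_session(drift_scale=0.0)
    X_short, y_short = make_session(drift_scale=0.05)

    clf = XGBClassifier(n_estimators=100, max_depth=4, verbosity=0)
    clf.fit(X_enroll, y_enroll)
    print("short-term accuracy:", accuracy_score(y_short, clf.predict(X_short)))

    # Much later session: behavioral drift degrades the stale model.
    X_late, y_late = make_session(drift_scale=1.0)
    print("long-term accuracy :", accuracy_score(y_late, clf.predict(X_late)))

    # Adaptive step: retrain on a recent slice of gaze data, then re-evaluate
    # on held-out recent samples.
    X_recent, X_holdout, y_recent, y_holdout = train_test_split(
        X_late, y_late, test_size=0.5, stratify=y_late, random_state=0)
    clf_adapted = XGBClassifier(n_estimators=100, max_depth=4, verbosity=0)
    clf_adapted.fit(X_recent, y_recent)
    print("after retraining   :",
          accuracy_score(y_holdout, clf_adapted.predict(X_holdout)))

In this sketch the stale classifier's accuracy collapses once the per-user templates drift, and a model refit on recent samples recovers it, which is the same qualitative behavior the abstract attributes to periodic retraining.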
Keywords
Continuous authentication, Eye-tracking, Virtual reality, Gaze biometrics, Adaptive models, Machine learning
Authors
Krishnakant Dixit
GLA University
Yogendra Kumar
GLA University