Leveraging Explainable Transformers for Robust Financial Time Series Forecasting
Paper No.: 204
Abstract
Financial time series forecasting is essential for making well-informed financial decisions. Traditional models such as autoregressive integrated moving average (ARIMA) and long short-term memory (LSTM) networks often fail to capture intricate temporal dependencies and to produce interpretable predictions. To improve both the accuracy and the interpretability of financial time series forecasting, this study introduces a new transformer-based method. The transformer model achieves stronger predictive performance by modeling long-range dependencies in financial data through self-attention mechanisms. To address the black-box nature of deep learning models, the study employs explainability methods such as SHAP (SHapley Additive exPlanations) and attention heatmaps to clarify the model's decision process. Extensive experiments on diverse financial datasets show that the proposed transformer model is more robust and accurate than baseline methods: it attains higher R-squared values and lower mean absolute percentage error (MAPE) across a variety of financial assets, demonstrating resilience to market fluctuations. The explainability framework also identifies important predictors, offering valuable insights to financial analysts and decision-makers. By combining state-of-the-art transformer models with interpretability, this research advances the body of work on financial AI and delivers credible, transparent financial forecasting.
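The approach summarized above lends itself to a brief illustration. Below is a minimal sketch, in PyTorch, of a transformer encoder applied to next-step financial forecasting, together with the MAPE metric the abstract reports. The class name TSTransformer, the window length, and all hyperparameters are assumptions made for illustration; they do not reflect the authors' actual implementation.

import torch
import torch.nn as nn

class TSTransformer(nn.Module):
    # Minimal transformer encoder for next-step forecasting (illustrative sketch only).
    def __init__(self, n_features=1, d_model=64, nhead=4, num_layers=2, window=30):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)   # embed each time step
        # Learned positional embeddings so self-attention can exploit temporal order.
        self.pos_emb = nn.Parameter(torch.randn(1, window, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)                  # scalar regression head

    def forward(self, x):
        # x: (batch, window, n_features) -> one forecast per window
        h = self.encoder(self.input_proj(x) + self.pos_emb)
        return self.head(h[:, -1, :]).squeeze(-1)          # read out the final position

def mape(y_true, y_pred):
    # Mean absolute percentage error, one of the metrics cited in the abstract.
    return (torch.abs((y_true - y_pred) / y_true)).mean() * 100

model = TSTransformer()
windows = torch.randn(8, 30, 1)   # eight dummy 30-step windows
print(model(windows).shape)       # torch.Size([8])

A model-agnostic SHAP explanation can then be attached to such a model roughly as follows. shap.KernelExplainer is only one possible choice, and the placeholder data, the reshaping convention, and the names predict_fn, X_train, and X_test are hypothetical.

import numpy as np
import shap

window, n_features = 30, 1
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, window, n_features)).astype("float32")  # placeholder data
X_test = rng.normal(size=(5, window, n_features)).astype("float32")

model.eval()

def predict_fn(flat_windows):
    # KernelExplainer passes 2-D numpy arrays; restore (batch, window, features).
    x = torch.as_tensor(flat_windows, dtype=torch.float32).reshape(-1, window, n_features)
    with torch.no_grad():
        return model(x).numpy()

background = X_train[:50].reshape(50, -1)   # small background set keeps KernelSHAP tractable
explainer = shap.KernelExplainer(predict_fn, background)
shap_values = explainer.shap_values(X_test.reshape(5, -1), nsamples=100)
print(np.asarray(shap_values).shape)        # one attribution per time step per window

Each attribution vector can be reshaped back to (window, n_features) and rendered as a heatmap, mirroring the attention-heatmap style of explanation the abstract describes.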
Keywords
Financial time series, Transformer model, Explainability, SHAP, Market prediction, Self-attention
Authors
Rakesh Kumar
GLA University
Ashish Sharma
GLA University