A Survey on Explainable Recommendation: Methods, Works, and Challenges
Abstract
Explainable Recommendation Systems (ERS) not only provide suitable recommendations but also accompany them with clear explanations to enhance transparency and user trust. This paper surveys the main explanation approaches in ERS, including model-based, post-hoc, and user-centric methods, and analyzes representative studies that apply SHAP, LIME, PEPLER-D, GaVaMoE, and G-Refer. The survey highlights several critical challenges: limited capability to model complex user preferences, high computational costs when using LLMs, hallucinations in generated explanations, a lack of standardized datasets and quantitative evaluation metrics, and potential risks to user data privacy. To address these issues, potential future directions are proposed, including optimizing computational cost and scalability, ensuring explanation consistency and quality, personalizing explanations for individual users, integrating multiple explanation methods for more comprehensive coverage, and developing privacy-preserving and ethical mechanisms for explainable recommendation. This study provides a systematic overview and offers directions for future research to improve the quality and practical applicability of ERS across various domains.
